Chapter 41. PodDisruptionBudgetTemplate schema reference
Chapter 41. PodDisruptionBudgetTemplate schema reference Used in: CruiseControlTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaMirrorMakerTemplate , ZookeeperClusterTemplate Full list of PodDisruptionBudgetTemplate schema properties A PodDisruptionBudget (PDB) is an OpenShift resource that ensures high availability by specifying the minimum number of pods that must be available during planned maintenance or upgrades. AMQ Streams creates a PDB for every new StrimziPodSet or Deployment . By default, the PDB allows only one pod to be unavailable at any given time. You can increase the number of unavailable pods allowed by changing the default value of the maxUnavailable property. StrimziPodSet custom resources manage pods using a custom controller that cannot use the maxUnavailable value directly. Instead, the maxUnavailable value is automatically converted to a minAvailable value when creating the PDB resource, which effectively serves the same purpose, as illustrated in the following examples: If there are three broker pods and the maxUnavailable property is set to 1 in the Kafka resource, the minAvailable setting is 2 , allowing one pod to be unavailable. If there are three broker pods and the maxUnavailable property is set to 0 (zero), the minAvailable setting is 3 , requiring all three broker pods to be available and allowing zero pods to be unavailable. Example PodDisruptionBudget template configuration # ... template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1 # ... 41.1. PodDisruptionBudgetTemplate schema properties Property Description metadata Metadata to apply to the PodDisruptionBudgetTemplate resource. MetadataTemplate maxUnavailable Maximum number of unavailable pods to allow automatic Pod eviction. A Pod eviction is allowed when the maxUnavailable number of pods or fewer are unavailable after the eviction. Setting this value to 0 prevents all voluntary evictions, so the pods must be evicted manually. Defaults to 1. integer
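To see the conversion in practice, you can inspect the PodDisruptionBudget that AMQ Streams generates. The following check is a minimal sketch, assuming a Kafka cluster named my-cluster in the namespace kafka; the PDB name and labels can differ between versions, so adjust them to your deployment.

# List PDBs created for the cluster and show the computed minAvailable value.
oc get poddisruptionbudget -n kafka -l strimzi.io/cluster=my-cluster
oc get poddisruptionbudget my-cluster-kafka -n kafka -o jsonpath='{.spec.minAvailable}'

With three broker pods and maxUnavailable: 1 in the template, the second command should print 2, matching the first example above.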
[ "template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-poddisruptionbudgettemplate-reference
Chapter 6. Getting Started with OptaPlanner and Quarkus
Chapter 6. Getting Started with OptaPlanner and Quarkus You can use the https://code.quarkus.redhat.com website to generate a Red Hat build of OptaPlanner Quarkus Maven project and automatically add and configure the extensions that you want to use in your application. You can then download the Quarkus Maven repository or use the online Maven repository with your project. 6.1. Apache Maven and Red Hat build of Quarkus Apache Maven is a distributed build automation tool used in Java application development to create, manage, and build software projects. Maven uses standard configuration files called Project Object Model (POM) files to define projects and manage the build process. POM files describe the module and component dependencies, build order, and targets for the resulting project packaging and output using an XML file. This ensures that the project is built in a correct and uniform manner. Maven repositories A Maven repository stores Java libraries, plug-ins, and other build artifacts. The default public repository is the Maven 2 Central Repository, but repositories can be private and internal within a company to share common artifacts among development teams. Repositories are also available from third parties. You can use the online Maven repository with your Quarkus projects or you can download the Red Hat build of Quarkus Maven repository. Maven plug-ins Maven plug-ins are defined parts of a POM file that achieve one or more goals. Quarkus applications use the following Maven plug-ins: Quarkus Maven plug-in ( quarkus-maven-plugin ): Enables Maven to create Quarkus projects, supports the generation of uber-JAR files, and provides a development mode. Maven Surefire plug-in ( maven-surefire-plugin ): Used during the test phase of the build lifecycle to execute unit tests on your application. The plug-in generates text and XML files that contain the test reports. 6.1.1. Configuring the Maven settings.xml file for the online repository You can use the online Maven repository with your Maven project by configuring your user settings.xml file. This is the recommended approach. Maven settings used with a repository manager or repository on a shared server provide better control and manageability of projects. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Procedure Open the Maven ~/.m2/settings.xml file in a text editor or integrated development environment (IDE). Note If there is not a settings.xml file in the ~/.m2/ directory, copy the settings.xml file from the USDMAVEN_HOME/.m2/conf/ directory into the ~/.m2/ directory. Add the following lines to the <profiles> element of the settings.xml file: <!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> 6.1.2. 
Downloading and configuring the Quarkus Maven repository If you do not want to use the online Maven repository, you can download and configure the Quarkus Maven repository to create a Quarkus application with Maven. The Quarkus Maven repository contains many of the requirements that Java developers typically use to build their applications. This procedure describes how to edit the settings.xml file to configure the Quarkus Maven repository. Note When you configure the repository by modifying the Maven settings.xml file, the changes apply to all of your Maven projects. Procedure Download the Red Hat build of Quarkus Maven repository ZIP file from the Software Downloads page of the Red Hat Customer Portal (login required). Expand the downloaded archive. Change directory to the ~/.m2/ directory and open the Maven settings.xml file in a text editor or integrated development environment (IDE). Add the following lines to the <profiles> element of the settings.xml file, where QUARKUS_MAVEN_REPOSITORY is the path of the Quarkus Maven repository that you downloaded. The format of QUARKUS_MAVEN_REPOSITORY must be file://USDPATH , for example file:///home/userX/rh-quarkus-2.13.GA-maven-repository/maven-repository . <!-- Configure the Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> Add the following lines to the <activeProfiles> element of the settings.xml file and save the file. <activeProfile>red-hat-enterprise-maven-repository</activeProfile> Important If your Maven repository contains outdated artifacts, you might encounter one of the following Maven error messages when you build or deploy your project, where ARTIFACT_NAME is the name of a missing artifact and PROJECT_NAME is the name of the project you are trying to build: Missing artifact PROJECT_NAME [ERROR] Failed to execute goal on project ARTIFACT_NAME ; Could not resolve dependencies for PROJECT_NAME To resolve the issue, delete the cached version of your local repository located in the ~/.m2/repository directory to force a download of the latest Maven artifacts. 6.2. Creating an OptaPlanner Red Hat build of Quarkus Maven project using the Maven plug-in You can get up and running with a Red Hat build of OptaPlanner and Quarkus application using Apache Maven and the Quarkus Maven plug-in. Prerequisites OpenJDK 11 or later is installed. Red Hat build of Open JDK is available from the Software Downloads page in the Red Hat Customer Portal (login required). Apache Maven 3.6 or higher is installed. Maven is available from the Apache Maven Project website. Procedure In a command terminal, enter the following command to verify that Maven is using JDK 11 and that the Maven version is 3.6 or higher: If the preceding command does not return JDK 11, add the path to JDK 11 to the PATH environment variable and enter the preceding command again. 
To generate a Quarkus OptaPlanner quickstart project, enter the following command: mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create \ -DprojectGroupId=com.example \ -DprojectArtifactId=optaplanner-quickstart \ -Dextensions="resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson" \ -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 \ -DnoExamples This command creates the following elements in the ./optaplanner-quickstart directory: The Maven structure Example Dockerfile file in src/main/docker The application configuration file Table 6.1. Properties used in the mvn io.quarkus:quarkus-maven-plugin:2.13.Final-redhat-00006:create command Property Description projectGroupId The group ID of the project. projectArtifactId The artifact ID of the project. extensions A comma-separated list of Quarkus extensions to use with this project. For a full list of Quarkus extensions, enter mvn quarkus:list-extensions on the command line. noExamples Creates a project with the project structure but without tests or classes. The values of the projectGroupId and the projectArtifactId properties are used to generate the project version. The default project version is 1.0.0-SNAPSHOT . To view your OptaPlanner project, change directory to the OptaPlanner Quickstarts directory: Review the pom.xml file. The content should be similar to the following example: 6.3. Creating a Red Hat build of Quarkus Maven project using code.quarkus.redhat.com You can use the code.quarkus.redhat.com website to generate a Red Hat build of OptaPlanner Quarkus Maven project and automatically add and configure the extensions that you want to use in your application. In addition, code.quarkus.redhat.com automatically manages the configuration parameters required to compile your project into a native executable. This section walks you through the process of generating an OptaPlanner Maven project and includes the following topics: Specifying basic details about your application. Choosing the extensions that you want to include in your project. Generating a downloadable archive with your project files. Using the custom commands for compiling and starting your application. Prerequisites You have a web browser. Procedure Open https://code.quarkus.redhat.com in your web browser: Specify details about your project: Enter a group name for your project. The format of the name follows the Java package naming convention, for example, com.example . Enter a name that you want to use for Maven artifacts generated from your project, for example code-with-quarkus . Select Build Tool > Maven to specify that you want to create a Maven project. The build tool that you choose determines the following items: The directory structure of your generated project The format of configuration files used in your generated project The custom build script and command for compiling and starting your application that code.quarkus.redhat.com displays for you after you generate your project Note Red Hat provides support for using code.quarkus.redhat.com to create OptaPlanner Maven projects only. Generating Gradle projects is not supported by Red Hat. Enter a version to be used in artifacts generated from your project. The default value of this field is 1.0.0-SNAPSHOT . Using semantic versioning is recommended, but you can use a different type of versioning if you prefer. Enter the package name of artifacts that the build tool generates when you package your project.
According to the Java package naming conventions, the package name should match the group name that you use for your project, but you can specify a different name. Note The code.quarkus.redhat.com website automatically uses the latest release of OptaPlanner. You can manually change the BOM version in the pom.xml file after you generate your project. Select the following extensions to include as dependencies: RESTEasy JAX-RS (quarkus-resteasy) RESTEasy Jackson (quarkus-resteasy-jackson) OptaPlanner AI constraint solver (optaplanner-quarkus) OptaPlanner Jackson (optaplanner-quarkus-jackson) Red Hat provides different levels of support for individual extensions on the list, which are indicated by labels next to the name of each extension: SUPPORTED extensions are fully supported by Red Hat for use in enterprise applications in production environments. TECH-PREVIEW extensions are subject to limited support by Red Hat in production environments under the Technology Preview Features Support Scope . DEV-SUPPORT extensions are not supported by Red Hat for use in production environments, but the core functionalities that they provide are supported by Red Hat developers for use in developing new applications. DEPRECATED extensions are planned to be replaced with a newer technology or implementation that provides the same functionality. Unlabeled extensions are not supported by Red Hat for use in production environments. Select Generate your application to confirm your choices and display the overlay screen with the download link for the archive that contains your generated project. The overlay screen also shows the custom command that you can use to compile and start your application. Select Download the ZIP to save the archive with the generated project files to your system. Extract the contents of the archive. Navigate to the directory that contains your extracted project files: cd <directory_name> Compile and start your application in development mode: ./mvnw compile quarkus:dev 6.4. Creating a Red Hat build of Quarkus Maven project using the Quarkus CLI You can use the Quarkus command line interface (CLI) to create a Quarkus OptaPlanner project. Prerequisites You have installed the Quarkus CLI. For information, see Building Quarkus Apps with Quarkus Command Line Interface . Procedure Create a Quarkus application: To view the available extensions, enter the following command: This command returns the following extensions: Enter the following command to add extensions to the project's pom.xml file: Open the pom.xml file in a text editor. The contents of the file should look similar to the following example:
[ "<!-- Configure the Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>", "<activeProfile>red-hat-enterprise-maven-repository</activeProfile>", "<!-- Configure the Quarkus Maven repository --> <profile> <id>red-hat-enterprise-maven-repository</id> <repositories> <repository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-enterprise-maven-repository</id> <url> QUARKUS_MAVEN_REPOSITORY </url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>", "<activeProfile>red-hat-enterprise-maven-repository</activeProfile>", "mvn --version", "mvn com.redhat.quarkus.platform:quarkus-maven-plugin:2.13.Final-redhat-00006:create -DprojectGroupId=com.example -DprojectArtifactId=optaplanner-quickstart -Dextensions=\"resteasy,resteasy-jackson,optaplanner-quarkus,optaplanner-quarkus-jackson\" -DplatformGroupId=com.redhat.quarkus.platform -DplatformVersion=2.13.Final-redhat-00006 -DnoExamples", "cd optaplanner-quickstart", "<dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>quarkus-optaplanner-bom</artifactId> <version>2.13.Final-redhat-00006</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> </dependencies>", "cd <directory_name>", "./mvnw compile quarkus:dev", "quarkus create app -P io.quarkus:quarkus-bom:2.13.Final-redhat-00006", "quarkus ext -i", "optaplanner-quarkus optaplanner-quarkus-benchmark optaplanner-quarkus-jackson optaplanner-quarkus-jsonb", "quarkus ext add resteasy-jackson quarkus ext add optaplanner-quarkus quarkus ext add optaplanner-quarkus-jackson", "<?xml version=\"1.0\"?> <project xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd\" xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"> <modelVersion>4.0.0</modelVersion> <groupId>org.acme</groupId> <artifactId>code-with-quarkus-optaplanner</artifactId> 
<version>1.0.0-SNAPSHOT</version> <properties> <compiler-plugin.version>3.8.1</compiler-plugin.version> <maven.compiler.parameters>true</maven.compiler.parameters> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id> <quarkus.platform.version>2.13.Final-redhat-00006</quarkus.platform.version> <surefire-plugin.version>3.0.0-M5</surefire-plugin.version> </properties> <dependencyManagement> <dependencies> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>USD{quarkus.platform.artifact-id}</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>io.quarkus.platform</groupId> <artifactId>optaplanner-quarkus</artifactId> <version>2.2.2.Final</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-arc</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus</artifactId> </dependency> <dependency> <groupId>org.optaplanner</groupId> <artifactId>optaplanner-quarkus-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jackson</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>io.rest-assured</groupId> <artifactId>rest-assured</artifactId> <scope>test</scope> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-maven-plugin</artifactId> <version>USD{quarkus.platform.version}</version> <extensions>true</extensions> <executions> <execution> <goals> <goal>build</goal> <goal>generate-code</goal> <goal>generate-code-tests</goal> </goals> </execution> </executions> </plugin> <plugin> <artifactId>maven-compiler-plugin</artifactId> <version>USD{compiler-plugin.version}</version> <configuration> <parameters>USD{maven.compiler.parameters}</parameters> </configuration> </plugin> <plugin> <artifactId>maven-surefire-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <configuration> <systemPropertyVariables> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </plugin> </plugins> </build> <profiles> <profile> <id>native</id> <activation> <property> <name>native</name> </property> </activation> <build> <plugins> <plugin> <artifactId>maven-failsafe-plugin</artifactId> <version>USD{surefire-plugin.version}</version> <executions> <execution> <goals> <goal>integration-test</goal> <goal>verify</goal> </goals> <configuration> <systemPropertyVariables> <native.image.path>USD{project.build.directory}/USD{project.build.finalName}-runner</native.image.path> <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager> <maven.home>USD{maven.home}</maven.home> </systemPropertyVariables> </configuration> </execution> </executions> 
</plugin> </plugins> </build> <properties> <quarkus.package.type>native</quarkus.package.type> </properties> </profile> </profiles> </project>" ]
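Before or after generating the project, you may want to confirm that the Red Hat repository profile from Section 6.1.1 or 6.1.2 is actually active. The following checks are a sketch that assumes the profile ID red-hat-enterprise-maven-repository used in the examples above; both goals come from the standard Maven Help plug-in.

# Show which profiles Maven considers active for the current build.
mvn help:active-profiles

# Print the merged settings and look for the Red Hat repository entry.
mvn help:effective-settings | grep -B1 -A3 red-hat-enterprise-maven-repository

If the profile or repository URL does not appear, re-check the <activeProfiles> element in your settings.xml file.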
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/optaplanner-quarkus-con_getting-started-optaplanner
Appendix A. Using LDAP Client Tools
Appendix A. Using LDAP Client Tools Red Hat Directory Server uses the LDAP tools (such as ldapsearch and ldapmodify ) supplied with OpenLDAP. The OpenLDAP tool options are described in the OpenLDAP man pages at http://www.openldap.org/software/man.cgi . This appendix gives some common usage scenarios and examples for using these LDAP tools. More extensive examples for using ldapsearch are given in Chapter 14, Finding Directory Entries . More examples for using ldapmodify and ldapdelete are given in Chapter 3, Managing Directory Entries . A.1. Running Extended Operations Red Hat Directory Server supports a variety of extended operations, especially extended search operations. An extended operation passes an additional operation (such as a get effective rights search or server-side sort) along with the LDAP operation. Likewise, LDAP clients have the potential to support a number of extended operations. The OpenLDAP LDAP tools support extended operations in two ways. All client tools ( ldapmodify , ldapsearch , and the others) use either the -e or -E options to send an extended operation. The -e argument can be used with any OpenLDAP client tool and sends general instructions about the operation, like how to handle password policies. The -E option is used only with ldapsearch and passes more useful controls like GER searches, sort and page information, and information for other, not-explicitly-supported extended operations. Additionally, OpenLDAP has another tool, ldapexop , which is used exclusively to perform extended search operations, the same as running ldapsearch -E . The format of an extended operation with ldapsearch is generally: When an extended operation is explicitly handled by the OpenLDAP tools, then the extended_operation_type can be an alias, like deref for a dereference search or sss for server-side sorting. A supported extended operation has formatted output. Other extended operations, like GER searches, are passed using their OID rather than an alias, and then the extended_operation_type is the OID. For those unsupported operations, the tool does not recognize the response from the server, so the output is unformatted. For example, the pg extended operation type formats the results in simple pages: The same operation with ldapexop can be run using only the OID of the simple paged results operation and the operation's settings (3 results per page): However, ldapexop does not accept the same range of search parameters that ldapsearch does, making it less flexible.
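As a further illustration of the -E syntax, the following sketch uses the sss alias for server-side sorting that is mentioned above. The bind DN, search base, and sort attribute are placeholders; substitute values from your own directory.

# Sort the results on the server by surname before returning them.
ldapsearch -x -D "cn=Directory Manager" -W \
  -b "ou=People,dc=example,dc=com" \
  -E sss=sn "(objectclass=person)" cn sn

Because sss is an alias that the tool recognizes, the output is formatted in the usual LDIF style.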
[ "-E extended_operation_type = operation_parameters", "ldapsearch -x -D \"cn=Directory Manager\" -W -b \"ou=Engineers,ou=People,dc=example,dc=com\" -E pg=3 \"(objectclass=*)\" cn dn: uid=jsmith,ou=Engineers,ou=People,dc=example,dc=com cn: John Smith dn: uid=bjensen,ou=Engineers,ou=People,dc=example,dc=com cn: Barbara Jensen dn: uid=hmartin,ou=Engineers,ou=People,dc=example,dc=com cn: Henry Martin Results are sorted. next page size (3): 5", "ldapexop 1.2.840.113556.1.4.319=3" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/ldap-tools-examples
15.11. Displaying Network I/O
15.11. Displaying Network I/O To view the network I/O for all virtual machines on your system: Make sure that the Network I/O statistics collection is enabled. To do this, from the Edit menu, select Preferences and click the Stats tab. Select the Network I/O check box. Figure 15.27. Enabling Network I/O To display the Network I/O statistics, from the View menu, select Graph , then the Network I/O check box. Figure 15.28. Selecting Network I/O The Virtual Machine Manager shows a graph of Network I/O for all virtual machines on your system. Figure 15.29. Displaying Network I/O
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-managing_guests_with_the_virtual_machine_manager_virt_manager-displaying_network_io
3.8. VDB Dependencies
3.8. VDB Dependencies When deploying a virtual database (VDB) in JBoss Data Virtualization, you also have to provide dependency libraries and configuration settings for accessing the physical data sources used by your VDB. (You can identify all dependent physical data sources by looking in META-INF/vdb.xml within the EAP_HOME/MODE/deployments/DATABASE.vdb file.) For example, if you are trying to integrate Oracle and file sources in your VDB, then you are responsible for providing both the JDBC driver for the Oracle source, and any necessary documents and configuration files that are needed by the file translator. Data source instances may be shared between multiple VDBs and applications. Consider sharing connections to sources that are heavy-weight and resource-constrained. Once you have deployed the VDB and its dependencies, client applications can connect using the JDBC API. If there are any errors in the deployment, the connection attempt will fail and a message will be logged. Use the Management Console (or check the log files) to identify any errors and correct them so you can proceed. See Red Hat JBoss Data Virtualization Development Guide: Server Development for information on how to use JDBC to connect to your VDB. Warning Some data source configuration files may contain passwords or other sensitive information. For instructions on how to avoid storing passwords in plaintext, refer to the JBoss Enterprise Application Platform Security Guide .
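For example, one quick way to spot deployment problems is to follow the server log while the VDB deploys. This is only a sketch: it assumes a standalone-mode installation under EAP_HOME, and the filter pattern is illustrative.

# Watch the log for VDB or Teiid-related errors during deployment.
tail -f EAP_HOME/standalone/log/server.log | grep -iE 'vdb|teiid|error'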
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/vdb_dependencies
Chapter 2. Creating definitions
Chapter 2. Creating definitions When creating an automated rule definition, you can configure numerous options. Cryostat uses an automated rule to apply rules to any JVM targets that match regular expressions defined in the matchExpression string expression. You can apply Red Hat OpenShift labels or annotations as criteria for a matchExpression definition. After you specify a rule definition for your automated rule, you do not need to re-add or restart matching targets. If you have defined matching targets, you can immediately activate a rule definition. If you want to reuse an existing automated rule definition, you can upload your definition in JSON format to Cryostat. 2.1. Enabling or disabling existing automated rules You can enable or disable existing automated rules by using a toggle switch on the Cryostat web console. Prerequisites Logged in to the Cryostat web console. Created an automated rule. Procedure From the Cryostat web console, click Automated Rules . The Automated Rules window opens and displays your automated rule in a table. Figure 2.1. Example of match expression output from completing an automated rule In the Enabled column, view the Enabled status of the listed automated rules. Depending on the status, choose one of the following actions: To enable the automated rule, click the toggle switch to On . Cryostat immediately evaluates each application that you defined in the automated rule against its match expression. If a match expression applies to an application, Cryostat starts a JFR recording that monitors the performance of the application. To disable the automated rule, click the toggle switch to Off . The Disable your Automated Rule window opens. To disable the selected automated rule, click Disable . If you want to also stop any active recordings that were created by the selected rule, select Clean then click Disable . 2.2. Creating an automated rule definition While creating an automated rule on the Cryostat web console, you can specify the match expression that Cryostat uses to select all the applications. Then, Cryostat starts a new recording by using a JFR event template that was defined by the rule. If you previously created an automated rule and Cryostat identifies a new target application, Cryostat tests if the new application instance matches the expression and starts a new recording by using the associated event template. Prerequisites Created a Cryostat instance in your Red Hat OpenShift project. Created a Java application. Installed Cryostat 2.4 on Red Hat OpenShift by using the OperatorHub option. Logged in to your Cryostat web console. Procedure In the navigation menu on the Cryostat web console, click Automated Rules . The Automated Rules window opens. Click Create . A Create window opens. Figure 2.2. The Create window (Graph View) for an automated rule Enter a rule name in the Name field. In the Match Expression field, specify the match expression details. Note Select the question mark icon to view suggested syntax in a Match Expression Hint snippet. In the Match Expression Visualizer panel, the Graph View option highlights the target JVMs that are matched. Unmatched target JVMs are greyed out. Optional: In the Match Expression Visualizer panel, you can also click List View , which displays the matched target JVMs as expandable rows. Figure 2.3. The Create window (List View) for an automated rule From the Template list, select an event template. To create your automated rule, click Create . 
The Automated Rules window opens and displays your automated rule in a table. Figure 2.4. Example of match expression output from completing an automated rule If a match expression applies to an application, Cryostat starts a JFR recording that monitors the performance of the application. Optional: You can download an automated rule by clicking Download from the automated rule's overflow menu. You can then configure a rule definition in your preferred text editor or make additional copies of the file on your local file system. 2.3. Cryostat Match Expression Visualizer panel You can use the Match Expression Visualizer panel on the web console to view information in a JSON structure for your selected target JVM application. You can choose to display the information in a Graph View or a List View mode. The Graph View highlights the target JVMs that are matched. Unmatched target JVMs are greyed out. The List View displays the matched target JVMs as expandable rows. To view details about a matched target JVM, select the target JVM that is highlighted. In the window that appears, information specific to the metadata for your application is shown in the Details tab. You can use any of this information as syntax in your match expression. A match expression is a rule definition parameter that you can specify for your automated rule. After you specify match expressions and create the automated rule, Cryostat immediately evaluates each application that you defined in the automated rule against its match expression. If a match expression applies to an application, Cryostat starts a JFR recording that monitors the performance of the application. 2.4. Uploading an automated rule in JSON You can reuse an existing automated rule by uploading it to the Cryostat web console, so that you can quickly start monitoring a running Java application. Prerequisites Created a Cryostat instance in your project. See Installing Cryostat on OpenShift using an operator (Installing Cryostat). Created a Java application. Created an automated rules file in JSON format. Logged in to your Cryostat web console. Procedure In the navigation menu on the Cryostat web console, click Automated Rules . The Automated Rules window opens. Click the file upload icon, which is located beside the Create button. Figure 2.5. The automated rules upload button The Upload Automated Rules window opens. Click Upload and locate your automated rules files on your local system. You can upload one or more files to Cryostat. Alternatively, you can drag files from your file explorer tool and drop them into the JSON File field on your web console. Important The Upload Automated Rules function only accepts files in JSON format. Figure 2.6. A window prompt where you can upload JSON files that contain your automated rules configuration Optional: If you need to remove a file from the Upload Automated Rules function, click the X icon on the selected file. Figure 2.7. Example of uploaded JSON files Click Submit . 2.5. Metadata labels When you create an automated rule to enable JFR to continuously monitor a running target application, the automated rule automatically generates a metadata label. This metadata label indicates the name of the automated rule that generates the JFR recording. After you archive the recording, you can run a query on the metadata label to locate the automated rule that generated the recording. Cryostat preserves metadata labels for the automated rule in line with the lifetime of the archived recording.
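For reference, a rule definition file that you upload might look like the following sketch. The field names reflect a typical Cryostat rule document, but the exact set can vary between releases, and the rule name, match expression, and event template shown here are purely illustrative.

# Write a hypothetical rule definition to a local file for upload.
cat > example-rule.json <<'EOF'
{
  "name": "example-rule",
  "description": "Continuous monitoring for a demo application",
  "matchExpression": "target.alias == 'my-quarkus-app'",
  "eventSpecifier": "template=Continuous,type=TARGET",
  "archivalPeriodSeconds": 300,
  "preservedArchives": 3,
  "enabled": true
}
EOF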
Additional resources Creating definitions Archiving JDK Flight Recorder (JFR) recordings (Using Cryostat to manage a JFR recording)
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_automated_rules_on_cryostat/assembly_creating-definitions_con_overview-automated-rules
Chapter 17. Impersonating the system:admin user
Chapter 17. Impersonating the system:admin user 17.1. API impersonation You can configure a request to the OpenShift Container Platform API to act as though it originated from another user. For more information, see User impersonation in the Kubernetes documentation. 17.2. Impersonating the system:admin user You can grant a user permission to impersonate system:admin , which grants them cluster administrator permissions. Procedure To grant a user permission to impersonate system:admin , run the following command: USD oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username> Tip You can alternatively apply the following YAML to grant permission to impersonate system:admin : apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username> 17.3. Impersonating the system:admin group When a system:admin user is granted cluster administration permissions through a group, you must include the --as=<user> --as-group=<group1> --as-group=<group2> parameters in the command to impersonate the associated groups. Procedure To grant a user permission to impersonate a system:admin by impersonating the associated cluster administration groups, run the following command: USD oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> \ --as-group=<group1> --as-group=<group2>
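After the binding exists, the user can impersonate system:admin on individual requests with the --as flag. A brief sketch; the resource queried is arbitrary.

# Run a request as system:admin (requires the sudoer binding created above).
oc get nodes --as=system:admin

# Confirm which identity the API server sees for the impersonated request.
oc whoami --as=system:admin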
[ "oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --user=<username>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: <any_valid_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: sudoer subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: <username>", "oc create clusterrolebinding <any_valid_name> --clusterrole=sudoer --as=<user> --as-group=<group1> --as-group=<group2>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/authentication_and_authorization/impersonating-system-admin
Chapter 106. KafkaUserScramSha512ClientAuthentication schema reference
Chapter 106. KafkaUserScramSha512ClientAuthentication schema reference Used in: KafkaUserSpec The type property is a discriminator that distinguishes use of the KafkaUserScramSha512ClientAuthentication type from KafkaUserTlsClientAuthentication and KafkaUserTlsExternalClientAuthentication . It must have the value scram-sha-512 for the type KafkaUserScramSha512ClientAuthentication . Property Property type Description password Password Specify the password for the user. If not set, a new password is generated by the User Operator. type string Must be scram-sha-512 .
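For context, a minimal KafkaUser resource that uses this authentication type might look like the following sketch. The namespace, user name, and the strimzi.io/cluster label value are assumptions for illustration; point them at your own Kafka cluster.

cat <<EOF | oc apply -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
EOF

Because no password is set, the User Operator generates one and typically stores it in a Secret named after the user.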
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkauserscramsha512clientauthentication-reference
20.2. Types
20.2. Types The main permission control method used in SELinux targeted policy to provide advanced process isolation is Type Enforcement. All files and processes are labeled with a type: types define a SELinux domain for processes and a SELinux type for files. SELinux policy rules define how types access each other, whether it be a domain accessing a type, or a domain accessing another domain. Access is only allowed if a specific SELinux policy rule exists that allows it. The following types are used with mysqld . Different types allow you to configure flexible access: mysqld_db_t This type is used for the location of the MariaDB database. In Red Hat Enterprise Linux, the default location for the database is the /var/lib/mysql/ directory; however, this can be changed. If the location for the MariaDB database is changed, the new location must be labeled with this type. See the example in Section 20.4.1, "MariaDB Changing Database Location" for instructions on how to change the default database location and how to label the new location appropriately. mysqld_etc_t This type is used for the MariaDB main configuration file /etc/my.cnf and any other configuration files in the /etc/mysql/ directory. mysqld_exec_t This type is used for the mysqld binary located at /usr/libexec/mysqld , which is the default location for the MariaDB binary on Red Hat Enterprise Linux. Other systems may locate this binary at /usr/sbin/mysqld , which should also be labeled with this type. mysqld_unit_file_t This type is used for executable MariaDB-related files located in the /usr/lib/systemd/system/ directory by default in Red Hat Enterprise Linux. mysqld_log_t Logs for MariaDB need to be labeled with this type for proper operation. All log files in the /var/log/ directory matching the mysql.* wildcard must be labeled with this type. mysqld_var_run_t This type is used by files in the /var/run/mariadb/ directory, specifically the process ID (PID) file /var/run/mariadb/mariadb.pid , which is created by the mysqld daemon when it runs. This type is also used for related socket files such as /var/lib/mysql/mysql.sock . Files such as these must be labeled correctly for proper operation as a confined service.
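As a brief illustration of relabeling a relocated database directory with mysqld_db_t , the following sketch uses a hypothetical /srv/mysql path; Section 20.4.1 covers the full procedure.

# Persistently map the new location to mysqld_db_t, then apply the label.
semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?"
restorecon -R -v /srv/mysql

# Verify the resulting context.
ls -ldZ /srv/mysql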
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-managing_confined_services-mariadb-types
Chapter 41. Installation and Booting
Chapter 41. Installation and Booting Multi-threaded xz compression in rpm-build Compression can take a long time for highly parallel builds as it currently uses only one core. This is especially problematic for continuous integration of large projects that are built on hardware with many cores. This feature, which is provided as a Technology Preview, enables multi-threaded xz compression for source and binary packages when setting the %_source_payload or %_binary_payload macros to the wLTX.xzdio pattern. In it, L represents the compression level, which is 6 by default, and X is the number of threads to be used (may be multiple digits), for example w6T12.xzdio . This can be done by editing the /usr/lib/rpm/macros file or by declaring the macro within the spec file or at the command line. (BZ#1278924)
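For instance, a one-off build could enable 12-thread xz compression of the binary payload from the command line; the spec file name below is a placeholder.

# Override the macro for a single build.
rpmbuild -ba --define '_binary_payload w6T12.xzdio' mypackage.spec

# Or make the setting persistent for your user.
echo '%_binary_payload w6T12.xzdio' >> ~/.rpmmacros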
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/technology_previews_installation_and_booting
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component.
null
https://docs.redhat.com/en/documentation/red_hat_amq_core_protocol_jms/7.11/html/using_amq_core_protocol_jms/using_your_subscription
probe::nfs.proc.remove
probe::nfs.proc.remove Name probe::nfs.proc.remove - NFS client removes a file on server Synopsis nfs.proc.remove Values prot transfer protocol version NFS version (the function is used for all NFS versions) server_ip IP address of server filelen length of file name filename file name fh file handle of parent dir
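A small sketch of how this probe point might be used from the command line follows. Field types can vary between tapset versions, so treat the format string as a best guess rather than a definitive reference.

# Print a line each time the NFS client removes a file on the server.
stap -e 'probe nfs.proc.remove {
  printf("pid %d (%s) removed %s (name length %d)\n",
         pid(), execname(), filename, filelen)
}'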
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-proc-remove
Chapter 2. Deploy using dynamic storage devices
Chapter 2. Deploy using dynamic storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Red Hat Virtualization gives you the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk and requires 3 disks (PVs). However, one PV remains eventually unused by default. This is an expected behavior. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.13 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. 
Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully select a unique path name as the backend path that follows the naming convention, since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Note Use of Vault namespaces is not supported with the Kubernetes authentication method in OpenShift Data Foundation 4.11. Procedure Create a service account: where <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the previous steps to set up the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating an OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites The OpenShift Data Foundation operator must be installed from the Operator Hub. For more information, see Installing OpenShift Data Foundation Operator . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Click Next . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default.
Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times the raw storage). In the Select Nodes section, select at least three available nodes. Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click Next . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . To enable in-transit encryption, select In-transit encryption . Select a Network . Click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources .
Verify that Status of StorageCluster is Ready and has a green tick mark next to it. To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying OpenShift Data Foundation deployment . Additional resources To enable Overprovision Control alerts, refer to Alerts in the Monitoring guide.
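In addition to the console checks, the state of the storage cluster can be inspected from the CLI. This is a hedged sketch; the resource names reflect a default installation in the openshift-storage namespace.

# Check the storage cluster, the operator CSV, and the pods.
oc get storagecluster -n openshift-storage
oc get csv -n openshift-storage
oc get pods -n openshift-storage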
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n openshift-storage create serviceaccount <serviceaccount_name>", "oc -n openshift-storage create serviceaccount odf-vault-auth", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_red_hat_virtualization_platform/deploy-using-dynamic-storage-devices-rhv
Configuring and managing logical volumes
Configuring and managing logical volumes Red Hat Enterprise Linux 8 Configuring and managing LVM Red Hat Customer Content Services
[ "lsblk", "pvcreate /dev/sdb", "pvs PV VG Fmt Attr PSize PFree /dev/sdb lvm2 a-- 28.87g 13.87g", "pvs PV VG Fmt Attr PSize PFree /dev/sdb1 lvm2 --- 28.87g 28.87g", "pvremove /dev/sdb1", "vgreduce VolumeGroupName /dev/sdb1", "vgremove VolumeGroupName", "pvs", "pvs", "vgcreate VolumeGroupName PhysicalVolumeName1 PhysicalVolumeName2", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName 1 0 0 wz--n- 28.87g 28.87g", "vgs", "vgrename OldVolumeGroupName NewVolumeGroupName", "vgs VG #PV #LV #SN Attr VSize VFree NewVolumeGroupName 1 0 0 wz--n- 28.87g 28.87g", "vgs", "pvs", "vgextend VolumeGroupName PhysicalVolumeName", "pvs PV VG Fmt Attr PSize PFree /dev/sda VolumeGroupName lvm2 a-- 28.87g 28.87g /dev/sdd VolumeGroupName lvm2 a-- 1.88g 1.88g", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName1 1 0 0 wz--n- 28.87g 28.87g VolumeGroupName2 1 0 0 wz--n- 1.88g 1.88g", "vgmerge VolumeGroupName2 VolumeGroupName1", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName1 2 0 0 wz--n- 30.75g 30.75g", "pvmove /dev/vdb3 /dev/vdb3 : Moved: 2.0% /dev/vdb3 : Moved: 79.2% /dev/vdb3 : Moved: 100.0%", "pvcreate /dev/vdb4 Physical volume \" /dev/vdb4 \" successfully created", "vgextend VolumeGroupName /dev/vdb4 Volume group \" VolumeGroupName \" successfully extended", "pvmove /dev/vdb3 /dev/vdb4 /dev/vdb3 : Moved: 33.33% /dev/vdb3 : Moved: 100.00%", "vgreduce VolumeGroupName /dev/vdb3 Removed \" /dev/vdb3 \" from volume group \" VolumeGroupName \"", "pvs PV VG Fmt Attr PSize PFree Used /dev/vdb1 VolumeGroupName lvm2 a-- 1020.00m 0 1020.00m /dev/vdb2 VolumeGroupName lvm2 a-- 1020.00m 0 1020.00m /dev/vdb3 lvm2 a-- 1020.00m 1008.00m 12.00m", "vgsplit VolumeGroupName1 VolumeGroupName2 /dev/vdb3 Volume group \" VolumeGroupName2 \" successfully split from \" VolumeGroupName1 \"", "lvchange -a n /dev/VolumeGroupName1/LogicalVolumeName", "vgs VG #PV #LV #SN Attr VSize VFree VolumeGroupName1 2 1 0 wz--n- 34.30G 10.80G VolumeGroupName2 1 0 0 wz--n- 17.15G 17.15G", "pvs PV VG Fmt Attr PSize PFree Used /dev/vdb1 VolumeGroupName1 lvm2 a-- 1020.00m 0 1020.00m /dev/vdb2 VolumeGroupName1 lvm2 a-- 1020.00m 0 1020.00m /dev/vdb3 VolumeGroupName2 lvm2 a-- 1020.00m 1008.00m 12.00m", "umount /dev/mnt/ LogicalVolumeName", "vgchange -an VolumeGroupName vgchange -- volume group \"VolumeGroupName\" successfully deactivated", "vgexport VolumeGroupName vgexport -- volume group \"VolumeGroupName\" successfully exported", "pvscan PV /dev/sda1 is in exported VG VolumeGroupName [17.15 GB / 7.15 GB free] PV /dev/sdc1 is in exported VG VolumeGroupName [17.15 GB / 15.15 GB free] PV /dev/sdd1 is in exported VG VolumeGroupName [17.15 GB / 15.15 GB free]", "vgimport VolumeGroupName", "vgchange -ay VolumeGroupName", "mkdir -p /mnt/ VolumeGroupName /users mount /dev/ VolumeGroupName /users /mnt/ VolumeGroupName /users", "vgs -o vg_name,lv_count VolumeGroupName VG #LV VolumeGroupName 0", "vgremove VolumeGroupName", "vgs -o vg_name,lv_count VolumeGroupName VG #LV VolumeGroupName 0", "vgchange --lockstop VolumeGroupName", "vgremove VolumeGroupName", "vgs -o vg_name,vg_size VG VSize VolumeGroupName 30.75g", "lvcreate --name LogicalVolumeName --size VolumeSize VolumeGroupName", "lvs -o lv_name,seg_type LV Type LogicalVolumeName linear", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Create logical volume ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - sda - sdb - sdc volumes: - name: mylv size: 2G fs_type: ext4 mount_point: /mnt/data", "ansible-playbook 
--syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'lvs myvg'", "vgs -o vg_name,vg_size VG VSize VolumeGroupName 30.75g", "lvcreate --stripes NumberOfStripes --stripesize StripeSize --size LogicalVolumeSize --name LogicalVolumeName VolumeGroupName", "lvs -o lv_name,seg_type LV Type LogicalVolumeName striped", "vgs -o vg_name,vg_size VG VSize VolumeGroupName 30.75g", "lvcreate --type raid level --stripes NumberOfStripes --stripesize StripeSize --size Size --name LogicalVolumeName VolumeGroupName", "lvcreate --type raid1 --mirrors MirrorsNumber --size Size --name LogicalVolumeName VolumeGroupName", "lvcreate --type raid10 --mirrors MirrorsNumber --stripes NumberOfStripes --stripesize StripeSize --size Size --name LogicalVolumeName VolumeGroupName", "lvs -o lv_name,seg_type LV Type LogicalVolumeName raid0", "vgs -o vg_name,vg_size VG VSize VolumeGroupName 30.75g", "lvcreate --type thin-pool --size PoolSize --name ThinPoolName VolumeGroupName", "lvcreate --type thin --virtualsize MaxVolumeSize --name ThinVolumeName --thinpool ThinPoolName VolumeGroupName", "lvs -o lv_name,seg_type LV Type ThinPoolName thin-pool ThinVolumeName thin", "lvs -o lv_name,lv_size,vg_name,vg_size,vg_free LV LSize VG VSize VFree LogicalVolumeName 1.49g VolumeGroupName 30.75g 29.11g", "lvextend --size + AdditionalSize --resizefs VolumeGroupName / LogicalVolumeName", "lvs -o lv_name,lv_size LV LSize NewLogicalVolumeName 6.49g", "lvs -o lv_name,lv_size,data_percent LV LSize Data% MyThinPool 20.10g 3.21 ThinVolumeName 1.10g 4.88", "lvextend --size + AdditionalSize --resizefs VolumeGroupName / ThinVolumeName", "lvs -o lv_name,lv_size,data_percent LV LSize Data% MyThinPool 20.10g 3.21 ThinVolumeName 6.10g 0.43", "lvs -o lv_name,seg_type,data_percent,metadata_percent LV Type Data% Meta% ThinPoolName thin-pool 97.66 26.86 ThinVolumeName thin 48.80", "lvextend -L Size VolumeGroupName/ThinPoolName", "lvs -o lv_name,seg_type,data_percent,metadata_percent LV Type Data% Meta% ThinPoolName thin-pool 24.41 16.93 ThinVolumeName thin 24.41", "lvs -o lv_name,seg_type,data_percent LV Type Data% ThinPoolName thin-pool 93.87", "lvextend -L Size VolumeGroupName/ThinPoolName _tdata", "lvs -o lv_name,seg_type,data_percent LV Type Data% ThinPoolName thin-pool 40.23", "lvs -o lv_name,seg_type,metadata_percent LV Type Meta% ThinPoolName thin-pool 75.00", "lvextend -L Size VolumeGroupName/ThinPoolName _tmeta", "lvs -o lv_name,seg_type,metadata_percent LV Type Meta% ThinPoolName thin-pool 0.19", "lvs -o lv_name,vg_name,seg_monitor LV VG Monitor ThinPoolName VolumeGroupName not monitored", "lvchange --monitor y VolumeGroupName/ThinPoolName", "thin_pool_autoextend_threshold = 70 thin_pool_autoextend_percent = 20", "systemctl restart lvm2-monitor", "lvs -o lv_name,vg_name,lv_size LV VG LSize LogicalVolumeName VolumeGroupName 6.49g", "findmnt -o SOURCE,TARGET /dev/VolumeGroupName/LogicalVolumeName SOURCE TARGET /dev/mapper/VolumeGroupName-NewLogicalVolumeName /MountPoint", "umount /MountPoint", "e2fsck -f /dev/VolumeGroupName/LogicalVolumeName", "lvreduce --size TargetSize --resizefs VolumeGroupName/LogicalVolumeName", "mount -o remount /MountPoint", "df -hT /MountPoint/ Filesystem Type Size Used Avail Use% Mounted on /dev/mapper/VolumeGroupName-NewLogicalVolumeName ext4 2.9G 139K 2.7G 1% /MountPoint", "lvs -o lv_name,lv_size LV LSize NewLogicalVolumeName 4.00g", "lvs -o lv_name,vg_name LV VG LogicalVolumeName VolumeGroupName", "lvrename VolumeGroupName/LogicalVolumeName 
VolumeGroupName/NewLogicalVolumeName", "lvs -o lv_name LV NewLogicalVolumeName", "lvs -o lv_name,lv_path LV Path LogicalVolumeName /dev/VolumeGroupName/LogicalVolumeName", "findmnt -o SOURCE,TARGET /dev/VolumeGroupName/LogicalVolumeName SOURCE TARGET /dev/mapper/VolumeGroupName-LogicalVolumeName /MountPoint", "umount /MountPoint", "lvremove VolumeGroupName/LogicalVolumeName", "lvs -o lv_name,vg_name,lv_path LV VG Path LogicalVolumeName VolumeGroupName VolumeGroupName/LogicalVolumeName", "lvchange --activate y VolumeGroupName / LogicalVolumeName", "lvdisplay VolumeGroupName / LogicalVolumeName LV Status available", "lvs -o lv_name,vg_name,lv_path LV VG Path LogicalVolumeName VolumeGroupName /dev/VolumeGroupName/LogicalVolumeName", "findmnt -o SOURCE,TARGET /dev/VolumeGroupName/LogicalVolumeName SOURCE TARGET /dev/mapper/VolumeGroupName-LogicalVolumeName /MountPoint", "umount /MountPoint", "lvchange --activate n VolumeGroupName / LogicalVolumeName", "lvdisplay VolumeGroupName/LogicalVolumeName LV Status NOT available", "lvs -o vg_name,lv_name,lv_size VG LV LSize VolumeGroupName LogicalVolumeName 10.00g", "lvcreate --snapshot --size SnapshotSize --name SnapshotName VolumeGroupName / LogicalVolumeName", "lvs -o lv_name,origin LV Origin LogicalVolumeName SnapshotName LogicalVolumeName", "lvs -o vg_name,lv_name,origin,data_percent,lv_size VG LV Origin Data% LSize VolumeGroupName LogicalVolumeName 10.00g VolumeGroupName SnapshotName LogicalVolumeName 82.00 5.00g", "lvextend --size + AdditionalSize VolumeGroupName / SnapshotName", "lvs -o vg_name,lv_name,origin,data_percent,lv_size VG LV Origin Data% LSize VolumeGroupName LogicalVolumeName 10.00g VolumeGroupName SnapshotName LogicalVolumeName 68.33 6.00g", "snapshot_autoextend_threshold = 70 snapshot_autoextend_percent = 20", "systemctl restart lvm2-monitor", "lvs -o lv_name,vg_name,lv_path LV VG Path LogicalVolumeName VolumeGroupName /dev/VolumeGroupName/LogicalVolumeName SnapshotName VolumeGroupName /dev/VolumeGroupName/SnapshotName", "findmnt -o SOURCE,TARGET /dev/ VolumeGroupName/LogicalVolumeName findmnt -o SOURCE,TARGET /dev/ VolumeGroupName/SnapshotName", "umount /LogicalVolume/MountPoint umount /Snapshot/MountPoint", "lvchange --activate n VolumeGroupName / LogicalVolumeName lvchange --activate n VolumeGroupName / SnapshotName", "lvconvert --merge SnapshotName", "lvchange --activate y VolumeGroupName / LogicalVolumeName", "umount /LogicalVolume/MountPoint", "lvs -o lv_name", "lvs -o lv_name,vg_name,pool_lv,lv_size LV VG Pool LSize PoolName VolumeGroupName 152.00m ThinVolumeName VolumeGroupName PoolName 100.00m", "lvcreate --snapshot --name SnapshotName VolumeGroupName / ThinVolumeName", "lvs -o lv_name,origin LV Origin PoolName SnapshotName ThinVolumeName ThinVolumeName", "lvs -o lv_name,vg_name,lv_path LV VG Path ThinPoolName VolumeGroupName ThinSnapshotName VolumeGroupName /dev/VolumeGroupName/ThinSnapshotName ThinVolumeName VolumeGroupName /dev/VolumeGroupName/ThinVolumeName", "findmnt -o SOURCE,TARGET /dev/ VolumeGroupName/ThinVolumeName", "umount /ThinLogicalVolume/MountPoint", "lvchange --activate n VolumeGroupName / ThinLogicalVolumeName", "lvconvert --mergethin VolumeGroupName/ThinSnapshotName", "umount /ThinLogicalVolume/MountPoint", "lvs -o lv_name", "lvs -o lv_name,vg_name LV VG LogicalVolumeName VolumeGroupName", "lvcreate --type cache-pool --name CachePoolName --size Size VolumeGroupName /FastDevicePath", "lvconvert --type cache --cachepool VolumeGroupName / CachePoolName VolumeGroupName / LogicalVolumeName", "lvs -o 
lv_name,pool_lv LV Pool LogicalVolumeName [CachePoolName_cpool]", "lvs -o lv_name,vg_name LV VG LogicalVolumeName VolumeGroupName", "lvcreate --name CacheVolumeName --size Size VolumeGroupName /FastDevicePath", "lvconvert --type writecache --cachevol CacheVolumeName VolumeGroupName/LogicalVolumeName", "lvs -o lv_name,pool_lv LV Pool LogicalVolumeName [CacheVolumeName_cvol]", "lvs -o lv_name,pool_lv,vg_name LV Pool VG LogicalVolumeName [CacheVolumeName_cvol] VolumeGroupName", "lvconvert --splitcache VolumeGroupName/LogicalVolumeName", "lvconvert --uncache VolumeGroupName/LogicalVolumeName", "lvs -o lv_name,pool_lv", "vgs -o vg_name VG VolumeGroupName", "lsblk", "lvcreate --name ThinPoolDataName --size Size VolumeGroupName /DevicePath", "lvcreate --name ThinPoolMetadataName --size Size VolumeGroupName /DevicePath", "lvconvert --type thin-pool --poolmetadata ThinPoolMetadataName VolumeGroupName/ThinPoolDataName", "lvs -o lv_name,seg_type LV Type ThinPoolDataName thin-pool", "lvcreate -s rhel/root -kn -n root_snapshot_before_changes Logical volume \"root_snapshot_before_changes\" created.", "lvcreate -s rhel/root -n root_snapshot_before_changes -L 25g Logical volume \"root_snapshot_before_changes\" created.", "grub2-mkconfig > /boot/grub2/grub.cfg Generating grub configuration file Found linux image: /boot/vmlinuz-3.10.0-1160.118.1.el7.x86_64 Found initrd image: /boot/initramfs-3.10.0-1160.118.1.el7.x86_64.img Found linux image: /boot/vmlinuz-3.10.0-1160.el7.x86_64 Found initrd image: /boot/initramfs-3.10.0-1160.el7.x86_64.img Found linux image: /boot/vmlinuz-0-rescue-f9f6209866c743739757658d1a4850b2 Found initrd image: /boot/initramfs-0-rescue-f9f6209866c743739757658d1a4850b2.img done", "boom profile create --from-host --uname-pattern el7 Created profile with os_id f150f3d: OS ID: \"f150f3d6693495254255d46e20ecf5c690ec3262\", Name: \"Red Hat Enterprise Linux Server\", Short name: \"rhel\", Version: \"7.9 (Maipo)\", Version ID: \"7.9\", Kernel pattern: \"/vmlinuz-%{version}\", Initramfs pattern: \"/initramfs-%{version}.img\", Root options (LVM2): \"rd.lvm.lv=%{lvm_root_lv}\", Root options (BTRFS): \"rootflags=%{btrfs_subvolume}\", Options: \"root=%{root_device} ro %{root_opts}\", Title: \"%{os_name} %{os_version_id} (%{version})\", Optional keys: \"grub_users grub_arg grub_class id\", UTS release pattern: \"el7\"", "boom create --backup --title \"Root LV snapshot before changes\" --rootlv rhel/ root_snapshot_before_changes Created entry with boot_id bfef767: title Root LV snapshot before changes machine-id 7d70d7fcc6884be19987956d0897da31 version 3.10.0-1160.114.2.el7.x86_64 linux /vmlinuz-3.10.0-1160.114.2.el7.x86_64.boom0 initrd /initramfs-3.10.0-1160.114.2.el7.x86_64.img.boom0 options root=/dev/rhel/root_snapshot_before_changes ro rd.lvm.lv=rhel/root_snapshot_before_changes grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "leapp upgrade ==> Processing phase `configuration_phase` ====> * ipu_workflow_config IPU workflow config actor ==> Processing phase `FactsCollection` ============================================================ REPORT OVERVIEW ============================================================ Upgrade has been inhibited due to the following problems: 1. Btrfs has been removed from RHEL8 2. Missing required answers in the answer file HIGH and MEDIUM severity reports: 1. Packages available in excluded repositories will not be installed 2. GRUB core will be automatically updated during the upgrade 3. Difference in Python versions and support in RHEL 8 4. 
chrony using default configuration Reports summary: Errors: 0 Inhibitors: 2 HIGH severity reports: 3 MEDIUM severity reports: 1 LOW severity reports: 3 INFO severity reports: 4 Before continuing consult the full report: A report has been generated at /var/log/leapp/leapp-report.json A report has been generated at /var/log/leapp/leapp-report.txt ============================================================ END OF REPORT OVERVIEW ============================================================", "leapp upgrade --reboot ==> Processing phase `configuration_phase` ====> * ipu_workflow_config IPU workflow config actor ==> Processing phase `FactsCollection`", "reboot", "cat /proc/cmdline BOOT_IMAGE=(hd0,msdos1)/vmlinuz-3.10.0-1160.118.1.el7.x86_64.boom0 root=/dev/rhel/root_snapshot_before_changes ro rd.lvm.lv=rhel/root_snapshot_before_changes", "boom list WARNING - Options for BootEntry(boot_id=cae29bf) do not match OsProfile: marking read-only BootID Version Name RootDevice e0252ad 3.10.0-1160.118.1.el7.x86_64 Red Hat Enterprise Linux Server /dev/rhel/root_snapshot_before_changes 611ad14 3.10.0-1160.118.1.el7.x86_64 Red Hat Enterprise Linux Server /dev/mapper/rhel-root 3bfed71- 3.10.0-1160.el7.x86_64 Red Hat Enterprise Linux Server /dev/mapper/rhel-root _cae29bf 4.18.0-513.24.1.el8_9.x86_64 Red Hat Enterprise Linux /dev/mapper/rhel-root", "boom delete --boot-id e0252ad Deleted 1 entry", "lvremove rhel/ root_snapshot_before_changes Do you really want to remove active logical volume rhel/root_snapshot_before_changes ? [y/n]: y Logical volume \" root_snapshot_before_changes \" successfully removed", "lvconvert --merge rhel/ root_snapshot_before_changes Logical volume rhel/root_snapshot_before_changes contains a filesystem in use. Delaying merge since snapshot is open. Merging of thin snapshot rhel/root_snapshot_before_changes will occur on next activation of rhel/root.", "boom create --backup --title \"RHEL Rollback\" --rootlv rhel/root Created entry with boot_id 1e6d298 : title RHEL Rollback machine-id f9f6209866c743739757658d1a4850b2 version 3.10.0-1160.118.1.el7.x86_64 linux /vmlinuz-3.10.0-1160.118.1.el7.x86_64.boom0 initrd /initramfs-3.10.0-1160.118.1.el7.x86_64.img.boom0 options root=/dev/rhel/root ro rd.lvm.lv=rhel/root grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "reboot", "rm -f /boot/loader/entries/*.el8*", "rm -f /boot/*.el8*", "grub2-mkconfig -o /boot/grub2/grub.cfg Generating grub configuration file Found linux image: /boot/vmlinuz-3.10.0-1160.118.1.el7.x86_64.boom0 . 
done", "new-kernel-pkg --update USD(uname -r)", "boom list -o+title BootID Version Name RootDevice Title a49fb09 3.10.0-1160.118.1.el7.x86_64 Red Hat Enterprise Linux Server /dev/mapper/rhel-root Red Hat Enterprise Linux (3.10.0-1160.118.1.el7.x86_64) 8.9 (Ootpa) 1bb11e4 3.10.0-1160.el7.x86_64 Red Hat Enterprise Linux Server /dev/mapper/rhel-root Red Hat Enterprise Linux (3.10.0-1160.el7.x86_64) 8.9 (Ootpa) e0252ad 3.10.0-1160.118.1.el7.x86_64 Red Hat Enterprise Linux Server /dev/rhel/root_snapshot_before_changes Root LV snapshot before changes 1e6d298 3.10.0-1160.118.1.el7.x86_64 Red Hat Enterprise Linux Server /dev/rhel/root RHEL Rollback", "boom delete e0252ad Deleted 1 entry boom delete 1e6d298 Deleted 1 entry", "pvs PV VG Fmt Attr PSize PFree /dev/vdb1 VolumeGroupName lvm2 a-- 17.14G 17.14G /dev/vdb2 VolumeGroupName lvm2 a-- 17.14G 17.09G /dev/vdb3 VolumeGroupName lvm2 a-- 17.14G 17.14G", "pvs -o pv_name,pv_size,pv_free PV PSize PFree /dev/vdb1 17.14G 17.14G /dev/vdb2 17.14G 17.09G /dev/vdb3 17.14G 17.14G", "pvs -o pv_name,pv_size,pv_free -O pv_free PV PSize PFree /dev/vdb2 17.14G 17.09G /dev/vdb1 17.14G 17.14G /dev/vdb3 17.14G 17.14G", "pvs -o pv_name,pv_size,pv_free -O -pv_free PV PSize PFree /dev/vdb1 17.14G 17.14G /dev/vdb3 17.14G 17.14G /dev/vdb2 17.14G 17.09G", "vgs myvg VG #PV #LV #SN Attr VSize VFree myvg 1 1 0 wz-n <931.00g <930.00g", "pvs --units g /dev/vdb PV VG Fmt Attr PSize PFree /dev/vdb myvg lvm2 a-- 931.00g 930.00g", "pvs --units G /dev/vdb PV VG Fmt Attr PSize PFree /dev/vdb myvg lvm2 a-- 999.65G 998.58G", "pvs --units s PV VG Fmt Attr PSize PFree /dev/vdb myvg lvm2 a-- 1952440320S 1950343168S", "pvs --units 4m PV VG Fmt Attr PSize PFree /dev/vdb myvg lvm2 a-- 238335.00U 238079.00U", "lvs_cols=\"lv_name,vg_name,lv_attr\"", "compact_output = 1", "units = \"G\"", "report { }", "lvmconfig --typeconfig diff", "pvs -S name=~nvme PV Fmt Attr PSize PFree /dev/nvme2n1 lvm2 --- 1.00g 1.00g", "pvs -S vg_name=myvg PV VG Fmt Attr PSize PFree /dev/vdb1 myvg lvm2 a-- 1020.00m 396.00m /dev/vdb2 myvg lvm2 a-- 1020.00m 896.00m", "lvs -S 'size > 100m && size < 200m' LV VG Attr LSize Cpy%Sync rr myvg rwi-a-r--- 120.00m 100.00", "lvs -S name=~lvol[02] LV VG Attr LSize lvol0 myvg -wi-a----- 100.00m lvol2 myvg -wi------- 100.00m", "lvs -S segtype=raid1 LV VG Attr LSize Cpy%Sync rr myvg rwi-a-r--- 120.00m 100.00", "lvchange --addtag mytag -S active=1 Logical volume myvg/mylv changed. Logical volume myvg/lvol0 changed. Logical volume myvg/lvol1 changed. 
Logical volume myvg/rr changed.", "lvs -a -o lv_name,vg_name,attr,size,pool_lv,origin,role -S 'name!~_pmspare' LV VG Attr LSize Pool Origin Role thin1 example Vwi-a-tz-- 2.00g tp public,origin,thinorigin thin1s example Vwi---tz-- 2.00g tp thin1 public,snapshot,thinsnapshot thin2 example Vwi-a-tz-- 3.00g tp public tp example twi-aotz-- 1.00g private [tp_tdata] example Twi-ao---- 1.00g private,thin,pool,data [tp_tmeta] example ewi-ao---- 4.00m private,thin,pool,metadata", "lvchange --setactivationskip n -S 'role=thinsnapshot && origin=thin1' Logical volume myvg/thin1s changed.", "lvs -a -S 'name=~_tmeta && role=metadata && size <= 4m' LV VG Attr LSize [tp_tmeta] myvg ewi-ao---- 4.00m", "filter = [ \"r|^ path_to_device USD|\" ]", "system_id_source = \"uname\"", "vgchange --systemid <VM_system_id> <VM_vg_name>", "filter = [ \"r|^ path_to_device USD|\" ]", "system_id_source = \"uname\"", "vgchange --systemid <system_id> <vg_name>", "filter = [ \"a|^ path_to_device USD|\" ]", "system_id_source = \"uname\"", "filter = [\"a|^ path_to_device USD|\" ]", "use_lvmlockd=1", "--- - name: Manage local storage hosts: managed-node-01.example.com become: true tasks: - name: Create shared LVM device ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_pools: - name: vg1 disks: /dev/vdb type: lvm shared: true state: present volumes: - name: lv1 size: 4g mount_point: /opt/test1 storage_safe_mode: false storage_use_partitions: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "lvcreate --type raid1 -m 1 -L 1G -n my_lv my_vg Logical volume \" my_lv \" created.", "lvcreate --type raid5 -i 3 -L 1G -n my_lv my_vg", "lvcreate --type raid6 -i 3 -L 1G -n my_lv my_vg", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure LVM pool with RAID ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_volume size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs state: present", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'lsblk'", "lvcreate --type raid0 -L 2G --stripes 3 --stripesize 4 -n mylv my_vg Rounding size 2.00 GiB (512 extents) up to stripe boundary size 2.00 GiB(513 extents). 
Logical volume \" mylv \" created.", "mkfs.ext4 /dev/my_vg/mylv", "mount /dev/my_vg/mylv /mnt df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/my_vg-mylv 2002684 6168 1875072 1% /mnt", "lvs -a -o +devices,segtype my_vg LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Type mylv my_vg rwi-a-r--- 2.00g mylv_rimage_0(0),mylv_rimage_1(0),mylv_rimage_2(0) raid0 [mylv_rimage_0] my_vg iwi-aor--- 684.00m /dev/sdf1(0) linear [mylv_rimage_1] my_vg iwi-aor--- 684.00m /dev/sdg1(0) linear [mylv_rimage_2] my_vg iwi-aor--- 684.00m /dev/sdh1(0) linear", "--- - name: Manage local storage hosts: managed-node-01.example.com tasks: - name: Configure stripe size for RAID LVM volumes ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] volumes: - name: my_volume size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs raid_level: raid0 raid_stripe_size: \"256 KiB\" state: present", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'lvs -o+stripesize /dev/my_pool/my_volume'", "lvcreate --type raid1 --raidintegrity y -L 256M -n test-lv my_vg Creating integrity metadata LV test-lv_rimage_0_imeta with size 8.00 MiB. Logical volume \" test-lv_rimage_0_imeta \" created. Creating integrity metadata LV test-lv_rimage_1_imeta with size 8.00 MiB. Logical volume \" test-lv_rimage_1_imeta \" created. Logical volume \" test-lv \" created.", "lvconvert --raidintegrity y my_vg/test-lv", "lvconvert --raidintegrity n my_vg/test-lv Logical volume my_vg/test-lv has removed integrity.", "lvs -a my_vg LV VG Attr LSize Origin Cpy%Sync test-lv my_vg rwi-a-r--- 256.00m 2.10 [test-lv_rimage_0] my_vg gwi-aor--- 256.00m [test-lv_rimage_0_iorig] 93.75 [test-lv_rimage_0_imeta] my_vg ewi-ao---- 8.00m [test-lv_rimage_0_iorig] my_vg -wi-ao---- 256.00m [test-lv_rimage_1] my_vg gwi-aor--- 256.00m [test-lv_rimage_1_iorig] 85.94 [...]", "lvs -a my-vg -o+segtype LV VG Attr LSize Origin Cpy%Sync Type test-lv my_vg rwi-a-r--- 256.00m 87.96 raid1 [test-lv_rimage_0] my_vg gwi-aor--- 256.00m [test-lv_rimage_0_iorig] 100.00 integrity [test-lv_rimage_0_imeta] my_vg ewi-ao---- 8.00m linear [test-lv_rimage_0_iorig] my_vg -wi-ao---- 256.00m linear [test-lv_rimage_1] my_vg gwi-aor--- 256.00m [test-lv_rimage_1_iorig] 100.00 integrity [...]", "lvs -o+integritymismatches my_vg/test-lv_rimage_0 LV VG Attr LSize Origin Cpy%Sync IntegMismatches [test-lv_rimage_0] my_vg gwi-aor--- 256.00m [test-lv_rimage_0_iorig] 100.00 0", "lvcreate --type raid5 -i 3 -L 500M -n my_lv my_vg Using default stripesize 64.00 KiB. Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents). Logical volume \"my_lv\" created.", "lvs -a -o +devices,segtype LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Type my_lv my_vg rwi-a-r--- 504.00m 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0),my_lv_rimage_3(0) raid5 [my_lv_rimage_0] my_vg iwi-aor--- 168.00m /dev/sda(1) linear", "lvconvert --type raid6 my_vg/my_lv Using default stripesize 64.00 KiB. Replaced LV type raid6 (same as raid6_zr) with possible type raid6_ls_6. Repeat this command to convert to raid6 after an interim conversion has finished. Are you sure you want to convert raid5 LV my_vg/my_lv to raid6_ls_6 type? 
[y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvconvert --type raid6 my_vg/my_lv", "lvs -a -o +devices,segtype LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Type my_lv my_vg rwi-a-r--- 504.00m 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0),my_lv_rimage_3(0),my_lv_rimage_4(0) raid6 [my_lv_rimage_0] my_vg iwi-aor--- 172.00m /dev/sda(1) linear", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(0)", "lvconvert --type raid1 -m 1 my_vg/my_lv Are you sure you want to convert linear LV my_vg/my_lv to raid1 with 2 images enhancing resilience? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m0 my_vg/my_lv Are you sure you want to convert raid1 LV my_vg/my_lv to type linear losing all resilience? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvconvert -m0 my_vg/my_lv /dev/sde1", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sdf1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 15.20 my_lv_mimage_0(0),my_lv_mimage_1(0) [my_lv_mimage_0] /dev/sde1(0) [my_lv_mimage_1] /dev/sdf1(0) [my_lv_mlog] /dev/sdd1(0)", "lvconvert --type raid1 my_vg/my_lv Are you sure you want to convert mirror LV my_vg/my_lv to raid1 type? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(0) [my_lv_rmeta_0] /dev/sde1(125) [my_lv_rmeta_1] /dev/sdf1(125)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m 2 my_vg/my_lv Are you sure you want to convert raid1 LV my_vg/my_lv to 3 images enhancing resilience? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvconvert -m 2 my_vg/my_lv /dev/sdd1", "lvconvert -m1 my_vg/my_lv Are you sure you want to convert raid1 LV my_vg/my_lv to 2 images reducing resilience? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvconvert -m1 my_vg/my_lv /dev/sde1", "lvs -a -o name,copy_percent,devices my_vg LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdd1(1) [my_lv_rimage_1] /dev/sde1(1) [my_lv_rimage_2] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sdd1(0) [my_lv_rmeta_1] /dev/sde1(0) [my_lv_rmeta_2] /dev/sdf1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 12.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert --splitmirror 1 -n new my_vg/my_lv Are you sure you want to split raid1 LV my_vg/my_lv losing all resilience? 
[y/n]: y", "lvconvert --splitmirror 1 -n new my_vg/my_lv", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1) new /dev/sdf1(1)", "lvcreate --type raid1 -m 2 -L 1G -n my_lv my_vg Logical volume \" my_lv \" created", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_2 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_2' to merge back into my_lv", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0)", "lvconvert --merge my_vg/my_lv_rimage_1 my_vg/my_lv_rimage_1 successfully merged back into my_vg/my_lv", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvs --all --options name,copy_percent,devices my_vg /dev/sdb: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] [unknown](1) [my_lv_rimage_1] /dev/sdc1(1) [...]", "vi /etc/lvm/lvm.conf raid_fault_policy = \"allocate\"", "lvs -a -o name,copy_percent,devices my_vg Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy. LV Copy% Devices lv 100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0) [lv_rimage_0] /dev/sdh1(1) [lv_rimage_1] /dev/sdc1(1) [lv_rimage_2] /dev/sdd1(1) [lv_rmeta_0] /dev/sdh1(0) [lv_rmeta_1] /dev/sdc1(0) [lv_rmeta_2] /dev/sdd1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "vi /etc/lvm/lvm.conf # This configuration option has an automatic default value. raid_fault_policy = \"warn\"", "grep lvm /var/log/messages Apr 14 18:48:59 virt-506 kernel: sd 25:0:0:0: rejecting I/O to offline device Apr 14 18:48:59 virt-506 kernel: I/O error, dev sdb, sector 8200 op 0x1:(WRITE) flags 0x20800 phys_seg 0 prio class 2 [...] Apr 14 18:48:59 virt-506 dmeventd[91060]: WARNING: VG my_vg is missing PV 9R2TVV-bwfn-Bdyj-Gucu-1p4F-qJ2Q-82kCAF (last written to /dev/sdb). Apr 14 18:48:59 virt-506 dmeventd[91060]: WARNING: Couldn't find device with uuid 9R2TVV-bwfn-Bdyj-Gucu-1p4F-qJ2Q-82kCAF. 
Apr 14 18:48:59 virt-506 dmeventd[91060]: Use 'lvconvert --repair my_vg/ly_lv' to replace failed device.", "lvcreate --type raid1 -m 2 -L 1G -n my_lv my_vg Logical volume \"my_lv\" created", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdb2(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdb2(0) [my_lv_rmeta_2] /dev/sdc1(0)", "lvconvert --replace /dev/sdb2 my_vg/my_lv", "lvconvert --replace /dev/sdb1 my_vg/my_lv /dev/sdd1", "lvconvert --replace /dev/sdb1 --replace /dev/sdc1 my_vg/my_lv", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 37.50 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc2(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc2(0) [my_lv_rmeta_2] /dev/sdc1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 28.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdd1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 60.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rimage_2] /dev/sde1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdd1(0) [my_lv_rmeta_2] /dev/sde1(0)", "lvs --all --options name,copy_percent,devices my_vg LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvs --all --options name,copy_percent,devices my_vg /dev/sdc: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] [unknown](1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] [unknown](0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert --repair my_vg/my_lv /dev/sdc: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. Attempt to replace failed RAID images (requires full device resync)? [y/n]: y Faulty devices in my_vg/my_lv successfully replaced.", "lvconvert --repair my_vg/my_lv replacement_pv", "lvs --all --options name,copy_percent,devices my_vg /dev/sdc: open failed: No such device or address /dev/sdc1: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. 
LV Cpy%Sync Devices my_lv 43.79 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "vgreduce --removemissing my_vg", "pvscan PV /dev/sde1 VG rhel_virt-506 lvm2 [<7.00 GiB / 0 free] PV /dev/sdb1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free] PV /dev/sdd1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free] PV /dev/sdd1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free]", "lvs --all --options name,copy_percent,devices my_vg my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvchange --maxrecoveryrate 4K my_vg/my_lv Logical volume _my_vg/my_lv_changed.", "lvchange --syncaction repair my_vg/my_lv", "lvchange --syncaction check my_vg/my_lv", "lvchange --syncaction repair my_vg/my_lv", "lvs -o +raid_sync_action,raid_mismatch_count my_vg/my_lv LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert SyncAction Mismatches my_lv my_vg rwi-a-r--- 500.00m 100.00 idle 0", "lvcreate --type raid5 -i 2 -L 500M -n my_lv my_vg Using default stripesize 64.00 KiB. Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB (126 extents). Logical volume \"my_lv\" created.", "lvs -a -o +devices LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices my_lv my_vg rwi-a-r--- 504.00m 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] my_vg iwi-aor--- 252.00m /dev/sda(1) [my_lv_rimage_1] my_vg iwi-aor--- 252.00m /dev/sdb(1) [my_lv_rimage_2] my_vg iwi-aor--- 252.00m /dev/sdc(1) [my_lv_rmeta_0] my_vg ewi-aor--- 4.00m /dev/sda(0) [my_lv_rmeta_1] my_vg ewi-aor--- 4.00m /dev/sdb(0) [my_lv_rmeta_2] my_vg ewi-aor--- 4.00m /dev/sdc(0)", "lvs -o stripes my_vg/my_lv #Str 3", "lvs -o stripesize my_vg/my_lv Stripe 64.00k", "lvconvert --stripes 3 my_vg/my_lv Using default stripesize 64.00 KiB. WARNING: Adding stripes to active logical volume my_vg/my_lv will grow it from 126 to 189 extents! Run \"lvresize -l126 my_vg/my_lv\" to shrink it or use the additional capacity. Are you sure you want to add 1 images to raid5 LV my_vg/my_lv? [y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvconvert --stripesize 128k my_vg/my_lv Converting stripesize 64.00 KiB of raid5 LV my_vg/my_lv to 128.00 KiB. Are you sure you want to convert raid5 LV my_vg/my_lv? 
[y/n]: y Logical volume my_vg/my_lv successfully converted.", "lvchange --maxrecoveryrate 4M my_vg/my_lv Logical volume my_vg/my_lv changed.", "lvchange --minrecoveryrate 1M my_vg/my_lv Logical volume my_vg/my_lv changed.", "lvchange --syncaction check my_vg/my_lv", "lvchange --writemostly /dev/sdb my_vg/my_lv Logical volume my_vg/my_lv changed.", "lvchange --writebehind 100 my_vg/my_lv Logical volume my_vg/my_lv changed.", "lvs -o stripes my_vg/my_lv #Str 4", "lvs -o stripesize my_vg/my_lv Stripe 128.00k", "lvs -a -o +raid_max_recovery_rate LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert MaxSync my_lv my_vg rwi-a-r--- 10.00g 100.00 4096 [my_lv_rimage_0] my_vg iwi-aor--- 10.00g [...]", "lvs -a -o +raid_min_recovery_rate LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert MinSync my_lv my_vg rwi-a-r--- 10.00g 100.00 1024 [my_lv_rimage_0] my_vg iwi-aor--- 10.00g [...]", "lvs -a LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert my_lv my_vg rwi-a-r--- 10.00g 2.66 [my_lv_rimage_0] my_vg iwi-aor--- 10.00g [...]", "lvcreate --type raid1 -m 1 -L 10G test Logical volume \"lvol0\" created.", "lvs -a -o +devices,region_size LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Region lvol0 test rwi-a-r--- 10.00g 100.00 lvol0_rimage_0(0),lvol0_rimage_1(0) 2.00m [lvol0_rimage_0] test iwi-aor--- 10.00g /dev/sde1(1) 0 [lvol0_rimage_1] test iwi-aor--- 10.00g /dev/sdf1(1) 0 [lvol0_rmeta_0] test ewi-aor--- 4.00m /dev/sde1(0) 0 [lvol0_rmeta_1] test ewi-aor--- 4.00m", "cat /etc/lvm/lvm.conf | grep raid_region_size Configuration option activation/raid_region_size. # raid_region_size = 2048", "lvconvert -R 4096K my_vg/my_lv Do you really want to change the region_size 512.00 KiB of LV my_vg/my_lv to 4.00 MiB? [y/n]: y Changed region size on RAID LV my_vg/my_lv to 4.00 MiB.", "lvchange --resync my_vg/my_lv Do you really want to deactivate logical volume my_vg/my_lv to resync it? [y/n]: y", "lvs -a -o +devices,region_size LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices Region lvol0 test rwi-a-r--- 10.00g 6.25 lvol0_rimage_0(0),lvol0_rimage_1(0) 4.00m [lvol0_rimage_0] test iwi-aor--- 10.00g /dev/sde1(1) 0 [lvol0_rimage_1] test iwi-aor--- 10.00g /dev/sdf1(1) 0 [lvol0_rmeta_0] test ewi-aor--- 4.00m /dev/sde1(0) 0", "cat /etc/lvm/lvm.conf | grep raid_region_size Configuration option activation/raid_region_size. # raid_region_size = 4096", "filter = [ \"a|.*|\" ]", "filter = [ \"r|^/dev/cdromUSD|\" ]", "filter = [ \"a|loop|\", \"r|.*|\" ]", "filter = [ \"a|loop|\", \"a|/dev/sd.*|\", \"r|.*|\" ]", "filter = [ \"a|^/dev/sda8USD|\", \"r|.*|\" ]", "filter = [ \"a|/dev/disk/by-id/<disk-id>.|\", \"a|/dev/mapper/mpath.|\", \"r|.*|\" ]", "lvs --config 'devices{ filter = [ \"a|/dev/emcpower. |\", \"r| .|\" ] }'", "filter = [ \"a|/dev/emcpower.*|\", \"r|*.|\" ]", "dracut --force --verbose", "vgcreate <vg_name> <PV>", "lvcreate -n <lv_name> -L <lv_size> <vg_name> [ <PV> ... ]", "lvcreate -n lv1 -L1G vg /dev/sda", "lvcreate -n lv2 L1G vg /dev/sda /dev/sdb", "lvcreate -n lv3 -L1G vg", "lvcreate --type <segment_type> -m <mirror_images> -n <lv_name> -L <lv_size> <vg_name> [ <PV> ... 
]", "lvcreate --type raid1 -m 1 -n lv4 -L1G vg /dev/sda /dev/sdb", "lvcreate --type raid1 -m 2 -n lv5 -L1G vg /dev/sda /dev/sdb /dev/sdc", "pvchange -x n /dev/sdk1", "lvs @database", "lvm tags", "pvchange --addtag <@tag> <PV>", "vgchange --addtag <@tag> <VG>", "vgcreate --addtag <@tag> <VG>", "lvchange --addtag <@tag> <LV>", "lvcreate --addtag <@tag>", "pvchange --deltag @tag PV", "vgchange --deltag @tag VG", "lvchange --deltag @tag LV", "tags { tag1 { } tag2 { host_list = [\"host1\"] } }", "activation { volume_list = [\"vg1/lvol0\", \"@database\" ] }", "tags { hosttags = 1 }", "lvmdump", "lvs -v", "pvs --all", "dmsetup info --columns", "lvmconfig", "vgs --options +devices /dev/vdb1: open failed: No such device or address /dev/vdb1: open failed: No such device or address WARNING: Couldn't find device with uuid 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s. WARNING: VG myvg is missing PV 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s (last written to /dev/sdb1). WARNING: Couldn't find all devices for LV myvg/mylv while checking used and assumed devices. VG #PV #LV #SN Attr VSize VFree Devices myvg 2 2 0 wz-pn- <3.64t <3.60t [unknown](0) myvg 2 2 0 wz-pn- <3.64t <3.60t [unknown](5120),/dev/vdb1(0)", "lvs --all --options +devices /dev/vdb1: open failed: No such device or address /dev/vdb1: open failed: No such device or address WARNING: Couldn't find device with uuid 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s. WARNING: VG myvg is missing PV 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s (last written to /dev/sdb1). WARNING: Couldn't find all devices for LV myvg/mylv while checking used and assumed devices. LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices mylv myvg -wi-a---p- 20.00g [unknown](0) [unknown](5120),/dev/sdc1(0)", "pvs Error reading device /dev/sdc1 at 0 length 4. Error reading device /dev/sdc1 at 4096 length 4. Couldn't find device with uuid b2J8oD-vdjw-tGCA-ema3-iXob-Jc6M-TC07Rn. WARNING: Couldn't find all devices for LV myvg/my_raid1_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV myvg/my_raid1_rmeta_1 while checking used and assumed devices. PV VG Fmt Attr PSize PFree /dev/sda2 rhel_bp-01 lvm2 a-- <464.76g 4.00m /dev/sdb1 myvg lvm2 a-- <836.69g 736.68g /dev/sdd1 myvg lvm2 a-- <836.69g <836.69g /dev/sde1 myvg lvm2 a-- <836.69g <836.69g [unknown] myvg lvm2 a-m <836.69g 736.68g", "lvs -a --options name,vgname,attr,size,devices myvg Couldn't find device with uuid b2J8oD-vdjw-tGCA-ema3-iXob-Jc6M-TC07Rn. WARNING: Couldn't find all devices for LV myvg/my_raid1_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV myvg/my_raid1_rmeta_1 while checking used and assumed devices. LV VG Attr LSize Devices my_raid1 myvg rwi-a-r-p- 100.00g my_raid1_rimage_0(0),my_raid1_rimage_1(0) [my_raid1_rimage_0] myvg iwi-aor--- 100.00g /dev/sdb1(1) [my_raid1_rimage_1] myvg Iwi-aor-p- 100.00g [unknown](1) [my_raid1_rmeta_0] myvg ewi-aor--- 4.00m /dev/sdb1(0) [my_raid1_rmeta_1] myvg ewi-aor-p- 4.00m [unknown](0)", "vgchange --activate y --partial myvg", "vgreduce --removemissing --test myvg", "vgreduce --removemissing --force myvg", "vgcfgrestore myvg", "cat /etc/lvm/archive/ myvg_00000-1248998876 .vg", "lvs --all --options +devices Couldn't find device with uuid ' FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk '.", "vgchange --activate n --partial myvg PARTIAL MODE. Incomplete logical volumes will be processed. WARNING: Couldn't find device with uuid 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s . 
WARNING: VG myvg is missing PV 42B7bu-YCMp-CEVD-CmKH-2rk6-fiO9-z1lf4s (last written to /dev/vdb1 ). 0 logical volume(s) in volume group \" myvg \" now active", "pvcreate --uuid physical-volume-uuid \\ --restorefile /etc/lvm/archive/ volume-group-name_backup-number .vg \\ block-device", "pvcreate --uuid \"FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk\" \\ --restorefile /etc/lvm/archive/VG_00050.vg \\ /dev/vdb1 Physical volume \"/dev/vdb1\" successfully created", "vgcfgrestore myvg Restored volume group myvg", "lvs --all --options +devices myvg", "LV VG Attr LSize Origin Snap% Move Log Copy% Devices mylv myvg -wi--- 300.00G /dev/vdb1 (0),/dev/vdb1(0) mylv myvg -wi--- 300.00G /dev/vdb1 (34728),/dev/vdb1(0)", "lvchange --resync myvg/mylv", "lvchange --activate y myvg/mylv", "lvs --all --options +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices mylv myvg -wi--- 300.00G /dev/vdb1 (0),/dev/vdb1(0) mylv myvg -wi--- 300.00G /dev/vdb1 (34728),/dev/vdb1(0)", "Insufficient free extents", "vgdisplay myvg", "--- Volume group --- VG Name myvg System ID Format lvm2 Metadata Areas 4 Metadata Sequence No 6 VG Access read/write [...] Free PE / Size 8780 / 34.30 GB", "lvcreate --extents 8780 --name mylv myvg", "lvcreate --extents 100%FREE --name mylv myvg", "vgs --options +vg_free_count,vg_extent_count VG #PV #LV #SN Attr VSize VFree Free #Ext myvg 2 1 0 wz--n- 34.30G 0 0 8780", "pvck --dump metadata <disk>", "pvck --dump metadata /dev/sdb metadata text at 172032 crc Oxc627522f # vgname test segno 59 --- <raw metadata from disk> ---", "pvck --dump metadata_all <disk>", "pvck --dump metadata_all /dev/sdb metadata at 4608 length 815 crc 29fcd7ab vg test seqno 1 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 5632 length 1144 crc 50ea61c3 vg test seqno 2 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 7168 length 1450 crc 5652ea55 vg test seqno 3 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv", "pvck --dump metadata_search <disk>", "pvck --dump metadata_search /dev/sdb Searching for metadata at offset 4096 size 1044480 metadata at 4608 length 815 crc 29fcd7ab vg test seqno 1 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 5632 length 1144 crc 50ea61c3 vg test seqno 2 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv metadata at 7168 length 1450 crc 5652ea55 vg test seqno 3 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv", "pvck --dump metadata -v <disk>", "pvck --dump metadata -v /dev/sdb metadata text at 199680 crc 0x628cf243 # vgname my_vg seqno 40 --- my_vg { id = \"dmEbPi-gsgx-VbvS-Uaia-HczM-iu32-Rb7iOf\" seqno = 40 format = \"lvm2\" status = [\"RESIZEABLE\", \"READ\", \"WRITE\"] flags = [] extent_size = 8192 max_lv = 0 max_pv = 0 metadata_copies = 0 physical_volumes { pv0 { id = \"8gn0is-Hj8p-njgs-NM19-wuL9-mcB3-kUDiOQ\" device = \"/dev/sda\" device_id_type = \"sys_wwid\" device_id = \"naa.6001405e635dbaab125476d88030a196\" status = [\"ALLOCATABLE\"] flags = [] dev_size = 125829120 pe_start = 8192 pe_count = 15359 } pv1 { id = \"E9qChJ-5ElL-HVEp-rc7d-U5Fg-fHxL-2QLyID\" device = \"/dev/sdb\" device_id_type = \"sys_wwid\" device_id = \"naa.6001405f3f9396fddcd4012a50029a90\" status = [\"ALLOCATABLE\"] flags = [] dev_size = 125829120 pe_start = 8192 pe_count = 15359 }", "pvck --dump metadata_search --settings metadata_offset=5632 -f meta.txt /dev/sdb Searching for metadata at offset 4096 size 1044480 metadata at 5632 length 1144 crc 50ea61c3 vg test seqno 2 id FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv head -2 meta.txt test { id = \"FaCsSz-1ZZn-mTO4-Xl4i-zb6G-BYat-u53Fxv\"", "pvcreate --restorefile 
<metadata-file> --uuid <UUID> <disk>", "pvck --dump headers <disk>", "vgcfgrestore --file <metadata-file> <vg-name>", "pvck --dump metadata <disk>", "vgs", "pvck --repair -f <metadata-file> <disk>", "vgs <vgname>", "pvs <pvname>", "lvs <lvname>", "lvchange --maxrecoveryrate 4K my_vg/my_lv Logical volume _my_vg/my_lv_changed.", "lvchange --syncaction repair my_vg/my_lv", "lvchange --syncaction check my_vg/my_lv", "lvchange --syncaction repair my_vg/my_lv", "lvs -o +raid_sync_action,raid_mismatch_count my_vg/my_lv LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert SyncAction Mismatches my_lv my_vg rwi-a-r--- 500.00m 100.00 idle 0", "lvs --all --options name,copy_percent,devices my_vg LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvs --all --options name,copy_percent,devices my_vg /dev/sdc: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. LV Cpy%Sync Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] [unknown](1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] [unknown](0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert --repair my_vg/my_lv /dev/sdc: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. WARNING: Couldn't find all devices for LV my_vg/my_lv_rimage_1 while checking used and assumed devices. WARNING: Couldn't find all devices for LV my_vg/my_lv_rmeta_1 while checking used and assumed devices. Attempt to replace failed RAID images (requires full device resync)? [y/n]: y Faulty devices in my_vg/my_lv successfully replaced.", "lvconvert --repair my_vg/my_lv replacement_pv", "lvs --all --options name,copy_percent,devices my_vg /dev/sdc: open failed: No such device or address /dev/sdc1: open failed: No such device or address Couldn't find device with uuid A4kRl2-vIzA-uyCb-cci7-bOod-H5tX-IzH4Ee. 
LV Cpy%Sync Devices my_lv 43.79 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "vgreduce --removemissing my_vg", "pvscan PV /dev/sde1 VG rhel_virt-506 lvm2 [<7.00 GiB / 0 free] PV /dev/sdb1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free] PV /dev/sdd1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free] PV /dev/sdd1 VG my_vg lvm2 [<60.00 GiB / 59.50 GiB free]", "lvs --all --options name,copy_percent,devices my_vg my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/dm-5 not /dev/sdd Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowerb not /dev/sde Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sddlmab not /dev/sdf", "Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/sdd not /dev/sdf", "Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/mapper/mpatha not /dev/mapper/mpathc Found duplicate PV GDjTZf7Y03GJHjteqOwrye2dcSCjdaUi: using /dev/emcpowera not /dev/emcpowerh", "filter = [ \"a|/dev/sda2USD|\", \"a|/dev/mapper/mpath.*|\", \"r|.*|\" ]", "filter = [ \"a|/dev/cciss/.*|\", \"a|/dev/emcpower.*|\", \"r|.*|\" ]", "filter = [ \"a|/dev/hda.*|\", \"a|/dev/mapper/mpath.*|\", \"r|.*|\" ]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/configuring_and_managing_logical_volumes/index
5.10. Determining Device Mapper Entries with the dmsetup Command
5.10. Determining Device Mapper Entries with the dmsetup Command You can use the dmsetup command to find out which device mapper entries match the multipathed devices. The following command displays all the device mapper devices and their major and minor numbers. The minor numbers determine the name of the dm device. For example, a minor number of 3 corresponds to the multipathed device /dev/dm-3 .
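If you want to narrow the output to a single device rather than listing everything, dmsetup can also print selected columns for one name; the following is a minimal sketch, and mpatha is only an illustrative device name, not one taken from the listing above:
dmsetup info -c -o name,major,minor mpatha
Alternatively, ls -l /dev/mapper/mpatha shows the symbolic link that points to the corresponding /dev/dm-<minor> node.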
[ "dmsetup ls mpathd (253:4) mpathep1 (253:12) mpathfp1 (253:11) mpathb (253:3) mpathgp1 (253:14) mpathhp1 (253:13) mpatha (253:2) mpathh (253:9) mpathg (253:8) VolGroup00-LogVol01 (253:1) mpathf (253:7) VolGroup00-LogVol00 (253:0) mpathe (253:6) mpathbp1 (253:10) mpathd (253:5)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/dmsetup_queries
Chapter 1. Overview of authentication and authorization
Chapter 1. Overview of authentication and authorization 1.1. Glossary of common terms for OpenShift Container Platform authentication and authorization This glossary defines common terms that are used in OpenShift Container Platform authentication and authorization. authentication Authentication determines access to an OpenShift Container Platform cluster and ensures that only authenticated users access the OpenShift Container Platform cluster. authorization Authorization determines whether the identified user has permissions to perform the requested action. bearer token A bearer token is used to authenticate to the API with the header Authorization: Bearer <token> . Cloud Credential Operator The Cloud Credential Operator (CCO) manages cloud provider credentials as custom resource definitions (CRDs). config map A config map provides a way to inject configuration data into the pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. containers Lightweight and executable images that consist of software and all its dependencies. Because containers virtualize the operating system, you can run containers in a data center, public or private cloud, or your local host. Custom Resource (CR) A CR is an extension of the Kubernetes API. group A group is a set of users. A group is useful for granting permissions to multiple users at one time. HTPasswd HTPasswd updates the files that store usernames and passwords for authentication of HTTP users. Keystone Keystone is a Red Hat OpenStack Platform (RHOSP) project that provides identity, token, catalog, and policy services. Lightweight directory access protocol (LDAP) LDAP is a protocol that queries user information. manual mode In manual mode, a user manages cloud credentials instead of the Cloud Credential Operator (CCO). mint mode Mint mode is the default and recommended best practice setting for the Cloud Credential Operator (CCO) to use on the platforms for which it is supported. In this mode, the CCO uses the provided administrator-level cloud credential to create new credentials for components in the cluster with only the specific permissions that are required. namespace A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. node A node is a worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OAuth client An OAuth client is used to get a bearer token. OAuth server The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. OpenID Connect OpenID Connect is a protocol that authenticates users to use single sign-on (SSO) to access sites that use OpenID Providers. passthrough mode In passthrough mode, the Cloud Credential Operator (CCO) passes the provided cloud credential to the components that request cloud credentials. pod A pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers that run on a worker node. regular users Users that are created automatically in the cluster upon first login or via the API. request header A request header is an HTTP header that is used to provide information about HTTP request context, so that the server can track the response of the request.
role-based access control (RBAC) A key security control to ensure that cluster users and workloads have access to only the resources required to execute their roles. service accounts Service accounts are used by the cluster components or applications. system users Users that are created automatically when the cluster is installed. users A user is an entity that can make requests to the API. 1.2. About authentication in OpenShift Container Platform To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, users must first authenticate to the OpenShift Container Platform API in some way. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API, as shown in the example at the end of this overview. Note If you do not present a valid access token or certificate, your request is unauthenticated and you receive an HTTP 401 error. An administrator can configure authentication through the following tasks: Configuring an identity provider: You can define any supported identity provider in OpenShift Container Platform and add it to your cluster. Configuring the internal OAuth server : The OpenShift Container Platform control plane includes a built-in OAuth server that determines the user's identity from the configured identity provider and creates an access token. You can configure the token duration and inactivity timeout, and customize the internal OAuth server URL. Note Users can view and manage OAuth tokens owned by them . Registering an OAuth client: OpenShift Container Platform includes several default OAuth clients . You can register and configure additional OAuth clients . Note When users send a request for an OAuth token, they must specify either a default or custom OAuth client that receives and uses the token. Managing cloud provider credentials using the Cloud Credential Operator : Cluster components use cloud provider credentials to get permissions required to perform cluster-related tasks. Impersonating a system admin user: You can grant cluster administrator permissions to a user by impersonating a system admin user . 1.3. About authorization in OpenShift Container Platform Authorization involves determining whether the identified user has permissions to perform the requested action. Administrators can define permissions and assign them to users using RBAC objects, such as rules, roles, and bindings . To understand how authorization works in OpenShift Container Platform, see Evaluating authorization . You can also control access to an OpenShift Container Platform cluster through projects and namespaces . Along with controlling user access to a cluster, you can also control the actions a pod can perform and the resources it can access using security context constraints (SCCs) . You can manage authorization for OpenShift Container Platform through the following tasks: Viewing local and cluster roles and bindings. Creating a local role and assigning it to a user or group. Creating a cluster role and assigning it to a user or group: OpenShift Container Platform includes a set of default cluster roles . You can create additional cluster roles and add them to a user or group . Creating a cluster-admin user: By default, your cluster has only one cluster administrator called kubeadmin . You can create another cluster administrator . 
Before creating a cluster administrator, ensure that you have configured an identity provider. Note After creating the cluster admin user, delete the existing kubeadmin user to improve cluster security. Creating service accounts: Service accounts provide a flexible way to control API access without sharing a regular user's credentials. A user can create and use a service account in applications and also as an OAuth client . Scoping tokens : A scoped token is a token that identifies a specific user who can perform only specific operations. You can create scoped tokens to delegate some of your permissions to another user or a service account. Syncing LDAP groups: You can manage user groups in one place by syncing the groups stored in an LDAP server with the OpenShift Container Platform user groups.
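The following is a minimal, illustrative sketch of the token-based authentication and RBAC tasks described in this overview. It assumes you are already logged in with the OpenShift CLI ( oc ); the API server address <api_server> and the user name user1 are placeholders, not values taken from this document.

# Obtain the bearer token of the currently logged-in user
TOKEN=$(oc whoami -t)

# Authenticate a direct API request with the Authorization: Bearer header
# (add --cacert or -k depending on how your cluster certificates are trusted)
curl -H "Authorization: Bearer $TOKEN" "https://<api_server>:6443/apis/user.openshift.io/v1/users/~"

# Authorization example: bind the cluster-admin cluster role to a user
oc adm policy add-cluster-role-to-user cluster-admin user1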
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/authentication_and_authorization/overview-of-authentication-authorization
probe::socket.receive
probe::socket.receive Name probe::socket.receive - Message received on a socket. Synopsis Values success Was the receive successful? (1 = yes, 0 = no) protocol Protocol value flags Socket flags value name Name of this probe state Socket state value size Size of message received (in bytes) or error code if success = 0 type Socket type value family Protocol family value Context The message receiver
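A minimal SystemTap script that uses this probe might look like the following sketch. It simply prints the documented probe variables for every received message; the output format is illustrative only.

#!/usr/bin/stap
# Print the probe variables each time a message is received on a socket
probe socket.receive {
  printf("%s: family=%d type=%d protocol=%d size=%d success=%d\n",
         name, family, type, protocol, size, success)
}

Run it with stap, for example: stap socket-receive.stp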
[ "socket.receive" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-socket-receive
Chapter 5. Capability Trimming in JBoss EAP for OpenShift
Chapter 5. Capability Trimming in JBoss EAP for OpenShift When building an image that includes JBoss EAP, you can control the JBoss EAP features and subsystems to include in the image. The default JBoss EAP server included in S2I images includes the complete server and all features. You might want to trim the capabilities included in the provisioned server. For example, you might want to reduce the security exposure of the provisioned server, or you might want to reduce the memory footprint so it is more appropriate for a microservice container. 5.1. Provision a custom JBoss EAP server To provision a custom server with trimmed capabilities, pass the GALLEON_PROVISION_LAYERS environment variable during the S2I build phase. The value of the environment variable is a comma-separated list of the layers to provision to build the server. For example, if you specify the environment variable as GALLEON_PROVISION_LAYERS=jaxrs-server,sso , a JBoss EAP server is provisioned with the following capabilities: A servlet container The ability to configure a datasource The jaxrs , weld , and jpa subsystems Red Hat SSO integration 5.2. Available JBoss EAP Layers Red Hat makes available six layers to customize provisioning of the JBoss EAP server in OpenShift. Three layers are base layers that provide core functionality. Three are decorator layers that enhance the base layers. The following Jakarta EE specifications are not supported in any provisioning layer: Jakarta Server Faces 2.3 Jakarta Enterprise Beans 3.2 Jakarta XML Web Services 2.3 5.2.1. Base Layers Each base layer includes core functionality for a typical server use case. datasources-web-server This layer includes a servlet container and the ability to configure a datasource. The following are the JBoss EAP subsystems included by default in the datasources-web-server : core-management datasources deployment-scanner ee elytron io jca jmx logging naming request-controller security-manager transactions undertow The following Jakarta EE specifications are supported in this layer: Jakarta JSON Processing 1.1 Jakarta JSON Binding 1.0 Jakarta Servlet 4.0 Jakarta Expression Language 3.0 Jakarta Server Pages 2.3 Jakarta Standard Tag Library 1.2 Jakarta Concurrency 1.1 Jakarta Annotations 1.3 Jakarta XML Binding 2.3 Jakarta Debugging Support for Other Languages 1.0 Jakarta Transactions 1.3 Jakarta Connectors 1.7 jaxrs-server This layer enhances the datasources-web-server layer with the following JBoss EAP subsystems: jaxrs weld jpa This layer also adds Infinispan-based second-level entity caching locally in the container. The following Jakarta EE specifications are supported in this layer in addition to those supported in the datasources-web-server layer: Jakarta Contexts and Dependency Injection 2.0 Jakarta Bean Validation 2.0 Jakarta Interceptors 1.2 Jakarta RESTful Web Services 2.1 Jakarta Persistence 2.2 cloud-server This layer enhances the jaxrs-server layer with the following JBoss EAP subsystems: resource-adapters messaging-activemq (remote broker messaging, not embedded messaging) This layer also adds the following observability features to the jaxrs-server layer: Health subsystem Metrics subsystem The following Jakarta EE specification is supported in this layer in addition to those supported in the jaxrs-server layer: Jakarta Security 1.0 5.2.2. Decorator Layers Decorator layers are not used alone. You can configure one or more decorator layers with a base layer to deliver additional functionality. 
sso This decorator layer adds Red Hat Single Sign-On integration to the provisioned server. observability This decorator layer adds the following observability features to the provisioned server: Health subsystem Metrics subsystem Note This layer is built in to the cloud-server layer. You do not need to add this layer to the cloud-server layer. web-clustering This layer adds embedded Infinispan-based web session clustering to the provisioned server. 5.3. Provisioning User-developed Layers in JBoss EAP In addition to provisioning layers available from Red Hat, you can provision custom layers you develop. Procedure Build a custom layer using the Galleon Maven plugin. For more information, see Preparing the Maven project . Deploy the custom layer to an accessible Maven repository. You can use custom Galleon feature-pack environment variables to customize Galleon feature-packs and layers during the S2I image build process. For more information about customizing Galleon feature-packs and layers, see Using the custom Galleon feature-pack during S2I build . Optional: Create a custom provisioning file to reference the user-defined layer and supported JBoss EAP layers and store it in your application directory. For more information about creating a custom provisioning file, see Custom provisioning files for JBoss EAP . Run the S2I process to provision a JBoss EAP server in OpenShift. For more information, see Using the custom Galleon feature-pack during S2I build . 5.3.1. Building and using custom Galleon layers for JBoss EAP Custom Galleon layers are packaged inside a Galleon feature-pack that is designed to run with JBoss EAP 7.4. A layer contains the content that is installed in the server. A layer can update the server XML configuration file and add content to the server installation. This section documents how to build and use in OpenShift a Galleon feature-pack that contains layers to provision a MariaDB driver and data source for the JBoss EAP 7.4 server. 5.3.1.1. Preparing the Maven project Galleon feature-packs are created using Maven. This procedure includes the steps to create a new Maven project. Procedure To create a new Maven project, run the following command: In the directory mariadb-galleon-pack , update the pom.xml file to include the Red Hat Maven repository: Update the pom.xml file to add dependencies on the EAP Galleon feature-pack and the MariaDB driver: Update the pom.xml file to include the Maven plugin that is used to build the Galleon feature-pack: 5.3.1.2. Adding the feature pack content This procedure helps you add layers to a custom Galleon feature-pack, for example, the feature-pack including the MariaDB driver and datasource layers. Prerequisites You have created a Maven project. For more details, see Preparing the Maven project . Procedure Create the directory, src/main/resources , within a custom feature-pack Maven project, for example, see Preparing the Maven project . This directory is the root directory containing the feature-pack content. Create the directory src/main/resources/modules/org/mariadb/jdbc/main . 
In the main directory, create a file named module.xml with the following content: <?xml version="1.0" encoding="UTF-8"?> <module name="org.mariadb.jdbc" xmlns="urn:jboss:module:1.8"> <resources> <artifact name="USD{org.mariadb.jdbc:mariadb-java-client}"/> 1 </resources> <dependencies> 2 <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> 1 The MariaDB driver groupId and artifactId . At provisioning time, the actual driver jar file gets installed. The version of the driver is referenced from the pom.xml file. 2 The JBoss Modules modules dependencies for the MariaDB driver. Create the directory src/main/resources/layers/standalone/ . This is the root directory of all the layers that the Galleon feature-pack is defining. Create the directory src/main/resources/layers/standalone/mariadb-driver . In the mariadb-driver directory, create the layer-spec.xml file with the following content: <?xml version="1.0" ?> <layer-spec xmlns="urn:jboss:galleon:layer-spec:1.0" name="mariadb-driver"> <feature spec="subsystem.datasources"> 1 <feature spec="subsystem.datasources.jdbc-driver"> <param name="driver-name" value="mariadb"/> <param name="jdbc-driver" value="mariadb"/> <param name="driver-xa-datasource-class-name" value="org.mariadb.jdbc.MariaDbDataSource"/> <param name="driver-module-name" value="org.mariadb.jdbc"/> </feature> </feature> <packages> 2 <package name="org.mariadb.jdbc"/> </packages> </layer-spec> 1 Update the datasources subsytem configuration with a JDBC-driver named MariaDB, implemented by the module org.mariadb.jdbc . 2 The JBoss Modules module containing the driver classes that are installed when the layer is provisioned. The mariadb-driver layer updates the datasources subsystem with the configuration of a JDBC driver, implemented by the JBoss Modules module. Create the directory src/main/resources/layers/standalone/mariadb-datasource . In the mariadb-datasource directory, create the layer-spec.xml file with the following content: <?xml version="1.0" ?> <layer-spec xmlns="urn:jboss:galleon:layer-spec:1.0" name="mariadb-datasource"> <dependencies> <layer name="mariadb-driver"/> 1 </dependencies> <feature spec="subsystem.datasources.data-source"> 2 <param name="data-source" value="MariaDBDS"/> <param name="jndi-name" value="java:jboss/datasources/USD{env.MARIADB_DATASOURCE:MariaDBDS}"/> <param name="connection-url" value="jdbc:mariadb://USD{env.MARIADB_HOST:localhost}:USD{env.MARIADB_PORT:3306}/USD{env.MARIADB_DATABASE}"/> 3 <param name="driver-name" value="mariadb"/> <param name="user-name" value="USD{env.MARIADB_USER}"/> 4 <param name="password" value="USD{env.MARIADB_PASSWORD}"/> </feature> </layer-spec> 1 This dependency enforces the provisioning of the MariaDB driver when the datasource is provisioned. All the layers a layer depends on are automatically provisioned when that layer is provisioned. 2 Update the datasources subsystem configuration with a datasource named MariaDBDS. 3 Datasource's name, host, port, and database values are resolved from the environment variables MARIADB_DATASOURCE , MARIADB_HOST , MARIADB_PORT , and MARIADB_DATABASE , which are set when the server is started. 4 User name and password values are resolved from the environment variables MARIADB_USER and MARIADB_PASSWORD . Build the Galleon feature-pack by running the following command: The file target/mariadb-galleon-pack-1.0-SNAPSHOT.zip is created. 5.3.1.3. 
Using the custom Galleon feature-pack during S2I build A custom feature-pack must be made available to the Maven build that occurs during OpenShift S2I build. This is usually achieved by deploying the custom feature-pack as an artifact, for example, org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT to an accessible Maven repository. In order to test the feature-pack before deployment, you can use the EAP S2I builder image capability that allows you to make use of a locally built Galleon feature-pack. Use the following procedure example to customize the todo-backend EAP quickstart with the use of MariaDB driver instead of PostgreSQL driver. Note For more information about the todo-backend EAP quickstart, see EAP quickstart . For more information about configuring the JBoss EAP S2I image for custom Galleon feature-pack usage, see Configure Galleon by using advanced environment variables . Prerequisites You have OpenShift command-line installed You are logged in to an OpenShift cluster You have installed the JBoss EAP OpenShift images in your cluster You have configured access to the Red Hat Container registry. For detailed information, see Red Hat Container Registry . You have created a custom Galleon feature-pack. For detailed information, see Preparing the Maven project . Procedure Start the MariaDB database by running the following command: The OpenShift service mariadb-101-rhel7 is created and started. Create a secret from the feature-pack ZIP archive, generated by the custom feature-pack Maven build, by running the following command within the Maven project directory mariadb-galleon-pack : The secret mariadb-galleon-pack is created. When initiating the S2I build, this secret is used to mount the feature-pack zip file in the pod, making the file available during the server provisioning phase. To create a new OpenShift build to build an application image containing the todo-backend quickstart deployment running inside a server trimmed with Galleon, run the following command: 1 The custom feature-pack environment variable that contains a comma separated list of feature-pack Maven coordinates, such as groupId:artifactId:version . 2 The set of Galleon layers that are used to provision the server. jaxrs-server is a base server layer and mariadb-datasource is the custom layer that brings the MariaDB driver and a new datasource to the server installation. 3 The location of the local Maven repository within the image that contains the MariaDB feature-pack. This repository is populated when mounting the secret inside the image. 4 The mariadb-galleon-pack secret is mounted in the /tmp/repo/org/example/mariadb/mariadb-galleon-pack/1.0-SNAPSHOT directory. To start a new build from the created OpenShift build, run the following command: After successful command execution, the image todos-app-build is created. To create a new deployment, provide the environment variables that are required to bind the datasource to the running MariaDB database by executing the following command: 1 The quickstart expects the datasource to be named ToDos Note For more details about the custom Galleon feature-pack environment variables, see Custom Galleon feature-pack environment variables To expose the todos-app application, run the following command: To create a new task, run the following command: To access the list of tasks, run the following command: The added task is displayed in a browser. 5.3.1.4. 
Custom Provisioning Files for JBoss EAP Custom provisioning files are XML files with the file name provisioning.xml that are stored in the galleon subdirectory. Using the provisioning.xml file is an alternative to the usage of GALLEON_PROVISION_FEATURE_PACKS and GALLEON_PROVISION_LAYERS environment variables. During S2I build, the provisioning.xml file is used to provision the custom EAP server. Important Do not create a custom provisioning file when using the GALLEON_PROVISION_LAYERS environment variable, because this environment variable configures the S2I build process to ignore the file. The following code illustrates a custom provisioning file. <?xml version="1.0" ?> <installation xmlns="urn:jboss:galleon:provisioning:3.0"> <feature-pack location="eap-s2i@maven(org.jboss.universe:s2i-universe)"> 1 <default-configs inherit="false"/> 2 <packages inherit="false"/> 3 </feature-pack> <feature-pack location="org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT"> 4 <default-configs inherit="false"/> <packages inherit="false"/> </feature-pack> <config model="standalone" name="standalone.xml"> 5 <layers> <include name="jaxrs-server"/> <include name="mariadb-datasource"/> </layers> </config> <options> 6 <option name="optional-packages" value="passive+"/> </options> </installation> 1 This element instructs the provisioning process to provision the current eap-s2i feature-pack. Note that a builder image includes only one feature pack. 2 This element instructs the provisioning process to exclude default configurations. 3 This element instructs the provisioning process to exclude default packages. 4 This element instructs the provisioning process to provision the org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT feature pack. The child elements instruct the process to exclude default configurations and default packages. 5 This element instructs the provisioning process to create a custom standalone configuration. The configuration includes the jaxrs-server base layer and the mariadb-datasource custom layer from the org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT feature pack. 6 This element instructs the provisioning process to optimize provisioning of JBoss EAP modules. Additional resources For more information about using the GALLEON_PROVISION_LAYERS environment variable, see Provision a Custom JBoss EAP server . 5.3.2. Configure Galleon by using advanced environment variables You can use advanced custom Galleon feature pack environment variables to customize the location where you store your custom Galleon feature packs and layers during the S2I image build process. These advanced custom Galleon feature pack environment variables are as follows: GALLEON_DIR=<path> , which overrides the default <project_root_dir>/galleon directory path to <project_root_dir>/<GALLEON_DIR> . GALLEON_CUSTOM_FEATURE_PACKS_MAVEN_REPO=<path> , which overrides the <project root dir>/galleon/repository directory path with an absolute path to a Maven local repository cache directory. This repository contains custom Galleon feature packs. You must locate the Galleon feature pack archive files inside a sub-directory that is compliant with the Maven local-cache file system configuration. For example, locate the org.examples:my-feature-pack:1.0.0.Final feature pack inside the path-to-repository/org/examples/my-feature-pack/1.0.0.Final/my-feature-pack-1.0.0.Final.zip path. You can configure your Maven project settings by creating a settings.xml file in the <project_root>/<GALLEON_DIR> directory. 
The default value for GALLEON_DIR is <project_root_dir>/galleon . Maven uses the file to provision your custom Galleon feature packs for your application. If you do not create a settings.xml file, Maven uses a default settings.xml file that was created by the S2I image. Important Do not specify a local Maven repository location in a settings.xml file, because the S2I builder image specifies a location to your local Maven repository. The S2I builder image uses this location during the S2I build process. Additional resources For more information about custom Galleon feature pack environment variables, see custom Galleon feature pack environment variables . 5.3.3. Custom Galleon feature pack environment variables You can use any of the following custom Galleon feature pack environment variables to customize how you use your JBoss EAP S2I image. Table 5.1. Descriptions of custom Galleon feature pack environment variables Environment variable Description GALLEON_DIR=<path> Where <path> is a directory relative to the root directory of your application project. Your <path> directory contains your optional Galleon custom content, such as the settings.xml file and local Maven repository cache. This cache contains the custom Galleon feature packs. Directory defaults to galleon . GALLEON_CUSTOM_FEATURE_PACKS_MAVEN_REPO=<path> <path> is the absolute path to a Maven local repository directory that contains custom feature packs. Directory defaults to galleon/repository . GALLEON_PROVISION_FEATURE_PACKS=<list_of_galleon_feature_packs> Where <list_of_galleon_feature_packs> is a comma-separated list of your custom Galleon feature packs identified by Maven coordinates. The listed feature packs must be compatible with the version of the JBoss EAP 7.4 server present in the builder image. You can use the GALLEON_PROVISION_LAYERS environment variable to set the Galleon layers, which were defined by your custom feature packs, for your server.
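For reference, the following sketch shows one way to pass these environment variables when starting an S2I build with the oc CLI, using the jaxrs-server,sso layer list from the beginning of this chapter. The builder image, Git repository URL, and application name are placeholders, not values taken from this document.

oc new-build <eap_builder_image>~<git_repository_url> \
  --name=trimmed-eap-app \
  --env=GALLEON_PROVISION_LAYERS=jaxrs-server,sso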
[ "mvn archetype:generate -DarchetypeGroupId=org.codehaus.mojo.archetypes -DarchetypeArtifactId=pom-root -DgroupId=org.example.mariadb -DartifactId=mariadb-galleon-pack -DinteractiveMode=false", "<repositories> <repository> <id>redhat-ga</id> <name>Redhat GA</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories>", "<dependencies> <dependency> <groupId>org.jboss.eap</groupId> <artifactId>wildfly-ee-galleon-pack</artifactId> <version>7.4.4.GA-redhat-00011</version> <type>zip</type> </dependency> <dependency> <groupId>org.mariadb.jdbc</groupId> <artifactId>mariadb-java-client</artifactId> <version>3.0.5</version> </dependency> </dependencies>", "<build> <plugins> <plugin> <groupId>org.wildfly.galleon-plugins</groupId> <artifactId>wildfly-galleon-maven-plugin</artifactId> <version>5.2.11.Final</version> <executions> <execution> <id>mariadb-galleon-pack-build</id> <goals> <goal>build-user-feature-pack</goal> </goals> <phase>compile</phase> </execution> </executions> </plugin> </plugins> </build>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <module name=\"org.mariadb.jdbc\" xmlns=\"urn:jboss:module:1.8\"> <resources> <artifact name=\"USD{org.mariadb.jdbc:mariadb-java-client}\"/> 1 </resources> <dependencies> 2 <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>", "<?xml version=\"1.0\" ?> <layer-spec xmlns=\"urn:jboss:galleon:layer-spec:1.0\" name=\"mariadb-driver\"> <feature spec=\"subsystem.datasources\"> 1 <feature spec=\"subsystem.datasources.jdbc-driver\"> <param name=\"driver-name\" value=\"mariadb\"/> <param name=\"jdbc-driver\" value=\"mariadb\"/> <param name=\"driver-xa-datasource-class-name\" value=\"org.mariadb.jdbc.MariaDbDataSource\"/> <param name=\"driver-module-name\" value=\"org.mariadb.jdbc\"/> </feature> </feature> <packages> 2 <package name=\"org.mariadb.jdbc\"/> </packages> </layer-spec>", "<?xml version=\"1.0\" ?> <layer-spec xmlns=\"urn:jboss:galleon:layer-spec:1.0\" name=\"mariadb-datasource\"> <dependencies> <layer name=\"mariadb-driver\"/> 1 </dependencies> <feature spec=\"subsystem.datasources.data-source\"> 2 <param name=\"data-source\" value=\"MariaDBDS\"/> <param name=\"jndi-name\" value=\"java:jboss/datasources/USD{env.MARIADB_DATASOURCE:MariaDBDS}\"/> <param name=\"connection-url\" value=\"jdbc:mariadb://USD{env.MARIADB_HOST:localhost}:USD{env.MARIADB_PORT:3306}/USD{env.MARIADB_DATABASE}\"/> 3 <param name=\"driver-name\" value=\"mariadb\"/> <param name=\"user-name\" value=\"USD{env.MARIADB_USER}\"/> 4 <param name=\"password\" value=\"USD{env.MARIADB_PASSWORD}\"/> </feature> </layer-spec>", "mvn clean install", "new-app -e MYSQL_USER=admin -e MYSQL_PASSWORD=admin -e MYSQL_DATABASE=mariadb registry.redhat.io/rhscl/mariadb-101-rhel7", "create secret generic mariadb-galleon-pack --from-file=target/mariadb-galleon-pack-1.0-SNAPSHOT.zip", "new-build jboss-eap74-openjdk11-openshift:latest~https://github.com/jboss-developer/jboss-eap-quickstarts#EAP_7.4.0.GA --context-dir=todo-backend --env=GALLEON_PROVISION_FEATURE_PACKS=\"org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT\" \\ 1 --env=GALLEON_PROVISION_LAYERS=\"jaxrs-server,mariadb-datasource\" \\ 2 --env=GALLEON_CUSTOM_FEATURE_PACKS_MAVEN_REPO=\"/tmp/repo\" \\ 3 --env=MAVEN_ARGS_APPEND=\"-Dcom.redhat.xpaas.repo.jbossorg\" --build-secret=mariadb-galleon-pack:/tmp/repo/org/example/mariadb/mariadb-galleon-pack/1.0-SNAPSHOT \\ 4 --name=todos-app-build", "start-build todos-app-build", "new-app --name=todos-app todos-app-build 
--env=MARIADB_PORT=3306 --env=MARIADB_USER=admin --env=MARIADB_PASSWORD=admin --env=MARIADB_HOST=mariadb-101-rhel7 --env=MARIADB_DATABASE=mariadb --env=MARIADB_DATASOURCE=ToDos 1", "expose svc/todos-app", "curl -X POST http://USD(oc get route todos-app --template='{{ .spec.host }}') -H 'Content-Type: application/json' -d '{\"title\":\"todo1\"}'", "curl http://USD(oc get route todos-app --template='{{ .spec.host }}')", "<?xml version=\"1.0\" ?> <installation xmlns=\"urn:jboss:galleon:provisioning:3.0\"> <feature-pack location=\"eap-s2i@maven(org.jboss.universe:s2i-universe)\"> 1 <default-configs inherit=\"false\"/> 2 <packages inherit=\"false\"/> 3 </feature-pack> <feature-pack location=\"org.example.mariadb:mariadb-galleon-pack:1.0-SNAPSHOT\"> 4 <default-configs inherit=\"false\"/> <packages inherit=\"false\"/> </feature-pack> <config model=\"standalone\" name=\"standalone.xml\"> 5 <layers> <include name=\"jaxrs-server\"/> <include name=\"mariadb-datasource\"/> </layers> </config> <options> 6 <option name=\"optional-packages\" value=\"passive+\"/> </options> </installation>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_container_platform/capability-trimming-eap-foropenshift_default
Config APIs
Config APIs OpenShift Container Platform 4.17 Reference guide for config APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/config_apis/index
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_selinux/proc_providing-feedback-on-red-hat-documentation_using-selinux
Chapter 5. View OpenShift Data Foundation Topology
Chapter 5. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements together compose the storage cluster. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the main view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/viewing-odf-topology_rhodf
Chapter 51. Infinispan Embedded
Chapter 51. Infinispan Embedded Since Camel 2.13 Both producer and consumer are supported This component allows you to interact with the Infinispan distributed data grid / cache. Infinispan is an extremely scalable, highly available key / value data store and data grid platform written in Java. The camel-infinispan-embedded component includes the following features. Local Camel Consumer - Receives cache change notifications and sends them to be processed. This can be done synchronously or asynchronously, and is also supported with a replicated or distributed cache. Local Camel Producer - A producer creates and sends messages to an endpoint. The camel-infinispan producer uses GET , PUT , REMOVE , and CLEAR operations. The local producer is also supported with a replicated or distributed cache. The events are processed asynchronously. 51.1. Dependencies When using infinispan-embedded with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-embedded-starter</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 51.2. URI format The producer allows sending messages to a local Infinispan cache. The consumer allows listening for events from a local Infinispan cache. If no cache configuration is provided, an embedded cacheContainer is created directly in the component. 51.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 51.3.1. Configuring Component Options At the component level, you set general and shared configurations that are then inherited by the endpoints. It is the highest configuration level. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have pre-configured defaults that are commonly used, you may often need to configure only a few options on a component, or none at all. You can configure components using: the Component DSL . in a configuration file (application.properties, *.yaml files, etc). directly in the Java code. 51.3.2. Configuring Endpoint Options You usually spend more time setting up endpoints because they have many options. These options help you customize what you want the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from), as a producer (to), or both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type-safe way of configuring endpoints and data formats in Java. A good practice when configuring options is to use Property Placeholders . Property placeholders provide a few benefits: They help prevent using hardcoded URLs, port numbers, sensitive information, and other settings. They allow externalizing the configuration from the code. They help the code to become more flexible and reusable. The following two sections list all the options, firstly for the component followed by the endpoint. 51.4. Component Options The Infinispan Embedded component supports 20 options that are listed below. Name Description Default Type configuration (common) Component configuration. InfinispanEmbeddedConfiguration queryBuilder (common) Specifies the query builder. 
InfinispanQueryBuilder bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean clusteredListener (consumer) If true, the listener will be installed for the entire cluster. false boolean customListener (consumer) Returns the custom listener in use, if provided. InfinispanEmbeddedCustomListener eventTypes (consumer) Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, CACHE_ENTRY_EXPIRED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED, PERSISTENCE_AVAILABILITY_CHANGED. String sync (consumer) If true, the consumer will receive notifications synchronously. true boolean defaultValue (producer) Set a specific default value for some producer operations. Object key (producer) Set a specific key for producer operations. Object lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean oldValue (producer) Set a specific old value for some producer operations. Object operation (producer) The operation to perform. Enum values: * PUT * PUTASYNC * PUTALL * PUTALLASYNC * PUTIFABSENT * PUTIFABSENTASYNC * GET * GETORDEFAULT * CONTAINSKEY * CONTAINSVALUE * REMOVE * REMOVEASYNC * REPLACE * REPLACEASYNC * SIZE * CLEAR * CLEARASYNC * QUERY * STATS * COMPUTE * COMPUTEASYNC PUT InfinispanOperation value* (producer) Set a specific value for producer operations. Object autowiredEnabled (advanced) Whether auto-wiring is enabled. This is used for automatic auto-wiring options (the option must be marked as auto-wired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean cacheContainer (advanced) Autowired Specifies the cache Container to connect. EmbeddedCacheManager cacheContainerConfiguration (advanced) Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. Configuration configurationUri (advanced) An implementation specific URI for the CacheManager. String flags (advanced) A comma separated list of org.infinispan.context.Flag to be applied by default on each cache invocation. String remappingFunction (advanced) Set a specific remappingFunction to use in a compute operation. 
BiFunction resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String 51.5. Endpoint Options The Infinispan Embedded endpoint is configured using URI syntax. Following are the path and query parameters. 51.5.1. Path Parameters (1 parameters) Name Description Default Type cacheName (common) Required The name of the cache to use. Use current to use the existing cache name from the currently configured cached manager. Or use default for the default cache manager name. String 51.5.2. Query Parameters (20 parameters) Name Description Default Type queryBuilder (common) Specifies the query builder. InfinispanQueryBuilder clusteredListener (consumer) If true, the listener will be installed for the entire cluster. false boolean customListener (consumer) Returns the custom listener in use, if provided. InfinispanEmbeddedCustomListener eventTypes (consumer) Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, CACHE_ENTRY_EXPIRED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED, PERSISTENCE_AVAILABILITY_CHANGED. String sync (consumer) If true, the consumer will receive notifications synchronously. true boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: * InOnly * InOut * InOptionalOut ExchangePattern defaultValue (producer) Set a specific default value for some producer operations. Object key (producer) Set a specific key for producer operations. Object oldValue (producer) Set a specific old value for some producer operations. Object operation (producer) The operation to perform. Enum values: * PUT * PUTASYNC * PUTALL * PUTALLASYNC * PUTIFABSENT * PUTIFABSENTASYNC * GET * GETORDEFAULT * CONTAINSKEY * CONTAINSVALUE * REMOVE * REMOVEASYNC * REPLACE * REPLACEASYNC * SIZE * CLEAR * CLEARASYNC * QUERY * STATS * COMPUTE * COMPUTEASYNC PUT InfinispanOperation value (producer) Set a specific value for producer operations. Object lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean cacheContainer (advanced) Autowired Specifies the cache Container to connect. EmbeddedCacheManager cacheContainerConfiguration (advanced) Autowired The CacheContainer configuration. Used if the cacheContainer is not defined. Configuration configurationUri (advanced) An implementation specific URI for the CacheManager. String flags (advanced) A comma separated list of org.infinispan.context.Flag to be applied by default on each cache invocation. String remappingFunction (advanced) Set a specific remappingFunction to use in a compute operation. BiFunction resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String 51.6. Message Headers The Infinispan Embedded component supports 22 message headers that are listed below. Name Description Default Type CamelInfinispanEventType (consumer) Constant: EVENT_TYPE The type of the received event. String CamelInfinispanIsPre (consumer) Constant: IS_PRE true if the notification is before the event has occurred, false if after the event has occurred. boolean CamelInfinispanCacheName (common) Constant: CACHE_NAME The cache participating in the operation or event. String CamelInfinispanKey (common) Constant: KEY The key to perform the operation to or the key generating the event. Object CamelInfinispanValue (producer) Constant: VALUE The value to use for the operation. Object CamelInfinispanDefaultValue (producer) Constant: DEFAULT_VALUE The default value to use for a getOrDefault. Object CamelInfinispanOldValue (producer) Constant: OLD_VALUE The old value to use for a replace. Object CamelInfinispanMap (producer) Constant: MAP A Map to use in case of CamelInfinispanOperationPutAll operation. Map CamelInfinispanOperation (producer) Constant: OPERATION The operation to perform. Enum values: * PUT * PUTASYNC * PUTALL * PUTALLASYNC * PUTIFABSENT * PUTIFABSENTASYNC * GET * GETORDEFAULT * CONTAINSKEY * CONTAINSVALUE * REMOVE * REMOVEASYNC * REPLACE * REPLACEASYNC * SIZE * CLEAR * CLEARASYNC * QUERY * STATS * COMPUTE * COMPUTEASYNC InfinispanOperation CamelInfinispanOperationResult (producer) Constant: RESULT The name of the header whose value is the result. String CamelInfinispanOperationResultHeader (producer) Constant: RESULT_HEADER Store the operation result in a header instead of the message body. String CamelInfinispanLifespanTime (producer) Constant: LIFESPAN_TIME The Lifespan time of a value inside the cache. Negative values are interpreted as infinity. long CamelInfinispanTimeUnit (producer) Constant: LIFESPAN_TIME_UNIT The Time Unit of an entry Lifespan Time. 
Enum values: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS * MINUTES * HOURS * DAYS TimeUnit CamelInfinispanMaxIdleTime (producer) Constant: MAX_IDLE_TIME The maximum amount of time an entry is allowed to be idle for before it is considered as expired. long CamelInfinispanMaxIdleTimeUnit (producer) Constant: MAX_IDLE_TIME_UNIT The Time Unit of an entry Max Idle Time. Enum values: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS * MINUTES * HOURS * DAYS TimeUnit CamelInfinispanIgnoreReturnValues (consumer) Constant: IGNORE_RETURN_VALUES Signals that write operation's return value are ignored, so reading the existing value from a store or from a remote node is not necessary. false boolean CamelInfinispanEventData (consumer) Constant: EVENT_DATA The event data. Object CamelInfinispanQueryBuilder (producer) Constant: QUERY_BUILDER The QueryBuilder to use for QUERY command, if not present the command defaults to InifinispanConfiguration's one. InfinispanQueryBuilder CamelInfinispanCommandRetried (consumer) Constant: COMMAND_RETRIED This will be true if the write command that caused this had to be retried again due to a topology change. boolean CamelInfinispanEntryCreated (consumer) Constant: ENTRY_CREATED Indicates whether the cache entry modification event is the result of the cache entry being created. boolean CamelInfinispanOriginLocal (consumer) Constant: ORIGIN_LOCAL true if the call originated on the local cache instance; false if originated from a remote one. boolean CamelInfinispanCurrentState (consumer) Constant: CURRENT_STATE True if this event is generated from an existing entry as the listener has Listener. boolean 51.7. Camel Operations This section lists all available operations along with their header information. Table 51.1. Table 1. Put Operations Operation Name Description InfinispanOperation.PUT Puts a key/value pair in the cache, optionally with expiration InfinispanOperation.PUTASYNC Asynchronously puts a key/value pair in the cache, optionally with expiration InfinispanOperation.PUTIFABSENT Puts a key/value pair in the cache if it did not exist, optionally with expiration InfinispanOperation.PUTIFABSENTASYNC Asynchronously puts a key/value pair in the cache if it did not exist, optionally with expiration Required Headers : CamelInfinispanKey CamelInfinispanValue Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Result Header : CamelInfinispanOperationResult Table 51.2. Table 2. Put All Operations Operation Name Description InfinispanOperation.PUTALL Adds multiple entries to a cache, optionally with expiration CamelInfinispanOperation.PUTALLASYNC Asynchronously adds multiple entries to a cache, optionally with expiration Required Headers : CamelInfinispanMap Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Table 51.3. Table 3. Get Operations Operation Name Description InfinispanOperation.GET Retrieves the value associated with a specific key from the cache InfinispanOperation.GETORDEFAULT Retrieves the value, or default value, associated with a specific key from the cache Required Headers : CamelInfinispanKey Table 51.4. Table 4. Contains Key Operation Operation Name Description InfinispanOperation.CONTAINSKEY Determines whether a cache contains a specific key Required Headers CamelInfinispanKey Result Header CamelInfinispanOperationResult Table 51.5. Table 5. 
Contains Value Operation Operation Name Description InfinispanOperation.CONTAINSVALUE Determines whether a cache contains a specific value Required Headers : CamelInfinispanKey Table 51.6. Table 6. Remove Operations Operation Name Description InfinispanOperation.REMOVE Removes an entry from a cache, optionally only if the value matches a given one InfinispanOperation.REMOVEASYNC Asynchronously removes an entry from a cache, optionally only if the value matches a given one Required Headers : CamelInfinispanKey Optional Headers : CamelInfinispanValue Result Header : CamelInfinispanOperationResult Table 51.7. Table 7. Replace Operations Operation Name Description InfinispanOperation.REPLACE Conditionally replaces an entry in the cache, optionally with expiration InfinispanOperation.REPLACEASYNC Asynchronously conditionally replaces an entry in the cache, optionally with expiration Required Headers : CamelInfinispanKey CamelInfinispanValue CamelInfinispanOldValue Optional Headers : CamelInfinispanLifespanTime CamelInfinispanLifespanTimeUnit CamelInfinispanMaxIdleTime CamelInfinispanMaxIdleTimeUnit Result Header : CamelInfinispanOperationResult Table 51.8. Table 8. Clear Operations Operation Name Description InfinispanOperation.CLEAR Clears the cache InfinispanOperation.CLEARASYNC Asynchronously clears the cache Table 51.9. Table 9. Size Operation Operation Name Description InfinispanOperation.SIZE Returns the number of entries in the cache Result Header CamelInfinispanOperationResult Table 51.10. Table 10. Stats Operation Operation Name Description InfinispanOperation.STATS Returns statistics about the cache Result Header : CamelInfinispanOperationResult Table 51.11. Table 11. Query Operation Operation Name Description InfinispanOperation.QUERY Executes a query on the cache Required Headers : CamelInfinispanQueryBuilder Result Header : CamelInfinispanOperationResult Note Write methods like put(key, value) and remove(key) do not return the value by default. 51.8. Examples Put a key/value into a named cache: from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant("123") (2) .to("infinispan:myCacheName&cacheContainer=#cacheContainer"); (3) Set the operation to perform Set the key used to identify the element in the cache Use the configured cache manager cacheContainer from the registry to put an element to the cache named myCacheName It is possible to configure the lifetime and/or the idle time before the entry expires and gets evicted from the cache, as example. 
from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant("123") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT.constant(TimeUnit.MILLISECONDS.toString()) (2) .to("infinispan:myCacheName"); Set the lifespan of the entry Set the time unit for the lifespan Queries from("direct:start") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having("name").like("%abc%").build(); } }) .to("infinispan:myCacheName?cacheContainer=#cacheManager") ; Custom Listeners from("infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener") .to("mock:result"); The instance of myCustomListener must exist and Camel should be able to look it up from the Registry . Users are encouraged to extend the org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedCustomListener class and annotate the resulting class with @Listener which can be found in package org.infinispan.notifications . 51.9. Using the Infinispan based idempotent repository Java Example InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri("classpath:infinispan.xml") InfinispanEmbeddedIdempotentRepository repo = new InfinispanEmbeddedIdempotentRepository("idempotent"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from("direct:start") .idempotentConsumer(header("MessageID"), repo) (3) .to("mock:result"); } }); Configure the cache Configure the repository bean Set the repository to the route XML Example <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedIdempotentRepository" destroy-method="stop"> <constructor-arg value="idempotent"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration"> <property name="configurationUrl" value="classpath:infinispan.xml"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <idempotentConsumer idempotentRepository="infinispanRepo"> (3) <header>MessageID</header> <to uri="mock:result" /> </idempotentConsumer> </route> </camelContext> Set the name of the cache that will be used by the repository Configure the repository bean Set the repository to the route 51.10. 
Using the Infinispan based aggregation repository Java Example InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri("classpath:infinispan.xml") InfinispanEmbeddedAggregationRepository repo = new InfinispanEmbeddedAggregationRepository("aggregation"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from("direct:start") .aggregate(header("MessageID")) .completionSize(3) .aggregationRepository(repo) (3) .aggregationStrategy("myStrategy") .to("mock:result"); } }); Configure the cache Create the repository bean Set the repository to the route XML Example <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedAggregationRepository" destroy-method="stop"> <constructor-arg value="aggregation"/> (1) <property name="configuration"> (2) <bean class="org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration"> <property name="configurationUrl" value="classpath:infinispan.xml"/> </bean> </property> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start" /> <aggregate aggregationStrategy="myStrategy" completionSize="3" aggregationRepository="infinispanRepo"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri="mock:result"/> </aggregate> </route> </camelContext> Set the name of the cache that will be used by the repository Configure the repository bean Set the repository to the route Note With the release of Infinispan 11, it is required to set the encoding configuration on any cache created. This is critical for consuming events too. For more information have a look at Data Encoding and MediaTypes in the official Infinispan documentation. 51.11. Spring Boot Auto-Configuration The component supports 17 options that are listed below. Name Description Default Type camel.component.infinispan-embedded.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.infinispan-embedded.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.infinispan-embedded.cache-container Specifies the cache Container to connect. The option is a org.infinispan.manager.EmbeddedCacheManager type. EmbeddedCacheManager camel.component.infinispan-embedded.cache-container-configuration The CacheContainer configuration. Used if the cacheContainer is not defined. The option is a org.infinispan.configuration.cache.Configuration type. Configuration camel.component.infinispan-embedded.clustered-listener If true, the listener will be installed for the entire cluster. false Boolean camel.component.infinispan-embedded.configuration Component configuration. 
The option is a org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration type. InfinispanEmbeddedConfiguration camel.component.infinispan-embedded.configuration-uri An implementation specific URI for the CacheManager. String camel.component.infinispan-embedded.custom-listener Returns the custom listener in use, if provided. The option is a org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedCustomListener type. InfinispanEmbeddedCustomListener camel.component.infinispan-embedded.enabled Whether to enable auto configuration of the infinispan-embedded component. This is enabled by default. Boolean camel.component.infinispan-embedded.event-types Specifies the set of event types to register by the consumer.Multiple event can be separated by comma. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, CACHE_ENTRY_EXPIRED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED, PERSISTENCE_AVAILABILITY_CHANGED. String camel.component.infinispan-embedded.flags A comma separated list of org.infinispan.context.Flag to be applied by default on each cache invocation. String camel.component.infinispan-embedded.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.infinispan-embedded.operation The operation to perform. InfinispanOperation camel.component.infinispan-embedded.query-builder Specifies the query builder. The option is a org.apache.camel.component.infinispan.InfinispanQueryBuilder type. InfinispanQueryBuilder camel.component.infinispan-embedded.remapping-function Set a specific remappingFunction to use in a compute operation. The option is a java.util.function.BiFunction type. BiFunction camel.component.infinispan-embedded.result-header Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader. String camel.component.infinispan-embedded.sync If true, the consumer will receive notifications synchronously. true Boolean
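Most of these options can also be set declaratively instead of in code. The following application.yaml is a minimal sketch only: it assumes an infinispan.xml file on the application classpath and uses an illustrative header name ( operationResult ) that is not required by the component, so adjust both to your own setup.
# Spring Boot auto-configuration for the camel-infinispan-embedded component
camel:
  component:
    infinispan-embedded:
      # URI of the Infinispan configuration used to build the embedded cache manager
      # (assumes an infinispan.xml file on the application classpath)
      configuration-uri: classpath:infinispan.xml
      # Receive cache notifications synchronously (the default, shown here for clarity)
      sync: true
      # Store each operation result in this header instead of replacing the message body
      # ("operationResult" is an illustrative name, not a required value)
      result-header: operationResult
With these properties in place, endpoints such as infinispan-embedded:myCacheName pick up the shared component configuration without any extra wiring in the routes.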
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-infinispan-embedded-starter</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "infinispan-embedded://cacheName?[options]", "infinispan-embedded:cacheName", "from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) (1) .setHeader(InfinispanConstants.KEY).constant(\"123\") (2) .to(\"infinispan:myCacheName&cacheContainer=#cacheContainer\"); (3)", "from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant(\"123\") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) (1) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT.constant(TimeUnit.MILLISECONDS.toString()) (2) .to(\"infinispan:myCacheName\");", "from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION, InfinispanConstants.QUERY) .setHeader(InfinispanConstants.QUERY_BUILDER, new InfinispanQueryBuilder() { @Override public Query build(QueryFactory<Query> qf) { return qf.from(User.class).having(\"name\").like(\"%abc%\").build(); } }) .to(\"infinispan:myCacheName?cacheContainer=#cacheManager\") ;", "from(\"infinispan://?cacheContainer=#cacheManager&customListener=#myCustomListener\") .to(\"mock:result\");", "InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri(\"classpath:infinispan.xml\") InfinispanEmbeddedIdempotentRepository repo = new InfinispanEmbeddedIdempotentRepository(\"idempotent\"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from(\"direct:start\") .idempotentConsumer(header(\"MessageID\"), repo) (3) .to(\"mock:result\"); } });", "<bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedIdempotentRepository\" destroy-method=\"stop\"> <constructor-arg value=\"idempotent\"/> (1) <property name=\"configuration\"> (2) <bean class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration\"> <property name=\"configurationUrl\" value=\"classpath:infinispan.xml\"/> </bean> </property> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\" /> <idempotentConsumer idempotentRepository=\"infinispanRepo\"> (3) <header>MessageID</header> <to uri=\"mock:result\" /> </idempotentConsumer> </route> </camelContext>", "InfinispanEmbeddedConfiguration conf = new InfinispanEmbeddedConfiguration(); (1) conf.setConfigurationUri(\"classpath:infinispan.xml\") InfinispanEmbeddedAggregationRepository repo = new InfinispanEmbeddedAggregationRepository(\"aggregation\"); (2) repo.setConfiguration(conf); context.addRoutes(new RouteBuilder() { @Override public void configure() { from(\"direct:start\") .aggregate(header(\"MessageID\")) .completionSize(3) .aggregationRepository(repo) (3) .aggregationStrategy(\"myStrategy\") .to(\"mock:result\"); } });", "<bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedAggregationRepository\" destroy-method=\"stop\"> <constructor-arg value=\"aggregation\"/> (1) <property name=\"configuration\"> (2) <bean class=\"org.apache.camel.component.infinispan.embedded.InfinispanEmbeddedConfiguration\"> <property name=\"configurationUrl\" value=\"classpath:infinispan.xml\"/> </bean> </property> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from 
uri=\"direct:start\" /> <aggregate aggregationStrategy=\"myStrategy\" completionSize=\"3\" aggregationRepository=\"infinispanRepo\"> (3) <correlationExpression> <header>MessageID</header> </correlationExpression> <to uri=\"mock:result\"/> </aggregate> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-infinispan-embedded-component
Console APIs
Console APIs OpenShift Container Platform 4.13 Reference guide for console APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/console_apis/index
Migrating Red Hat Update Infrastructure
Migrating Red Hat Update Infrastructure Red Hat Update Infrastructure 4 Migrating to Red Hat Update Infrastructure 4 and upgrading to the latest version of Red Hat Update Infrastructure Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_update_infrastructure/4/html/migrating_red_hat_update_infrastructure/index
Chapter 3. Project storage and build options with Red Hat Decision Manager
Chapter 3. Project storage and build options with Red Hat Decision Manager As you develop a Red Hat Decision Manager project, you need to be able to track the versions of your project with a version-controlled repository, manage your project assets in a stable environment, and build your project for testing and deployment. You can use Business Central for all of these tasks, or use a combination of Business Central and external tools and repositories. Red Hat Decision Manager supports Git repositories for project version control, Apache Maven for project management, and a variety of Maven-based, Java-based, or custom-tool-based build options. The following options are the main methods for Red Hat Decision Manager project versioning, storage, and building: Table 3.1. Project version control options (Git) Versioning option Description Documentation Business Central Git VFS Business Central contains a built-in Git Virtual File System (VFS) that stores all processes, rules, and other artifacts that you create in the authoring environment. Git is a distributed version control system that implements revisions as commit objects. When you commit your changes into a repository, a new commit object in the Git repository is created. When you create a project in Business Central, the project is added to the Git repository connected to Business Central. NA External Git repository If you have Red Hat Decision Manager projects in Git repositories outside of Business Central, you can import them into Red Hat Decision Manager spaces and use Git hooks to synchronize the internal and external Git repositories. Managing projects in Business Central Table 3.2. Project management options (Maven) Management option Description Documentation Business Central Maven repository Business Central contains a built-in Maven repository that organizes and builds project assets that you create in the authoring environment. Maven is a distributed build-automation tool that uses repositories to store Java libraries, plug-ins, and other build artifacts. When building projects and archetypes, Maven dynamically retrieves Java libraries and Maven plug-ins from local or remote repositories to promote shared dependencies across projects. Note For a production environment, consider using an external Maven repository configured with Business Central. NA External Maven repository If you have Red Hat Decision Manager projects in an external Maven repository, such as Nexus or Artifactory, you can create a settings.xml file with connection details and add that file path to the kie.maven.settings.custom property in your project standalone-full.xml file. Maven Settings Reference Packaging and deploying an Red Hat Decision Manager project Table 3.3. Project build options Build option Description Documentation Business Central (KJAR) Business Central builds Red Hat Decision Manager projects stored in either the built-in Maven repository or a configured external Maven repository. Projects in Business Central are packaged automatically as knowledge JAR (KJAR) files with all components needed for deployment when you build the projects. Packaging and deploying an Red Hat Decision Manager project Standalone Maven project (KJAR) If you have a standalone Red Hat Decision Manager Maven project outside of Business Central, you can edit the project pom.xml file to package your project as a KJAR file, and then add a kmodule.xml file with the KIE base and KIE session configurations needed to build the project. 
Packaging and deploying an Red Hat Decision Manager project Embedded Java application (KJAR) If you have an embedded Java application from which you want to build your Red Hat Decision Manager project, you can use a KieModuleModel instance to programmatically create a kmodule.xml file with the KIE base and KIE session configurations, and then add all resources in your project to the KIE virtual file system KieFileSystem to build the project. Packaging and deploying an Red Hat Decision Manager project CI/CD tool (KJAR) If you use a tool for continuous integration and continuous delivery (CI/CD), you can configure the tool set to integrate with your Red Hat Decision Manager Git repositories to build a specified project. Ensure that your projects are packaged and built as KJAR files to ensure optimal deployment. NA S2I in OpenShift (container image) If you use Red Hat Decision Manager on Red Hat OpenShift Container Platform, you can build your Red Hat Decision Manager projects as KJAR files in the typical way or use Source-to-Image (S2I) to build your projects as container images. S2I is a framework and a tool that allows you to write images that use the application source code as an input and produce a new image that runs the assembled application as an output. The main advantage of using the S2I tool for building reproducible container images is the ease of use for developers. The Red Hat Decision Manager images build the KJAR files as S2I automatically, using the source from a Git repository that you can specify. You do not need to create scripts or manage an S2I build. For the S2I concept: Images in the Red Hat OpenShift Container Platform product documentation. For the operator-based deployment process: Deploying an Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 4 using Operators . In the KIE Server settings, add a KIE Server instance and then click Set Immutable server configuration to configure the source Git repository for an S2I deployment.
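For the operator-based S2I path described above, the KIE Server source build is typically declared on the KieApp custom resource. The following YAML is an illustrative sketch only: the environment name, repository URL, context directory, and artifact coordinates ( com.example:my-kjar:1.0.0 ) are placeholders, and field names can vary between operator versions, so verify them against the CRD in your cluster before applying the resource.
apiVersion: app.kiegroup.org/v2
kind: KieApp
metadata:
  name: rhdm-immutable-sample              # placeholder name for the deployment
spec:
  environment: rhdm-production-immutable   # immutable environment, so KIE Server builds the KJAR at image-build time
  objects:
    servers:
      - build:
          # alias=GAV of the KJAR that the S2I build deploys to this KIE Server (placeholder coordinates)
          kieServerContainerDeployment: my-kjar=com.example:my-kjar:1.0.0
          gitSource:
            uri: https://github.com/example/my-kjar-repo   # placeholder Git repository
            reference: main
            contextDir: my-kjar                            # placeholder path to the project in the repository
When you apply a resource along these lines, the operator runs an S2I build of the KJAR from the Git source and deploys the resulting immutable KIE Server image.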
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_decision_manager/project-storage-version-build-options-ref_decision-management-architecture
Chapter 1. Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator
Chapter 1. Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator You can install Red Hat Developer Hub on OpenShift Container Platform by using the Red Hat Developer Hub Operator in the OpenShift Container Platform console. 1.1. Installing the Red Hat Developer Hub Operator As an administrator, you can install the Red Hat Developer Hub Operator. Authorized users can use the Operator to install Red Hat Developer Hub on the following platforms: Red Hat OpenShift Container Platform (OpenShift Container Platform) Amazon Elastic Kubernetes Service (EKS) Microsoft Azure Kubernetes Service (AKS) For more information on OpenShift Container Platform supported versions, see the Red Hat Developer Hub Life Cycle . Containers are available for the following CPU architectures: AMD64 and Intel 64 ( x86_64 ) Prerequisites You are logged in as an administrator on the OpenShift Container Platform web console. You have configured the appropriate roles and permissions within your project to create or access an application. For more information, see the Red Hat OpenShift Container Platform documentation on Building applications . Important For enhanced security, better control over the Operator lifecycle, and preventing potential privilege escalation, install the Red Hat Developer Hub Operator in a dedicated default rhdh-operator namespace. You can restrict other users' access to the Operator resources through role bindings or cluster role bindings. You can also install the Operator in another namespace by creating the necessary resources, such as an Operator group. For more information, see Installing global Operators in custom namespaces . However, if the Red Hat Developer Hub Operator shares a namespace with other Operators, then it shares the same update policy as well, preventing the customization of the update policy. For example, if one Operator is set to manual updates, the Red Hat Developer Hub Operator update policy is also set to manual. For more information, see Colocation of Operators in a namespace . Procedure In the Administrator perspective of the OpenShift Container Platform web console, click Operators > OperatorHub . In the Filter by keyword box, enter Developer Hub and click the Red Hat Developer Hub Operator card. On the Red Hat Developer Hub Operator page, click Install . On the Install Operator page, use the Update channel drop-down menu to select the update channel that you want to use: The fast channel provides y-stream (x.y) and z-stream (x.y.z) updates, for example, updating from version 1.1 to 1.2, or from 1.1.0 to 1.1.1. Important The fast channel includes all of the updates available for a particular version. Any update might introduce unexpected changes in your Red Hat Developer Hub deployment. Check the release notes for details about any potentially breaking changes. The fast-1.1 channel only provides z-stream updates, for example, updating from version 1.1.1 to 1.1.2. If you want to update the Red Hat Developer Hub y-version in the future, for example, updating from 1.1 to 1.2, you must switch to the fast channel manually. On the Install Operator page, choose the Update approval strategy for the Operator: If you choose the Automatic option, the Operator is updated without requiring manual confirmation. If you choose the Manual option, a notification opens when a new update is released in the update channel. The update must be manually approved by an administrator before installation can begin. Click Install . 
Verification To view the installed Red Hat Developer Hub Operator, click View Operator . Additional resources Deploying Red Hat Developer Hub on OpenShift Container Platform with the Operator Installing from OperatorHub using the web console 1.2. Deploying Red Hat Developer Hub on OpenShift Container Platform with the Operator As a developer, you can deploy a Red Hat Developer Hub instance on OpenShift Container Platform by using the Developer Catalog in the Red Hat OpenShift Container Platform web console. This deployment method uses the Red Hat Developer Hub Operator. Prerequisites A cluster administrator has installed the Red Hat Developer Hub Operator. For more information, see Section 1.1, "Installing the Red Hat Developer Hub Operator" . You have added a custom configuration file to OpenShift Container Platform. For more information, see Adding a custom configuration file to OpenShift Container Platform . Procedure Create a project in OpenShift Container Platform for your Red Hat Developer Hub instance, or select an existing project. Tip For more information about creating a project in OpenShift Container Platform, see Creating a project by using the web console in the Red Hat OpenShift Container Platform documentation. From the Developer perspective on the OpenShift Container Platform web console, click +Add . From the Developer Catalog panel, click Operator Backed . In the Filter by keyword box, enter Developer Hub and click the Red Hat Developer Hub card. Click Create . Add custom configurations for the Red Hat Developer Hub instance. On the Create Backstage page, click Create Verification After the pods are ready, you can access the Red Hat Developer Hub platform by opening the URL. Confirm that the pods are ready by clicking the pod in the Topology view and confirming the Status in the Details panel. The pod status is Active when the pod is ready. From the Topology view, click the Open URL icon on the Developer Hub pod. Additional resources OpenShift Container Platform - Building applications overview
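If you prefer to drive the Operator from the command line instead of the Developer Catalog, you can create the instance by applying a Backstage custom resource. The following manifest is a minimal sketch: the my-rhdh project and the app-config-rhdh ConfigMap are assumed examples, and the apiVersion can differ between Operator versions, so verify the served version with oc api-resources before applying it.
apiVersion: rhdh.redhat.com/v1alpha1   # verify the served version in your cluster
kind: Backstage
metadata:
  name: developer-hub                  # assumed name for the Developer Hub instance
  namespace: my-rhdh                   # assumed project created for the instance
spec:
  application:
    appConfig:
      configMaps:
        - name: app-config-rhdh        # assumed ConfigMap holding your custom app-config file
Apply the manifest with oc apply -f <file>.yaml and watch the pods in the same project until they are ready, as described in the verification steps above.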
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_red_hat_developer_hub_on_openshift_container_platform/assembly-install-rhdh-ocp-operator
Chapter 7. Virtual machines
Chapter 7. Virtual machines 7.1. Creating VMs from Red Hat images 7.1.1. Creating virtual machines from Red Hat images overview Red Hat images are golden images . They are published as container disks in a secure registry. The Containerized Data Importer (CDI) polls and imports the container disks into your cluster and stores them in the openshift-virtualization-os-images project as snapshots or persistent volume claims (PVCs). Red Hat images are automatically updated. You can disable and re-enable automatic updates for these images. See Managing Red Hat boot source updates . Cluster administrators can enable automatic subscription for Red Hat Enterprise Linux (RHEL) virtual machines in the OpenShift Virtualization web console . You can create virtual machines (VMs) from operating system images provided by Red Hat by using one of the following methods: Creating a VM from a template by using the web console Creating a VM from an instance type by using the web console Creating a VM from a VirtualMachine manifest by using the command line Important Do not create VMs in the default openshift-* namespaces. Instead, create a new namespace or use an existing namespace without the openshift prefix. 7.1.1.1. About golden images A golden image is a preconfigured snapshot of a virtual machine (VM) that you can use as a resource to deploy new VMs. For example, you can use golden images to provision the same system environment consistently and deploy systems more quickly and efficiently. 7.1.1.1.1. How do golden images work? Golden images are created by installing and configuring an operating system and software applications on a reference machine or virtual machine. This includes setting up the system, installing required drivers, applying patches and updates, and configuring specific options and preferences. After the golden image is created, it is saved as a template or image file that can be replicated and deployed across multiple clusters. The golden image can be updated by its maintainer periodically to incorporate necessary software updates and patches, ensuring that the image remains up to date and secure, and newly created VMs are based on this updated image. 7.1.1.1.2. Red Hat implementation of golden images Red Hat publishes golden images as container disks in the registry for versions of Red Hat Enterprise Linux (RHEL). Container disks are virtual machine images that are stored as a container image in a container image registry. Any published image will automatically be made available in connected clusters after the installation of OpenShift Virtualization. After the images are available in a cluster, they are ready to use to create VMs. 7.1.1.2. About VM boot sources Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications. Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source. Provided boot sources are updated automatically to the latest version of the operating system. For auto-updated boot sources, persistent volume claims (PVCs) and volume snapshots are created with the cluster's default storage class. 
If you select a different default storage class after configuration, you must delete the existing boot sources in the cluster namespace that are configured with the default storage class. 7.1.2. Creating virtual machines from instance types You can simplify virtual machine (VM) creation by using instance types, whether you use the OpenShift Container Platform web console or the CLI to create VMs. 7.1.2.1. About instance types An instance type is a reusable object where you can define resources and characteristics to apply to new VMs. You can define custom instance types or use the variety that are included when you install OpenShift Virtualization. To create a new instance type, you must first create a manifest, either manually or by using the virtctl CLI tool. You then create the instance type object by applying the manifest to your cluster. OpenShift Virtualization provides two CRDs for configuring instance types: A namespaced object: VirtualMachineInstancetype A cluster-wide object: VirtualMachineClusterInstancetype These objects use the same VirtualMachineInstancetypeSpec . 7.1.2.1.1. Required attributes When you configure an instance type, you must define the cpu and memory attributes. Other attributes are optional. Note When you create a VM from an instance type, you cannot override any parameters defined in the instance type. Because instance types require defined CPU and memory attributes, OpenShift Virtualization always rejects additional requests for these resources when creating a VM from an instance type. You can manually create an instance type manifest. For example: Example YAML file with required fields apiVersion: instancetype.kubevirt.io/v1beta1 kind: VirtualMachineInstancetype metadata: name: example-instancetype spec: cpu: guest: 1 1 memory: guest: 128Mi 2 1 Required. Specifies the number of vCPUs to allocate to the guest. 2 Required. Specifies an amount of memory to allocate to the guest. You can create an instance type manifest by using the virtctl CLI utility. For example: Example virtctl command with required fields USD virtctl create instancetype --cpu 2 --memory 256Mi where: --cpu <value> Specifies the number of vCPUs to allocate to the guest. Required. --memory <value> Specifies an amount of memory to allocate to the guest. Required. Tip You can immediately create the object from the new manifest by running the following command: USD virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f - 7.1.2.1.2. Optional attributes In addition to the required cpu and memory attributes, you can include the following optional attributes in the VirtualMachineInstancetypeSpec : annotations List annotations to apply to the VM. gpus List vGPUs for passthrough. hostDevices List host devices for passthrough. ioThreadsPolicy Define an IO threads policy for managing dedicated disk access. launchSecurity Configure Secure Encrypted Virtualization (SEV). nodeSelector Specify node selectors to control the nodes where this VM is scheduled. schedulerName Define a custom scheduler to use for this VM instead of the default scheduler. 7.1.2.2. Pre-defined instance types OpenShift Virtualization includes a set of pre-defined instance types called common-instancetypes . Some are specialized for specific workloads and others are workload-agnostic. These instance type resources are named according to their series, version, and size. The size value follows the . delimiter and ranges from nano to 8xlarge . Table 7.1. 
common-instancetypes series comparison Use case Series Characteristics vCPU to memory ratio Example resource Universal U Burstable CPU performance 1:4 u1.medium 1 vCPUs 4 Gi memory Overcommitted O Overcommitted memory Burstable CPU performance 1:4 o1.small 1 vCPU 2Gi memory Compute-exclusive CX Hugepages Dedicated CPU Isolated emulator threads vNUMA 1:2 cx1.2xlarge 8 vCPUs 16Gi memory NVIDIA GPU GN For VMs that use GPUs provided by the NVIDIA GPU Operator Has predefined GPUs Burstable CPU performance 1:4 gn1.8xlarge 32 vCPUs 128Gi memory Memory-intensive M Hugepages Burstable CPU performance 1:8 m1.large 2 vCPUs 16Gi memory Network-intensive N Hugepages Dedicated CPU Isolated emulator threads Requires nodes capable of running DPDK workloads 1:2 n1.medium 4 vCPUs 4Gi memory 7.1.2.3. Creating manifests by using the virtctl tool You can use the virtctl CLI utility to simplify creating manifests for VMs, VM instance types, and VM preferences. For more information, see VM manifest creation commands . If you have a VirtualMachine manifest, you can create a VM from the command line . 7.1.2.4. Creating a VM from an instance type by using the web console You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM. Procedure In the web console, navigate to Virtualization Catalog and click the InstanceTypes tab. Select either of the following options: Select a bootable volume. Note The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list. Click Add volume to upload a new volume or use an existing persistent volume claim (PVC), volume snapshot, or data source. Then click Save . Click an instance type tile and select the resource size appropriate for your workload. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the public SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 7.1.3. Creating virtual machines from templates You can create virtual machines (VMs) from Red Hat templates by using the OpenShift Container Platform web console. 7.1.3.1. About VM templates Boot sources You can expedite VM creation by using templates that have an available boot source. Templates with a boot source are labeled Available boot source if they do not have a custom label. Templates without a boot source are labeled Boot source required . See Creating virtual machines from custom images . Customization You can customize the disk source and VM parameters before you start the VM: See storage volume types and storage fields for details about disk source settings. 
See the Overview , YAML , and Configuration tab documentation for details about VM settings. Note If you copy a VM template with all its labels and annotations, your version of the template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove this designation. See Customizing a VM template by using the web console . Single-node OpenShift Due to differences in storage behavior, some templates are incompatible with single-node OpenShift. To ensure compatibility, do not set the evictionStrategy field for templates or VMs that use data volumes or storage profiles. 7.1.3.2. Creating a VM from a template You can create a virtual machine (VM) from a template with an available boot source by using the OpenShift Container Platform web console. Optional: You can customize template or VM parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. Procedure Navigate to Virtualization Catalog in the web console. Click Boot source available to filter templates with boot sources. The catalog displays the default templates. Click All Items to view all available templates for your filters. Click a template tile to view its details. Click Quick create VirtualMachine to create a VM from the template. Optional: Customize the template or VM parameters: Click Customize VirtualMachine . Expand Storage or Optional parameters to edit data source settings. Click Customize VirtualMachine parameters . The Customize and create VirtualMachine pane displays the Overview , YAML , Scheduling , Environment , Network interfaces , Disks , Scripts , and Metadata tabs. Edit the parameters that must be set before the VM boots, such as cloud-init or a static SSH key. Click Create VirtualMachine . The VirtualMachine details page displays the provisioning status. 7.1.3.2.1. Storage volume types Table 7.2. Storage volume types Type Description ephemeral A local copy-on-write (COW) image that uses a network volume as a read-only backing store. The backing volume must be a PersistentVolumeClaim . The ephemeral image is created when the virtual machine starts and stores all writes locally. The ephemeral image is discarded when the virtual machine is stopped, restarted, or deleted. The backing volume (PVC) is not mutated in any way. persistentVolumeClaim Attaches an available PV to a virtual machine. Attaching a PV allows for the virtual machine data to persist between sessions. Importing an existing virtual machine disk into a PVC by using CDI and attaching the PVC to a virtual machine instance is the recommended method for importing existing virtual machines into OpenShift Container Platform. There are some requirements for the disk to be used within a PVC. dataVolume Data volumes build on the persistentVolumeClaim disk type by managing the process of preparing the virtual machine disk via an import, clone, or upload operation. VMs that use this volume type are guaranteed not to start until the volume is ready. Specify type: dataVolume or type: "" . If you specify any other value for type , such as persistentVolumeClaim , a warning is displayed, and the virtual machine does not start. cloudInitNoCloud Attaches a disk that contains the referenced cloud-init NoCloud data source, providing user data and metadata to the virtual machine. A cloud-init installation is required inside the virtual machine disk. containerDisk References an image, such as a virtual machine disk, that is stored in the container image registry. 
The image is pulled from the registry and attached to the virtual machine as a disk when the virtual machine is launched. A containerDisk volume is not limited to a single virtual machine and is useful for creating large numbers of virtual machine clones that do not require persistent storage. Only RAW and QCOW2 formats are supported disk types for the container image registry. QCOW2 is recommended for reduced image size. Note A containerDisk volume is ephemeral. It is discarded when the virtual machine is stopped, restarted, or deleted. A containerDisk volume is useful for read-only file systems such as CD-ROMs or for disposable virtual machines. emptyDisk Creates an additional sparse QCOW2 disk that is tied to the life-cycle of the virtual machine interface. The data survives guest-initiated reboots in the virtual machine but is discarded when the virtual machine stops or is restarted from the web console. The empty disk is used to store application dependencies and data that otherwise exceeds the limited temporary file system of an ephemeral disk. The disk capacity size must also be provided. 7.1.3.2.2. Storage fields Field Description Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI . Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. If you do not specify these parameters, the system uses the default storage profile values. Parameter Option Parameter description Volume Mode Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This mode is required for live migration. 7.1.3.2.3. Customizing a VM template by using the web console You can customize an existing virtual machine (VM) template by modifying the VM or template parameters, such as data sources, cloud-init, or SSH keys, before you start the VM. If you customize a template by copying it and including all of its labels and annotations, the customized template is marked as deprecated when a new version of the Scheduling, Scale, and Performance (SSP) Operator is deployed. You can remove the deprecated designation from the customized template. Procedure Navigate to Virtualization Templates in the web console. 
From the list of VM templates, click the template marked as deprecated. Click Edit to the pencil icon beside Labels . Remove the following two labels: template.kubevirt.io/type: "base" template.kubevirt.io/version: "version" Click Save . Click the pencil icon beside the number of existing Annotations . Remove the following annotation: template.kubevirt.io/deprecated Click Save . 7.1.3.2.4. Creating a custom VM template in the web console You create a virtual machine template by editing a YAML file example in the OpenShift Container Platform web console. Procedure In the web console, click Virtualization Templates in the side menu. Optional: Use the Project drop-down menu to change the project associated with the new template. All templates are saved to the openshift project by default. Click Create Template . Specify the template parameters by editing the YAML file. Click Create . The template is displayed on the Templates page. Optional: Click Download to download and save the YAML file. 7.1.4. Creating virtual machines from the command line You can create virtual machines (VMs) from the command line by editing or creating a VirtualMachine manifest. You can simplify VM configuration by using an instance type in your VM manifest. Note You can also create VMs from instance types by using the web console . 7.1.4.1. Creating manifests by using the virtctl tool You can use the virtctl CLI utility to simplify creating manifests for VMs, VM instance types, and VM preferences. For more information, see VM manifest creation commands . 7.1.4.2. Creating a VM from a VirtualMachine manifest You can create a virtual machine (VM) from a VirtualMachine manifest. Procedure Edit the VirtualMachine manifest for your VM. The following example configures a Red Hat Enterprise Linux (RHEL) VM: Note This example manifest does not configure VM authentication. Example manifest for a RHEL VM apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-9-minimal spec: dataVolumeTemplates: - metadata: name: rhel-9-minimal-volume spec: sourceRef: kind: DataSource name: rhel9 1 namespace: openshift-virtualization-os-images 2 storage: {} instancetype: name: u1.medium 3 preference: name: rhel.9 4 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: rhel-9-minimal-volume name: rootdisk 1 The rhel9 golden image is used to install RHEL 9 as the guest operating system. 2 Golden images are stored in the openshift-virtualization-os-images namespace. 3 The u1.medium instance type requests 1 vCPU and 4Gi memory for the VM. These resource values cannot be overridden within the VM. 4 The rhel.9 preference specifies additional attributes that support the RHEL 9 guest operating system. Create a virtual machine by using the manifest file: USD oc create -f <vm_manifest_file>.yaml Optional: Start the virtual machine: USD virtctl start <vm_name> -n <namespace> steps Configuring SSH access to virtual machines 7.2. Creating VMs from custom images 7.2.1. Creating virtual machines from custom images overview You can create virtual machines (VMs) from custom operating system images by using one of the following methods: Importing the image as a container disk from a registry . Optional: You can enable auto updates for your container disks. See Managing automatic boot source updates for details. Importing the image from a web page . Uploading the image from a local machine . Cloning a persistent volume claim (PVC) that contains the image . 
The Containerized Data Importer (CDI) imports the image into a PVC by using a data volume. You add the PVC to the VM by using the OpenShift Container Platform web console or command line. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. You must also install VirtIO drivers on Windows VMs. The QEMU guest agent is included with Red Hat images. 7.2.2. Creating VMs by using container disks You can create virtual machines (VMs) by using container disks built from operating system images. You can enable auto updates for your container disks. See Managing automatic boot source updates for details. Important If the container disks are large, the I/O traffic might increase and cause worker nodes to be unavailable. You can perform the following tasks to resolve this issue: Pruning DeploymentConfig objects . Configuring garbage collection . You create a VM from a container disk by performing the following steps: Build an operating system image into a container disk and upload it to your container registry . If your container registry does not have TLS, configure your environment to disable TLS for your registry . Create a VM with the container disk as the disk source by using the web console or the command line . Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.2.1. Building and uploading a container disk You can build a virtual machine (VM) image into a container disk and upload it to a registry. The size of a container disk is limited by the maximum layer size of the registry where the container disk is hosted. Note For Red Hat Quay , you can change the maximum layer size by editing the YAML configuration file that is created when Red Hat Quay is first deployed. Prerequisites You must have podman installed. You must have a QCOW2 or RAW image file. Procedure Create a Dockerfile to build the VM image into a container image. The VM image must be owned by QEMU, which has a UID of 107 , and placed in the /disk/ directory inside the container. Permissions for the /disk/ directory must then be set to 0440 . The following example uses the Red Hat Universal Base Image (UBI) to handle these configuration changes in the first stage, and uses the minimal scratch image in the second stage to store the result: USD cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF 1 Where <vm_image> is the image in either QCOW2 or RAW format. If you use a remote image, replace <vm_image>.qcow2 with the complete URL. Build and tag the container: USD podman build -t <registry>/<container_disk_name>:latest . Push the container image to the registry: USD podman push <registry>/<container_disk_name>:latest 7.2.2.2. Disabling TLS for a container registry You can disable TLS (transport layer security) for one or more container registries by editing the insecureRegistries field of the HyperConverged custom resource. Prerequisites Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add a list of insecure registries to the spec.storageImport.insecureRegistries field. 
Example HyperConverged custom resource apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - "private-registry-example-1:5000" - "private-registry-example-2:5000" 1 Replace the examples in this list with valid registry hostnames. 7.2.2.3. Creating a VM from a container disk by using the web console You can create a virtual machine (VM) by importing a container disk from a container registry by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select Registry (creates PVC) from the Disk source list. Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2 Set the disk size. Click . Click Create VirtualMachine . 7.2.2.4. Creating a VM from a container disk by using the command line You can create a virtual machine (VM) from a container disk by using the command line. When the virtual machine (VM) is created, the data volume with the container disk is imported into persistent storage. Prerequisites You must have access credentials for the container registry that contains the container disk. Procedure If the container registry requires authentication, create a Secret manifest, specifying the credentials, and save it as a data-source-secret.yaml file: apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 1 secretKey: "" 2 1 Specify the Base64-encoded key ID or user name. 2 Specify the Base64-encoded secret key or password. Apply the Secret manifest by running the following command: USD oc apply -f data-source-secret.yaml If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM: USD oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2 1 Specify the config map name. 2 Specify the path to the CA certificate. Edit the VirtualMachine manifest and save it as a vm-fedora-datavolume.yaml file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: registry: url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" 5 secretRef: data-source-secret 6 certConfigMap: tls-certs 7 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: "" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {} 1 Specify the name of the VM. 2 Specify the name of the data volume. 3 Specify the size of the storage requested for the data volume. 4 Optional: If you do not specify a storage class, the default storage class is used. 5 Specify the URL of the container registry. 6 Optional: Specify the secret name if you created a secret for the container registry access credentials. 
7 Optional: Specify a CA certificate config map. Create the VM by running the following command: USD oc create -f vm-fedora-datavolume.yaml The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the VM. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the container disk from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv fedora-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. Verify that provisioning is complete and that the VM has started by accessing its serial console: USD virtctl console vm-fedora-datavolume 7.2.3. Creating VMs by importing images from web pages You can create virtual machines (VMs) by importing operating system images from web pages. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.3.1. Creating a VM from an image on a web page by using the web console You can create a virtual machine (VM) by importing an image from a web page by using the OpenShift Container Platform web console. Prerequisites You must have access to the web page that contains the image. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select URL (creates PVC) from the Disk source list. Enter the image URL. Example: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2 Set the disk size. Click . Click Create VirtualMachine . 7.2.3.2. Creating a VM from an image on a web page by using the command line You can create a virtual machine (VM) from an image on a web page by using the command line. When the virtual machine (VM) is created, the data volume with the image is imported into persistent storage. Prerequisites You must have access credentials for the web page that contains the image. Procedure If the web page requires authentication, create a Secret manifest, specifying the credentials, and save it as a data-source-secret.yaml file: apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: "" 1 secretKey: "" 2 1 Specify the Base64-encoded key ID or user name. 2 Specify the Base64-encoded secret key or password. Apply the Secret manifest by running the following command: USD oc apply -f data-source-secret.yaml If the VM must communicate with servers that use self-signed certificates or certificates that are not signed by the system CA bundle, create a config map in the same namespace as the VM: USD oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2 1 Specify the config map name. 2 Specify the path to the CA certificate. 
Edit the VirtualMachine manifest and save it as a vm-fedora-datavolume.yaml file: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: http: url: "https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2" 5 registry: url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest" 6 secretRef: data-source-secret 7 certConfigMap: tls-certs 8 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: "" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {} 1 Specify the name of the VM. 2 Specify the name of the data volume. 3 Specify the size of the storage requested for the data volume. 4 Optional: If you do not specify a storage class, the default storage class is used. 5 6 Specify the URL of the web page. 7 Optional: Specify the secret name if you created a secret for the web page access credentials. 8 Optional: Specify a CA certificate config map. Create the VM by running the following command: USD oc create -f vm-fedora-datavolume.yaml The oc create command creates the data volume and the VM. The CDI controller creates an underlying PVC with the correct annotation and the import process begins. When the import is complete, the data volume status changes to Succeeded . You can start the VM. Data volume provisioning happens in the background, so there is no need to monitor the process. Verification The importer pod downloads the image from the specified URL and stores it on the provisioned persistent volume. View the status of the importer pod by running the following command: USD oc get pods Monitor the data volume until its status is Succeeded by running the following command: USD oc describe dv fedora-dv 1 1 Specify the data volume name that you defined in the VirtualMachine manifest. Verify that provisioning is complete and that the VM has started by accessing its serial console: USD virtctl console vm-fedora-datavolume 7.2.4. Creating VMs by uploading images You can create virtual machines (VMs) by uploading operating system images from your local machine. You can create a Windows VM by uploading a Windows image to a PVC. Then you clone the PVC when you create the VM. Important You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. You must also install VirtIO drivers on Windows VMs. 7.2.4.1. Creating a VM from an uploaded image by using the web console You can create a virtual machine (VM) from an uploaded operating system image by using the OpenShift Container Platform web console. Prerequisites You must have an IMG , ISO , or QCOW2 image file. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select Upload (Upload a new file to a PVC) from the Disk source list. Browse to the image on your local machine and set the disk size. Click Customize VirtualMachine . Click Create VirtualMachine . 7.2.4.2. 
Creating a Windows VM You can create a Windows virtual machine (VM) by uploading a Windows image to a persistent volume claim (PVC) and then cloning the PVC when you create a VM by using the OpenShift Container Platform web console. Prerequisites You created a Windows installation DVD or USB with the Windows Media Creation Tool. See Create Windows 10 installation media in the Microsoft documentation. You created an autounattend.xml answer file. See Answer files (unattend.xml) in the Microsoft documentation. Procedure Upload the Windows image as a new PVC: Navigate to Storage PersistentVolumeClaims in the web console. Click Create PersistentVolumeClaim With Data upload form . Browse to the Windows image and select it. Enter the PVC name, select the storage class and size and then click Upload . The Windows image is uploaded to a PVC. Configure a new VM by cloning the uploaded PVC: Navigate to Virtualization Catalog . Select a Windows template tile and click Customize VirtualMachine . Select Clone (clone PVC) from the Disk source list. Select the PVC project, the Windows image PVC, and the disk size. Apply the answer file to the VM: Click Customize VirtualMachine parameters . On the Sysprep section of the Scripts tab, click Edit . Browse to the autounattend.xml answer file and click Save . Set the run strategy of the VM: Clear Start this VirtualMachine after creation so that the VM does not start immediately. Click Create VirtualMachine . On the YAML tab, replace running:false with runStrategy: RerunOnFailure and click Save . Click the options menu and select Start . The VM boots from the sysprep disk containing the autounattend.xml answer file. 7.2.4.2.1. Generalizing a Windows VM image You can generalize a Windows operating system image to remove all system-specific configuration data before you use the image to create a new virtual machine (VM). Before generalizing the VM, you must ensure the sysprep tool cannot detect an answer file after the unattended Windows installation. Prerequisites A running Windows VM with the QEMU guest agent installed. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines . Select a Windows VM to open the VirtualMachine details page. Click Configuration Disks . Click the Options menu beside the sysprep disk and select Detach . Click Detach . Rename C:\Windows\Panther\unattend.xml to avoid detection by the sysprep tool. Start the sysprep program by running the following command: %WINDIR%\System32\Sysprep\sysprep.exe /generalize /shutdown /oobe /mode:vm After the sysprep tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs. You can now specialize the VM. 7.2.4.2.2. Specializing a Windows VM image Specializing a Windows virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM. Prerequisites You must have a generalized Windows disk image. You must create an unattend.xml answer file. See the Microsoft documentation for details. Procedure In the OpenShift Container Platform console, click Virtualization Catalog . Select a Windows template and click Customize VirtualMachine . Select PVC (clone PVC) from the Disk source list. Select the PVC project and PVC name of the generalized Windows image. Click Customize VirtualMachine parameters . Click the Scripts tab. In the Sysprep section, click Edit , browse to the unattend.xml answer file, and click Save . Click Create VirtualMachine . 
During the initial boot, Windows uses the unattend.xml answer file to specialize the VM. The VM is now ready to use. Additional resources for creating Windows VMs Microsoft, Sysprep (Generalize) a Windows installation Microsoft, generalize Microsoft, specialize 7.2.4.3. Creating a VM from an uploaded image by using the command line You can upload an operating system image by using the virtctl command line tool. You can use an existing data volume or create a new data volume for the image. Prerequisites You must have an ISO , IMG , or QCOW2 operating system image file. For best performance, compress the image file by using the virt-sparsify tool or the xz or gzip utilities. You must have virtctl installed. The client machine must be configured to trust the OpenShift Container Platform router's certificate. Procedure Upload the image by running the virtctl image-upload command: USD virtctl image-upload dv <datavolume_name> \ 1 --size=<datavolume_size> \ 2 --image-path=</path/to/image> \ 3 1 The name of the data volume. 2 The size of the data volume. For example: --size=500Mi , --size=1G 3 The file path of the image. Note If you do not want to create a new data volume, omit the --size parameter and include the --no-create flag. When uploading a disk image to a PVC, the PVC size must be larger than the size of the uncompressed virtual disk. To allow insecure server connections when using HTTPS, use the --insecure parameter. When you use the --insecure flag, the authenticity of the upload endpoint is not verified. Optional. To verify that a data volume was created, view all data volumes by running the following command: USD oc get dvs 7.2.5. Installing the QEMU guest agent and VirtIO drivers The QEMU guest agent is a daemon that runs on the virtual machine (VM) and passes information to the host about the VM, users, file systems, and secondary networks. You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. 7.2.5.1. Installing the QEMU guest agent 7.2.5.1.1. Installing the QEMU guest agent on a Linux VM The qemu-guest-agent is widely available and available by default in Red Hat Enterprise Linux (RHEL) virtual machines (VMs). Install the agent and start the service. Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure Log in to the VM by using a console or SSH. Install the QEMU guest agent by running the following command: USD yum install -y qemu-guest-agent Ensure the service is persistent and start it: USD systemctl enable --now qemu-guest-agent Verification Run the following command to verify that AgentConnected is listed in the VM spec: USD oc get vm <vm_name> 7.2.5.1.2. Installing the QEMU guest agent on a Windows VM For Windows virtual machines (VMs), the QEMU guest agent is included in the VirtIO drivers. You can install the drivers during a Windows installation or on an existing Windows VM. 
Note To create snapshots of an online (Running state) VM with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system as much as possible, depending on the system workload. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. The conditions under which the snapshot was taken are reflected in the snapshot indications that are displayed in the web console or CLI. Procedure In the Windows guest operating system, use the File Explorer to navigate to the guest-agent directory in the virtio-win CD drive. Run the qemu-ga-x86_64.msi installer. Verification Obtain a list of network services by running the following command: USD net start Verify that the output contains the QEMU Guest Agent . 7.2.5.2. Installing VirtIO drivers on Windows VMs VirtIO drivers are paravirtualized device drivers required for Microsoft Windows virtual machines (VMs) to run in OpenShift Virtualization. The drivers are shipped with the rest of the images and do not require a separate download. The container-native-virtualization/virtio-win container disk must be attached to the VM as a SATA CD drive to enable driver installation. You can install VirtIO drivers during Windows installation or added to an existing Windows installation. After the drivers are installed, the container-native-virtualization/virtio-win container disk can be removed from the VM. Table 7.3. Supported drivers Driver name Hardware ID Description viostor VEN_1AF4&DEV_1001 VEN_1AF4&DEV_1042 The block driver. Sometimes labeled as an SCSI Controller in the Other devices group. viorng VEN_1AF4&DEV_1005 VEN_1AF4&DEV_1044 The entropy source driver. Sometimes labeled as a PCI Device in the Other devices group. NetKVM VEN_1AF4&DEV_1000 VEN_1AF4&DEV_1041 The network driver. Sometimes labeled as an Ethernet Controller in the Other devices group. Available only if a VirtIO NIC is configured. 7.2.5.2.1. Attaching VirtIO container disk to Windows VMs during installation You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM. Procedure When creating a Windows VM from a template, click Customize VirtualMachine . Select Mount Windows drivers disk . Click the Customize VirtualMachine parameters . Click Create VirtualMachine . After the VM is created, the virtio-win SATA CD disk will be attached to the VM. 7.2.5.2.2. Attaching VirtIO container disk to an existing Windows VM You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM. Procedure Navigate to the existing Windows VM, and click Actions Stop . Go to VM Details Configuration Disks and click Add disk . Add windows-driver-disk from container source, set the Type to CD-ROM , and then set the Interface to SATA . Click Save . Start the VM, and connect to a graphical console. 7.2.5.2.3. Installing VirtIO drivers during Windows installation You can install the VirtIO drivers while installing Windows on a virtual machine (VM). Note This procedure uses a generic approach to the Windows installation and the installation method might differ between versions of Windows. See the documentation for the version of Windows that you are installing. Prerequisites A storage device containing the virtio drivers must be attached to the VM. 
Procedure In the Windows operating system, use the File Explorer to navigate to the virtio-win CD drive. Double-click the drive to run the appropriate installer for your VM. For a 64-bit vCPU, select the virtio-win-gt-x64 installer. 32-bit vCPUs are no longer supported. Optional: During the Custom Setup step of the installer, select the device drivers you want to install. The recommended driver set is selected by default. After the installation is complete, select Finish . Reboot the VM. Verification Open the system disk on the PC. This is typically C: . Navigate to Program Files Virtio-Win . If the Virtio-Win directory is present and contains a sub-directory for each driver, the installation was successful. 7.2.5.2.4. Installing VirtIO drivers from a SATA CD drive on an existing Windows VM You can install the VirtIO drivers from a SATA CD drive on an existing Windows virtual machine (VM). Note This procedure uses a generic approach to adding drivers to Windows. See the installation documentation for your version of Windows for specific installation steps. Prerequisites A storage device containing the virtio drivers must be attached to the VM as a SATA CD drive. Procedure Start the VM and connect to a graphical console. Log in to a Windows user session. Open Device Manager and expand Other devices to list any Unknown device . Open the Device Properties to identify the unknown device. Right-click the device and select Properties . Click the Details tab and select Hardware Ids in the Property list. Compare the Value for the Hardware Ids with the supported VirtIO drivers. Right-click the device and select Update Driver Software . Click Browse my computer for driver software and browse to the attached SATA CD drive, where the VirtIO drivers are located. The drivers are arranged hierarchically according to their driver type, operating system, and CPU architecture. Click to install the driver. Repeat this process for all the necessary VirtIO drivers. After the driver installs, click Close to close the window. Reboot the VM to complete the driver installation. 7.2.5.2.5. Installing VirtIO drivers from a container disk added as a SATA CD drive You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive. Tip Downloading the container-native-virtualization/virtio-win container disk from the Red Hat Ecosystem Catalog is not mandatory, because the container disk is downloaded from the Red Hat registry if it not already present in the cluster. However, downloading reduces the installation time. Prerequisites You must have access to the Red Hat registry or to the downloaded container-native-virtualization/virtio-win container disk in a restricted environment. Procedure Add the container-native-virtualization/virtio-win container disk as a CD drive by editing the VirtualMachine manifest: # ... spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk 1 OpenShift Virtualization boots the VM disks in the order defined in the VirtualMachine manifest. You can either define other VM disks that boot before the container-native-virtualization/virtio-win container disk or use the optional bootOrder parameter to ensure the VM boots from the correct disk. If you configure the boot order for a disk, you must configure the boot order for the other disks. 
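For example, a minimal sketch of a disks stanza that keeps the VM booting from its operating system disk while the driver CD stays attached; the rootdisk name is an illustrative assumption and must match a disk that is actually defined in your manifest:

spec:
  domain:
    devices:
      disks:
      - name: rootdisk              # assumed OS disk, defined elsewhere in the manifest; boots first
        bootOrder: 1
        disk:
          bus: virtio
      - name: virtiocontainerdisk   # the virtio-win driver disk from the example above
        bootOrder: 2                # attached, but never selected as the boot device
        cdrom:
          bus: sata

Because both disks carry an explicit bootOrder value, the boot sequence no longer depends on the order in which the disks appear in the manifest.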
Apply the changes: If the VM is not running, run the following command: USD virtctl start <vm> -n <namespace> If the VM is running, reboot the VM or run the following command: USD oc apply -f <vm.yaml> After the VM has started, install the VirtIO drivers from the SATA CD drive. 7.2.5.3. Updating VirtIO drivers 7.2.5.3.1. Updating VirtIO drivers on a Windows VM Update the virtio drivers on a Windows virtual machine (VM) by using the Windows Update service. Prerequisites The cluster must be connected to the internet. Disconnected clusters cannot reach the Windows Update service. Procedure In the Windows Guest operating system, click the Windows key and select Settings . Navigate to Windows Update Advanced Options Optional Updates . Install all updates from Red Hat, Inc. . Reboot the VM. Verification On the Windows VM, navigate to the Device Manager . Select a device. Select the Driver tab. Click Driver Details and confirm that the virtio driver details displays the correct version. 7.2.6. Cloning VMs You can clone virtual machines (VMs) or create new VMs from snapshots. Important Cloning of a VM with a vTPM device attached to it is not supported. 7.2.6.1. Cloning a VM by using the web console You can clone an existing VM by using the web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Click Actions . Select Clone . On the Clone VirtualMachine page, enter the name of the new VM. (Optional) Select the Start cloned VM checkbox to start the cloned VM. Click Clone . 7.2.6.2. Creating a VM from an existing snapshot by using the web console You can create a new VM by copying an existing snapshot. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Click the Snapshots tab. Click the actions menu for the snapshot you want to copy. Select Create VirtualMachine . Enter the name of the virtual machine. (Optional) Select the Start this VirtualMachine after creation checkbox to start the new virtual machine. Click Create . 7.2.6.3. Additional resources Creating VMs by cloning PVCs 7.2.7. Creating VMs by cloning PVCs You can create virtual machines (VMs) by cloning existing persistent volume claims (PVCs) with custom images. You must install the QEMU guest agent on VMs created from operating system images that are not provided by Red Hat. You clone a PVC by creating a data volume that references a source PVC. 7.2.7.1. About cloning When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the following Container Storage Interface (CSI) clone methods: CSI volume cloning Smart cloning Both CSI volume cloning and smart cloning methods are efficient, but they have certain requirements for use. If the requirements are not met, the CDI uses host-assisted cloning. Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods. 7.2.7.1.1. CSI volume cloning Container Storage Interface (CSI) cloning uses CSI driver features to more efficiently clone a source data volume. CSI volume cloning has the following requirements: The CSI driver that backs the storage class of the persistent volume claim (PVC) must support volume cloning. For provisioners not recognized by the CDI, the corresponding storage profile must have the cloneStrategy set to CSI Volume Cloning. The source and target PVCs must have the same storage class and volume mode. 
If you create the data volume, you must have permission to create the datavolumes/source resource in the source namespace. The source volume must not be in use. 7.2.7.1.2. Smart cloning When a Container Storage Interface (CSI) plugin with snapshot capabilities is available, the Containerized Data Importer (CDI) creates a persistent volume claim (PVC) from a snapshot, which then allows efficient cloning of additional PVCs. Smart cloning has the following requirements: A snapshot class associated with the storage class must exist. The source and target PVCs must have the same storage class and volume mode. If you create the data volume, you must have permission to create the datavolumes/source resource in the source namespace. The source volume must not be in use. 7.2.7.1.3. Host-assisted cloning When the requirements for neither Container Storage Interface (CSI) volume cloning nor smart cloning have been met, host-assisted cloning is used as a fallback method. Host-assisted cloning is less efficient than either of the two other cloning methods. Host-assisted cloning uses a source pod and a target pod to copy data from the source volume to the target volume. The target persistent volume claim (PVC) is annotated with the fallback reason that explains why host-assisted cloning has been used, and an event is created. Example PVC target annotation apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible cdi.kubevirt.io/clonePhase: Succeeded cdi.kubevirt.io/cloneType: copy Example event NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE test-ns 0s Warning IncompatibleVolumeModes persistentvolumeclaim/test-target The volume modes of source and target are incompatible 7.2.7.2. Creating a VM from a PVC by using the web console You can create a virtual machine (VM) by importing an image from a web page by using the OpenShift Container Platform web console. You can create a virtual machine (VM) by cloning a persistent volume claim (PVC) by using the OpenShift Container Platform web console. Prerequisites You must have access to the web page that contains the image. You must have access to the namespace that contains the source PVC. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile without an available boot source. Click Customize VirtualMachine . On the Customize template parameters page, expand Storage and select PVC (clone PVC) from the Disk source list. Enter the image URL. Example: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software Enter the container image URL. Example: https://mirror.arizona.edu/fedora/linux/releases/38/Cloud/x86_64/images/Fedora-Cloud-Base-38-1.6.x86_64.qcow2 Select the PVC project and the PVC name. Set the disk size. Click . Click Create VirtualMachine . 7.2.7.3. Creating a VM from a PVC by using the command line You can create a virtual machine (VM) by cloning the persistent volume claim (PVC) of an existing VM by using the command line. You can clone a PVC by using one of the following options: Cloning a PVC to a new data volume. This method creates a data volume whose lifecycle is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC. Cloning a PVC by creating a VirtualMachine manifest with a dataVolumeTemplates stanza. This method creates a data volume whose lifecycle is dependent on the original VM. 
Deleting the original VM deletes the cloned data volume and its associated PVC. 7.2.7.3.1. Cloning a PVC to a data volume You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk to a data volume by using the command line. You create a data volume that references the original source PVC. The lifecycle of the new data volume is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC. Cloning between different volume modes is supported for host-assisted cloning, such as cloning from a block persistent volume (PV) to a file system PV, as long as the source and target PVs belong to the kubevirt content type. Note Smart-cloning is faster and more efficient than host-assisted cloning because it uses snapshots to clone PVCs. Smart-cloning is supported by storage providers that support snapshots, such as Red Hat OpenShift Data Foundation. Cloning between different volume modes is not supported for smart-cloning. Prerequisites The VM with the source PVC must be powered down. If you clone a PVC to a different namespace, you must have permissions to create resources in the target namespace. Additional prerequisites for smart-cloning: Your storage provider must support snapshots. The source and target PVCs must have the same storage provider and volume mode. The value of the driver key of the VolumeSnapshotClass object must match the value of the provisioner key of the StorageClass object as shown in the following example: Example VolumeSnapshotClass object kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com # ... Example StorageClass object kind: StorageClass apiVersion: storage.k8s.io/v1 # ... provisioner: openshift-storage.rbd.csi.ceph.com Procedure Create a DataVolume manifest as shown in the following example: apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: "<source_namespace>" 2 name: "<my_vm_disk>" 3 storage: {} 1 Specify the name of the new data volume. 2 Specify the namespace of the source PVC. 3 Specify the name of the source PVC. Create the data volume by running the following command: USD oc create -f <datavolume>.yaml Note Data volumes prevent a VM from starting before the PVC is prepared. You can create a VM that references the new data volume while the PVC is being cloned. 7.2.7.3.2. Creating a VM from a cloned PVC by using a data volume template You can create a virtual machine (VM) that clones the persistent volume claim (PVC) of an existing VM by using a data volume template. This method creates a data volume whose lifecycle is dependent on the original VM. Deleting the original VM deletes the cloned data volume and its associated PVC. Prerequisites The VM with the source PVC must be powered down. Procedure Create a VirtualMachine manifest as shown in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: "<source_pvc>" 3 1 Specify the name of the VM. 
2 Specify the namespace of the source PVC. 3 Specify the name of the source PVC. Create the virtual machine with the PVC-cloned data volume: USD oc create -f <vm-clone-datavolumetemplate>.yaml 7.3. Connecting to virtual machine consoles You can connect to the following consoles to access running virtual machines (VMs): VNC console Serial console Desktop viewer for Windows VMs 7.3.1. Connecting to the VNC console You can connect to the VNC console of a virtual machine by using the OpenShift Container Platform web console or the virtctl command line tool. 7.3.1.1. Connecting to the VNC console by using the web console You can connect to the VNC console of a virtual machine (VM) by using the OpenShift Container Platform web console. Note If you connect to a Windows VM with a vGPU assigned as a mediated device, you can switch between the default display and the vGPU display. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Optional: To switch to the vGPU display of a Windows VM, select Ctl + Alt + 2 from the Send key list. Select Ctl + Alt + 1 from the Send key list to restore the default display. To end the console session, click outside the console pane and then click Disconnect . 7.3.1.2. Connecting to the VNC console by using virtctl You can use the virtctl command line tool to connect to the VNC console of a running virtual machine. Note If you run the virtctl vnc command on a remote machine over an SSH connection, you must forward the X session to your local machine by running the ssh command with the -X or -Y flags. Prerequisites You must install the virt-viewer package. Procedure Run the following command to start the console session: USD virtctl vnc <vm_name> If the connection fails, run the following command to collect troubleshooting information: USD virtctl vnc <vm_name> -v 4 7.3.1.3. Generating a temporary token for the VNC console To access the VNC of a virtual machine (VM), generate a temporary authentication bearer token for the Kubernetes API. Note Kubernetes also supports authentication using client certificates, instead of a bearer token, by modifying the curl command. Prerequisites A running VM with OpenShift Virtualization 4.14 or later. You have installed Scheduling, Scale, and Performance (SSP) Operator 4.14 or later. For more information, see "About the Scheduling, Scale, and Performance (SSP) Operator". Procedure Enable the feature gate in the HyperConverged ( HCO ) custom resource (CR): USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{"op": "replace", "path": "/spec/featureGates/deployVmConsoleProxy", "value": true}]' Generate a token by entering the following command: USD curl --header "Authorization: Bearer USD{TOKEN}" \ "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>" The <duration> parameter can be set in hours and minutes, with a minimum duration of 10 minutes. For example: 5h30m . If this parameter is not set, the token is valid for 10 minutes by default. Sample output: { "token": "eyJhb..." } Optional: Use the token provided in the output to create a variable: USD export VNC_TOKEN="<token>" You can now use the token to access the VNC console of a VM. 
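If the jq utility is available on your workstation (an assumption; it is not required by this procedure), you can request the token and store it in the variable in a single step, for example:

USD export VNC_TOKEN="USD(curl -s --header "Authorization: Bearer USD{TOKEN}" \
  "https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=1h" \
  | jq -r .token)"

The jq -r .token filter extracts only the token string from the JSON response, so nothing else ends up in the VNC_TOKEN variable.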
Verification Log in to the cluster by entering the following command: USD oc login --token USD{VNC_TOKEN} Test access to the VNC console of the VM by using the virtctl command: USD virtctl vnc <vm_name> -n <namespace> Warning It is currently not possible to revoke a specific token. To revoke a token, you must delete the service account that was used to create it. However, this also revokes all other tokens that were created by using the service account. Use the following command with caution: USD virtctl delete serviceaccount --namespace "<namespace>" "<vm_name>-vnc-access" 7.3.2. Connecting to the serial console You can connect to the serial console of a virtual machine by using the OpenShift Container Platform web console or the virtctl command line tool. Note Running concurrent VNC connections to a single virtual machine is not currently supported. 7.3.2.1. Connecting to the serial console by using the web console You can connect to the serial console of a virtual machine (VM) by using the OpenShift Container Platform web console. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background. Select Serial console from the console list. To end the console session, click outside the console pane and then click Disconnect . 7.3.2.2. Connecting to the serial console by using virtctl You can use the virtctl command line tool to connect to the serial console of a running virtual machine. Procedure Run the following command to start the console session: USD virtctl console <vm_name> Press Ctrl+] to end the console session. 7.3.3. Connecting to the desktop viewer You can connect to a Windows virtual machine (VM) by using the desktop viewer and the Remote Desktop Protocol (RDP). 7.3.3.1. Connecting to the desktop viewer by using the web console You can connect to the desktop viewer of a Windows virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You installed the QEMU guest agent on the Windows VM. You have an RDP client installed. Procedure On the Virtualization VirtualMachines page, click a VM to open the VirtualMachine details page. Click the Console tab. The VNC console session starts automatically. Click Disconnect to end the VNC console session. Otherwise, the VNC console session continues to run in the background. Select Desktop viewer from the console list. Click Create RDP Service to open the RDP Service dialog. Select Expose RDP Service and click Save to create a node port service. Click Launch Remote Desktop to download an .rdp file and launch the desktop viewer. 7.3.4. Additional resources About the Scheduling, Scale, and Performance (SSP) Operator 7.4. Configuring SSH access to virtual machines You can configure SSH access to virtual machines (VMs) by using the following methods: virtctl ssh command You create an SSH key pair, add the public key to a VM, and connect to the VM by running the virtctl ssh command with the private key. You can add public SSH keys to Red Hat Enterprise Linux (RHEL) 9 VMs at runtime or at first boot to VMs with guest operating systems that can be configured by using a cloud-init data source. virtctl port-forward command You add the virtctl port-foward command to your .ssh/config file and connect to the VM by using OpenSSH. 
Service You create a service, associate the service with the VM, and connect to the IP address and port exposed by the service. Secondary network You configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address. 7.4.1. Access configuration considerations Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements. Services provide excellent performance and are recommended for applications that are accessed from outside the cluster. If the internal cluster network cannot handle the traffic load, you can configure a secondary network. virtctl ssh and virtctl port-forwarding commands Simple to configure. Recommended for troubleshooting VMs. virtctl port-forwarding recommended for automated configuration of VMs with Ansible. Dynamic public SSH keys can be used to provision VMs with Ansible. Not recommended for high-traffic applications like Rsync or Remote Desktop Protocol because of the burden on the API server. The API server must be able to handle the traffic load. The clients must be able to access the API server. The clients must have access credentials for the cluster. Cluster IP service The internal cluster network must be able to handle the traffic load. The clients must be able to access an internal cluster IP address. Node port service The internal cluster network must be able to handle the traffic load. The clients must be able to access at least one node. Load balancer service A load balancer must be configured. Each node must be able to handle the traffic load of one or more load balancer services. Secondary network Excellent performance because traffic does not go through the internal cluster network. Allows a flexible approach to network topology. Guest operating system must be configured with appropriate security because the VM is exposed directly to the secondary network. If a VM is compromised, an intruder could gain access to the secondary network. 7.4.2. Using virtctl ssh You can add a public SSH key to a virtual machine (VM) and connect to the VM by running the virtctl ssh command. This method is simple to configure. However, it is not recommended for high traffic loads because it places a burden on the API server. 7.4.2.1. About static and dynamic SSH key management You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. Static SSH key management You can add a statically managed SSH key to a VM with a guest operating system that supports configuration by using a cloud-init data source. The key is added to the virtual machine (VM) at first boot. You can add the key by using one of the following methods: Add a key to a single VM when you create it by using the web console or the command line. Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project. Use cases As a VM owner, you can provision all your newly created VMs with a single key. Dynamic SSH key management You can enable dynamic SSH key management for a VM with Red Hat Enterprise Linux (RHEL) 9 installed. Afterwards, you can update the key during runtime. The key is added by the QEMU guest agent, which is installed with Red Hat boot sources. 
When dynamic key management is disabled, the default key management setting of a VM is determined by the image used for the VM. Use cases Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a Secret object that is applied to all VMs in a namespace. User access: You can add your access credentials to all VMs that you create and manage. Ansible provisioning: As an operations team member, you can create a single secret that contains all the keys used for Ansible provisioning. As a VM owner, you can create a VM and attach the keys used for Ansible provisioning. Key rotation: As a cluster administrator, you can rotate the Ansible provisioner keys used by VMs in a namespace. As a workload owner, you can rotate the key for the VMs that you manage. 7.4.2.2. Static key management You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. The key is added as a cloud-init data source when the VM boots for the first time. You can also add a public SSH key to a project when you create a VM by using the web console. The key is saved as a secret and is added automatically to all VMs that you create. Note If you add a secret to a project and then delete the VM, the secret is retained because it is a namespace resource. You must delete the secret manually. 7.4.2.2.1. Adding a key when creating a VM from a template You can add a statically managed public SSH key when you create a virtual machine (VM) by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data. Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Navigate to Virtualization Catalog in the web console. Click a template tile. The guest operating system must support configuration from a cloud-init data source. Click Customize VirtualMachine . Click . Click the Scripts tab. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . Click Create VirtualMachine . The VirtualMachine details page displays the progress of the VM creation. Verification Click the Scripts tab on the Configuration tab. The secret name is displayed in the Authorized SSH key section. 7.4.2.2.2. Adding a key when creating a VM from an instance type by using the web console You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM. You can add a statically managed SSH key when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data. 
Procedure In the web console, navigate to Virtualization Catalog and click the InstanceTypes tab. Select either of the following options: Select a bootable volume. Note The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list. Click Add volume to upload a new volume or use an existing persistent volume claim (PVC), volume snapshot, or data source. Then click Save . Click an instance type tile and select the resource size appropriate for your workload. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the public SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 7.4.2.2.3. Adding a key when creating a VM by using the command line You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM at first boot. The key is added to the VM as a cloud-init data source. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Create a manifest file for a VirtualMachine object and a Secret object: Example manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config user: cloud-user name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3 1 Specify the cloudInitNoCloud data source. 2 Specify the Secret object name. 3 Paste the public SSH key. Create the VirtualMachine and Secret objects by running the following command: USD oc create -f <manifest_file>.yaml Start the VM by running the following command: USD virtctl start vm example-vm -n example-namespace Verification Get the VM configuration: USD oc describe vm example-vm -n example-namespace Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys # ... 7.4.2.3. 
Dynamic key management You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console or the command line. Then, you can update the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. If you disable dynamic key injection, the VM inherits the key management method of the image from which it was created. 7.4.2.3.1. Enabling dynamic key injection when creating a VM from a template You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the OpenShift Container Platform web console. Then, you can update the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Navigate to Virtualization Catalog in the web console. Click the Red Hat Enterprise Linux 9 VM tile. Click Customize VirtualMachine . Click . Click the Scripts tab. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Set Dynamic SSH key injection to on. Click Save . Click Create VirtualMachine . The VirtualMachine details page displays the progress of the VM creation. Verification Click the Scripts tab on the Configuration tab. The secret name is displayed in the Authorized SSH key section. 7.4.2.3.2. Enabling dynamic key injection when creating a VM from an instance type by using the web console You can create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM. You can enable dynamic SSH key injection when you create a virtual machine (VM) from an instance type by using the OpenShift Container Platform web console. Then, you can add or revoke the key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed with RHEL 9. Procedure In the web console, navigate to Virtualization Catalog and click the InstanceTypes tab. Select either of the following options: Select a bootable volume. Note The bootable volume table lists only those volumes in the openshift-virtualization-os-images namespace that have the instancetype.kubevirt.io/default-preference label. Optional: Click the star icon to designate a bootable volume as a favorite. Starred bootable volumes appear first in the volume list. Click Add volume to upload a new volume or use an existing persistent volume claim (PVC), volume snapshot, or data source. Then click Save . Click an instance type tile and select the resource size appropriate for your workload. Click the Red Hat Enterprise Linux 9 VM tile. If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key in the VirtualMachine details section. Select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the public SSH key file or paste the file in the key field. Enter the secret name. 
Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Click Save . Set Dynamic SSH key injection in the VirtualMachine details section to on. Optional: Click View YAML & CLI to view the YAML file. Click CLI to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. Click Create VirtualMachine . After the VM is created, you can monitor the status on the VirtualMachine details page. 7.4.2.3.3. Enabling dynamic SSH key injection by using the web console You can enable dynamic key injection for a virtual machine (VM) by using the OpenShift Container Platform web console. Then, you can update the public SSH key at runtime. The key is added to the VM by the QEMU guest agent, which is installed with Red Hat Enterprise Linux (RHEL) 9. Prerequisites The guest operating system is RHEL 9. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. On the Configuration tab, click Scripts . If you have not already added a public SSH key to your project, click the edit icon beside Authorized SSH key and select one of the following options: Use existing : Select a secret from the secrets list. Add new : Browse to the SSH key file or paste the file in the key field. Enter the secret name. Optional: Select Automatically apply this key to any new VirtualMachine you create in this project . Set Dynamic SSH key injection to on. Click Save . 7.4.2.3.4. Enabling dynamic key injection by using the command line You can enable dynamic key injection for a virtual machine (VM) by using the command line. Then, you can update the public SSH key at runtime. Note Only Red Hat Enterprise Linux (RHEL) 9 supports dynamic key injection. The key is added to the VM by the QEMU guest agent, which is installed automatically with RHEL 9. Prerequisites You generated an SSH key pair by running the ssh-keygen command. Procedure Create a manifest file for a VirtualMachine object and a Secret object: Example manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: ["cloud-user"] source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3 1 Specify the cloudInitNoCloud data source. 2 Specify the Secret object name. 3 Paste the public SSH key. 
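Instead of base64-encoding the public key and pasting it into the Secret object by hand, you could create the secret directly from your key file — a sketch, assuming you keep the data key named key so that it matches the manifest above:

USD oc create secret generic authorized-keys \
  --from-file=key=<path_to_public_key_file> \
  -n example-namespace

If you create the secret this way, remove the Secret object from the manifest so that the oc create -f command in the next step creates only the VirtualMachine object.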
Create the VirtualMachine and Secret objects by running the following command: USD oc create -f <manifest_file>.yaml Start the VM by running the following command: USD virtctl start vm example-vm -n example-namespace Verification Get the VM configuration: USD oc describe vm example-vm -n example-namespace Example output apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: ["cloud-user"] source: secret: secretName: authorized-keys # ... 7.4.2.4. Using the virtctl ssh command You can access a running virtual machine (VM) by using the virtcl ssh command. Prerequisites You installed the virtctl command line tool. You added a public SSH key to the VM. You have an SSH client installed. The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Run the virtctl ssh command: USD virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1 1 Specify the namespace, user name, and the SSH private key. The default SSH key location is /home/user/.ssh . If you save the key in a different location, you must specify the path. Example USD virtctl -n my-namespace ssh cloud-user@example-vm -i my-key Tip You can copy the virtctl ssh command in the web console by selecting Copy SSH command from the options menu beside a VM on the VirtualMachines page . 7.4.3. Using the virtctl port-forward command You can use your local OpenSSH client and the virtctl port-forward command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs. This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server. Prerequisites You have installed the virtctl client. The virtual machine you want to access is running. The environment where you installed the virtctl tool has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Add the following text to the ~/.ssh/config file on your client machine: Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p Connect to the VM by running the following command: USD ssh <user>@vm/<vm_name>.<namespace> 7.4.4. Using a service for SSH access You can create a service for a virtual machine (VM) and connect to the IP address and port exposed by the service. Services provide excellent performance and are recommended for applications that are accessed from outside the cluster or within the cluster. Ingress traffic is protected by firewalls. If the cluster network cannot handle the traffic load, consider using a secondary network for VM access. 7.4.4.1. About services A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the NodePort and LoadBalancer types, exposure to the outside world. ClusterIP Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. 
When a client tries to connect to the service, the client's request is load balanced among available backends. ClusterIP is the default service type. NodePort Exposes the service on the same port of each selected node in the cluster. NodePort makes a port accessible from outside the cluster, as long as the node itself is externally accessible to the client. LoadBalancer Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP address to the service. Note For on-premise clusters, you can configure a load-balancing service by deploying the MetalLB Operator. 7.4.4.2. Creating a service You can create a service to expose a virtual machine (VM) by using the OpenShift Container Platform web console, virtctl command line tool, or a YAML file. 7.4.4.2.1. Enabling load balancer service creation by using the web console You can enable the creation of load balancer services for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You have configured a load balancer for the cluster. You are logged in as a user with the cluster-admin role. You created a network attachment definition for the network. Procedure Navigate to Virtualization Overview . On the Settings tab, click Cluster . Expand General settings and SSH configuration . Set SSH over LoadBalancer service to on. 7.4.4.2.2. Creating a service by using the web console You can create a node port or load balancer service for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You configured the cluster network to support either a load balancer or a node port. To create a load balancer service, you enabled the creation of load balancer services. Procedure Navigate to Virtualization VirtualMachines and select a virtual machine to view the VirtualMachine details page. On the Details tab, select SSH over LoadBalancer from the SSH service type list. Optional: Click the copy icon to copy the SSH command to your clipboard. Verification Check the Services pane on the Details tab to view the new service. 7.4.4.2.3. Creating a service by using virtctl You can create a service for a virtual machine (VM) by using the virtctl command line tool. Prerequisites You installed the virtctl command line tool. You configured the cluster network to support the service. The environment where you installed virtctl has the cluster permissions required to access the VM. For example, you ran oc login or you set the KUBECONFIG environment variable. Procedure Create a service by running the following command: USD virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1 1 Specify the ClusterIP , NodePort , or LoadBalancer service type. Example USD virtctl expose vm example-vm --name example-service --type NodePort --port 22 Verification Verify the service by running the following command: USD oc get service Next steps After you create a service with virtctl , you must add special: key to the spec.template.metadata.labels stanza of the VirtualMachine manifest. See Creating a service by using the command line . 7.4.4.2.4. Creating a service by using the command line You can create a service and associate it with a virtual machine (VM) by using the command line. Prerequisites You configured the cluster network to support the service.
Procedure Edit the VirtualMachine manifest to add the label for service creation: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 # ... 1 Add special: key to the spec.template.metadata.labels stanza. Note Labels on a virtual machine are passed through to the pod. The special: key label must match the label in the spec.selector attribute of the Service manifest. Save the VirtualMachine manifest file to apply your changes. Create a Service manifest to expose the VM: apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: # ... selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000 1 Specify the label that you added to the spec.template.metadata.labels stanza of the VirtualMachine manifest. 2 Specify ClusterIP , NodePort , or LoadBalancer . 3 Specifies a collection of network ports and protocols that you want to expose from the virtual machine. Save the Service manifest file. Create the service by running the following command: USD oc create -f example-service.yaml Restart the VM to apply the changes. Verification Query the Service object to verify that it is available: USD oc get service -n example-namespace 7.4.4.3. Connecting to a VM exposed by a service by using SSH You can connect to a virtual machine (VM) that is exposed by a service by using SSH. Prerequisites You created a service to expose the VM. You have an SSH client installed. You are logged in to the cluster. Procedure Run the following command to access the VM: USD ssh <user_name>@<ip_address> -p <port> 1 1 Specify the cluster IP for a cluster IP service, the node IP for a node port service, or the external IP address for a load balancer service. 7.4.5. Using a secondary network for SSH access You can configure a secondary network, attach a virtual machine (VM) to the secondary network interface, and connect to the DHCP-allocated IP address by using SSH. Important Secondary networks provide excellent performance because the traffic is not handled by the cluster network stack. However, the VMs are exposed directly to the secondary network and are not protected by firewalls. If a VM is compromised, an intruder could gain access to the secondary network. You must configure appropriate security within the operating system of the VM if you use this method. See the Multus and SR-IOV documentation in the OpenShift Virtualization Tuning & Scaling Guide for additional information about networking options. Prerequisites You configured a secondary network such as Linux bridge or SR-IOV . You created a network attachment definition for a Linux bridge network or the SR-IOV Network Operator created a network attachment definition when you created an SriovNetwork object. 7.4.5.1. Configuring a VM network interface by using the web console You can configure a network interface for a virtual machine (VM) by using the OpenShift Container Platform web console. Prerequisites You created a network attachment definition for the network. Procedure Navigate to Virtualization VirtualMachines . Click a VM to view the VirtualMachine details page. On the Configuration tab, click the Network interfaces tab. Click Add network interface . Enter the interface name and select the network attachment definition from the Network list. Click Save . Restart the VM to apply the changes. 7.4.5.2. 
Connecting to a VM attached to a secondary network by using SSH You can connect to a virtual machine (VM) attached to a secondary network by using SSH. Prerequisites You attached a VM to a secondary network with a DHCP server. You have an SSH client installed. Procedure Obtain the IP address of the VM by running the following command: USD oc describe vm <vm_name> -n <namespace> Identify the DHCP-allocated IP address of the secondary network interface in the command output. Connect to the VM by running the following command: USD ssh <user_name>@<ip_address> -i <ssh_key> Example USD ssh cloud-user@<ip_address> -i ~/.ssh/id_rsa_cloud-user Note You can also access a VM attached to a secondary network interface by using the cluster FQDN . 7.5. Editing virtual machines You can update a virtual machine (VM) configuration by using the OpenShift Container Platform web console. You can update the YAML file or the VirtualMachine details page . You can also edit a VM by using the command line. To edit a VM to configure disk sharing by using virtual disks or LUN, see Configuring shared volumes for virtual machines . 7.5.1. Editing a virtual machine by using the command line You can edit a virtual machine (VM) by using the command line. Prerequisites You installed the oc CLI. Procedure Obtain the virtual machine configuration by running the following command: USD oc edit vm <vm_name> Edit the YAML configuration. If you edit a running virtual machine, you need to do one of the following: Restart the virtual machine. If you saved the configuration to a manifest file, run the following command for the new configuration to take effect: USD oc apply -f <vm_name>.yaml -n <namespace> 7.5.2. Adding a disk to a virtual machine You can add a virtual disk to a virtual machine (VM) by using the OpenShift Container Platform web console. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. On the Disks tab, click Add disk . Specify the Source , Name , Size , Type , Interface , and Storage Class . Optional: You can enable preallocation if you use a blank disk source and require maximum write performance when creating data volumes. To do so, select the Enable preallocation checkbox. Optional: You can clear Apply optimized StorageProfile settings to change the Volume Mode and Access Mode for the virtual disk. If you do not specify these parameters, the system uses the default values from the kubevirt-storage-class-defaults config map. Click Add . Note If the VM is running, you must restart the VM to apply the change. 7.5.2.1. Storage fields Field Description Blank (creates PVC) Create an empty disk. Import via URL (creates PVC) Import content via URL (HTTP or HTTPS endpoint). Use an existing PVC Use a PVC that is already available in the cluster. Clone existing PVC (creates PVC) Select an existing PVC available in the cluster and clone it. Import via Registry (creates PVC) Import content via container registry. Container (ephemeral) Upload content from a container located in a registry accessible from the cluster. The container disk should be used only for read-only filesystems such as CD-ROMs or temporary virtual machines. Name Name of the disk. The name can contain lowercase letters ( a-z ), numbers ( 0-9 ), hyphens ( - ), and periods ( . ), up to a maximum of 253 characters. The first and last characters must be alphanumeric. The name must not contain uppercase letters, spaces, or special characters. Size Size of the disk in GiB. Type Type of disk. Example: Disk or CD-ROM Interface Type of disk device. Supported interfaces are virtIO , SATA , and SCSI .
Storage Class The storage class that is used to create the disk. Advanced storage settings The following advanced storage settings are optional and available for Blank , Import via URL , and Clone existing PVC disks. If you do not specify these parameters, the system uses the default storage profile values. Parameter Option Parameter description Volume Mode Filesystem Stores the virtual disk on a file system-based volume. Block Stores the virtual disk directly on the block volume. Only use Block if the underlying storage supports it. Access Mode ReadWriteOnce (RWO) Volume can be mounted as read-write by a single node. ReadWriteMany (RWX) Volume can be mounted as read-write by many nodes at one time. Note This mode is required for live migration. 7.5.3. Adding a secret, config map, or service account to a virtual machine You add a secret, config map, or service account to a virtual machine by using the OpenShift Container Platform web console. These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk. If the virtual machine is running, changes do not take effect until you restart the virtual machine. The newly added resources are marked as pending changes at the top of the page. Prerequisites The secret, config map, or service account that you want to add must exist in the same namespace as the target virtual machine. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click Configuration Environment . Click Add Config Map, Secret or Service Account . Click Select a resource and select a resource from the list. A six character serial number is automatically generated for the selected resource. Optional: Click Reload to revert the environment to its last saved state. Click Save . Verification On the VirtualMachine details page, click Configuration Disks and verify that the resource is displayed in the list of disks. Restart the virtual machine by clicking Actions Restart . You can now mount the secret, config map, or service account as you would mount any other disk. Additional resources for config maps, secrets, and service accounts Understanding config maps Providing sensitive data to pods Understanding and creating service accounts 7.6. Editing boot order You can update the values for a boot order list by using the web console or the CLI. With Boot Order in the Virtual Machine Overview page, you can: Select a disk or network interface controller (NIC) and add it to the boot order list. Edit the order of the disks or NICs in the boot order list. Remove a disk or NIC from the boot order list, and return it back to the inventory of bootable sources. 7.6.1. Adding items to a boot order list in the web console Add items to a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . If a YAML configuration does not exist, or if this is the first time that you are creating a boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Click Add Source and select a bootable disk or network interface controller (NIC) for the virtual machine. Add any additional disks or NICs to the boot order list. Click Save . 
Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 7.6.2. Editing a boot order list in the web console Edit the boot order list in the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Choose the appropriate method to move the item in the boot order list: If you do not use a screen reader, hover over the arrow icon next to the item that you want to move, drag the item up or down, and drop it in a location of your choice. If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the Tab key to drop the item in a location of your choice. Click Save . Note If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 7.6.3. Editing a boot order list in the YAML configuration file Edit the boot order list in a YAML configuration file by using the CLI. Procedure Open the YAML configuration file for the virtual machine by running the following command: USD oc edit vm <vm_name> -n <namespace> Edit the YAML file and modify the values for the boot order associated with a disk or network interface controller (NIC). For example: disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - bootOrder: 2 2 macAddress: '02:96:c4:00:00:00' masquerade: {} name: default 1 The boot order value specified for the disk. 2 The boot order value specified for the network interface controller. Save the YAML file. 7.6.4. Removing items from a boot order list in the web console Remove items from a boot order list by using the web console. Procedure Click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. Click the Details tab. Click the pencil icon that is located on the right side of Boot Order . Click the Remove icon next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: No resource selected. VM will attempt to boot from disks by order of appearance in YAML file. Note If the virtual machine is running, changes to Boot Order will not take effect until you restart the virtual machine. You can view pending changes by clicking View Pending Changes on the right side of the Boot Order field. The Pending Changes banner at the top of the page displays a list of all changes that will be applied when the virtual machine restarts. 7.7. Deleting virtual machines You can delete a virtual machine from the web console or by using the oc command line interface. 7.7.1. Deleting a virtual machine using the web console Deleting a virtual machine permanently removes it from the cluster.
Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Click the Options menu beside a virtual machine and select Delete . Alternatively, click the virtual machine name to open the VirtualMachine details page and click Actions Delete . Optional: Select With grace period or clear Delete disks . Click Delete to permanently delete the virtual machine. 7.7.2. Deleting a virtual machine by using the CLI You can delete a virtual machine by using the oc command line interface (CLI). The oc client enables you to perform actions on multiple virtual machines. Prerequisites Identify the name of the virtual machine that you want to delete. Procedure Delete the virtual machine by running the following command: USD oc delete vm <vm_name> Note This command only deletes a VM in the current project. Specify the -n <project_name> option if the VM you want to delete is in a different project or namespace. 7.8. Exporting virtual machines You can export a virtual machine (VM) and its associated disks in order to import a VM into another cluster or to analyze the volume for forensic purposes. You create a VirtualMachineExport custom resource (CR) by using the command line interface. Alternatively, you can use the virtctl vmexport command to create a VirtualMachineExport CR and to download exported volumes. Note You can migrate virtual machines between OpenShift Virtualization clusters by using the Migration Toolkit for Virtualization . 7.8.1. Creating a VirtualMachineExport custom resource You can create a VirtualMachineExport custom resource (CR) to export the following objects: Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM. VM snapshot: Exports PVCs contained in a VirtualMachineSnapshot CR. PVC: Exports a PVC. If the PVC is used by another pod, such as the virt-launcher pod, the export remains in a Pending state until the PVC is no longer in use. The VirtualMachineExport CR creates internal and external links for the exported volumes. Internal links are valid within the cluster. External links can be accessed by using an Ingress or Route . The export server supports the following file formats: raw : Raw disk image file. gzip : Compressed disk image file. dir : PVC directory and files. tar.gz : Compressed PVC file. Prerequisites The VM must be shut down for a VM export. Procedure Create a VirtualMachineExport manifest to export a volume from a VirtualMachine , VirtualMachineSnapshot , or PersistentVolumeClaim CR according to the following example and save it as example-export.yaml : VirtualMachineExport example apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3 1 Specify the appropriate API group: "kubevirt.io" for VirtualMachine . "snapshot.kubevirt.io" for VirtualMachineSnapshot . "" for PersistentVolumeClaim . 2 Specify VirtualMachine , VirtualMachineSnapshot , or PersistentVolumeClaim . 3 Optional. The default duration is 2 hours. 
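For comparison, a manifest that exports the volumes contained in a VM snapshot follows the same pattern; this is a minimal sketch, and the snapshot name example-snapshot is an assumption rather than a name used elsewhere in this procedure:

apiVersion: export.kubevirt.io/v1alpha1
kind: VirtualMachineExport
metadata:
  name: example-snapshot-export
spec:
  source:
    apiGroup: "snapshot.kubevirt.io"   # API group for VirtualMachineSnapshot sources
    kind: VirtualMachineSnapshot
    name: example-snapshot             # assumed snapshot name
  ttlDuration: 1h                      # optional; defaults to 2 hours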
Create the VirtualMachineExport CR: USD oc create -f example-export.yaml Get the VirtualMachineExport CR: USD oc get vmexport example-export -o yaml The internal and external links for the exported volumes are displayed in the status stanza: Output example apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: "" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: "2022-06-21T14:10:09Z" reason: podReady status: "True" type: Ready - lastProbeTime: null lastTransitionTime: "2022-06-21T14:09:02Z" reason: pvcBound status: "True" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export 1 External links are accessible from outside the cluster by using an Ingress or Route . 2 Internal links are only valid inside the cluster. 7.8.2. Accessing exported virtual machine manifests After you export a virtual machine (VM) or snapshot, you can get the VirtualMachine manifest and related information from the export server. Prerequisites You exported a virtual machine or VM snapshot by creating a VirtualMachineExport custom resource (CR). Note VirtualMachineExport objects that have the spec.source.kind: PersistentVolumeClaim parameter do not generate virtual machine manifests. Procedure To access the manifests, you must first copy the certificates from the source cluster to the target cluster. Log in to the source cluster. Save the certificates to the cacert.crt file by running the following command: USD oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1 1 Replace <export_name> with the metadata.name value from the VirtualMachineExport object. Copy the cacert.crt file to the target cluster. Decode the token in the source cluster and save it to the token_decode file by running the following command: USD oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1 1 Replace <export_name> with the metadata.name value from the VirtualMachineExport object. Copy the token_decode file to the target cluster. Get the VirtualMachineExport custom resource by running the following command: USD oc get vmexport <export_name> -o yaml Review the status.links stanza, which is divided into external and internal sections. Note the manifests.url fields within each section: Example output apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: "kubevirt.io" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: #... links: external: #... 
manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: #... manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export 1 Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the public certificate for the external URL's ingress or route. 2 Contains a secret containing a header that is compatible with Containerized Data Importer (CDI). The header contains a text version of the export token. 3 Contains the VirtualMachine manifest, DataVolume manifest, if present, and a ConfigMap manifest that contains the certificate for the internal URL's export server. Log in to the target cluster. Get the Secret manifest by running the following command: USD curl --cacert cacert.crt <secret_manifest_url> -H \ 1 "x-kubevirt-export-token:token_decode" -H \ 2 "Accept:application/yaml" 1 Replace <secret_manifest_url> with an auth-header-secret URL from the VirtualMachineExport YAML output. 2 Reference the token_decode file that you created earlier. For example: USD curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml" Get the manifests of type: all , such as the ConfigMap and VirtualMachine manifests, by running the following command: USD curl --cacert cacert.crt <all_manifest_url> -H \ 1 "x-kubevirt-export-token:token_decode" -H \ 2 "Accept:application/yaml" 1 Replace <all_manifest_url> with a URL from the VirtualMachineExport YAML output. 2 Reference the token_decode file that you created earlier. For example: USD curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H "x-kubevirt-export-token:token_decode" -H "Accept:application/yaml" steps You can now create the ConfigMap and VirtualMachine objects on the target cluster by using the exported manifests. 7.9. Managing virtual machine instances If you have standalone virtual machine instances (VMIs) that were created independently outside of the OpenShift Virtualization environment, you can manage them by using the web console or by using oc or virtctl commands from the command-line interface (CLI). The virtctl command provides more virtualization options than the oc command. For example, you can use virtctl to pause a VM or expose a port. 7.9.1. About virtual machine instances A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the oc command-line interface (CLI). A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the OpenShift Virtualization environment. 
You can continue to manage those standalone VMIs by using the CLI. You can also use the web console for specific tasks associated with standalone VMIs: List standalone VMIs and their details. Edit labels and annotations for a standalone VMI. Delete a standalone VMI. When you delete a VM, the associated VMI is automatically deleted. You delete a standalone VMI directly because it is not owned by VMs or other objects. Note Before you uninstall OpenShift Virtualization, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs. 7.9.2. Listing all virtual machine instances using the CLI You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the oc command-line interface (CLI). Procedure List all VMIs by running the following command: USD oc get vmis -A 7.9.3. Listing standalone virtual machine instances using the web console Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs). Note VMIs that are owned by VMs or other objects are not displayed in the web console. The web console displays only standalone VMIs. If you want to list all VMIs in your cluster, you must use the CLI. Procedure Click Virtualization VirtualMachines from the side menu. You can identify a standalone VMI by a dark colored badge next to its name. 7.9.4. Editing a standalone virtual machine instance using the web console You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a standalone VMI to open the VirtualMachineInstance details page. On the Details tab, click the pencil icon beside Annotations or Labels . Make the relevant changes and click Save . 7.9.5. Deleting a standalone virtual machine instance using the CLI You can delete a standalone virtual machine instance (VMI) by using the oc command-line interface (CLI). Prerequisites Identify the name of the VMI that you want to delete. Procedure Delete the VMI by running the following command: USD oc delete vmi <vmi_name> 7.9.6. Deleting a standalone virtual machine instance using the web console Delete a standalone virtual machine instance (VMI) from the web console. Procedure In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu. Click Actions Delete VirtualMachineInstance . In the confirmation pop-up window, click Delete to permanently delete the standalone VMI. 7.10. Controlling virtual machine states You can stop, start, restart, and unpause virtual machines from the web console. You can use virtctl to manage virtual machine states and perform other actions from the CLI. For example, you can use virtctl to force stop a VM or expose a port. 7.10.1. Starting a virtual machine You can start a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to start. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Start VirtualMachine .
To view comprehensive information about the selected virtual machine before you start it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Start . Note When you start a virtual machine that is provisioned from a URL source for the first time, the virtual machine has a status of Importing while OpenShift Virtualization imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes. 7.10.2. Stopping a virtual machine You can stop a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to stop. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Stop VirtualMachine . To view comprehensive information about the selected virtual machine before you stop it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Stop . 7.10.3. Restarting a virtual machine You can restart a running virtual machine from the web console. Important To avoid errors, do not restart a virtual machine while it has a status of Importing . Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to restart. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Restart . To view comprehensive information about the selected virtual machine before you restart it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Restart . 7.10.4. Pausing a virtual machine You can pause a virtual machine from the web console. Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to pause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Pause VirtualMachine . To view comprehensive information about the selected virtual machine before you pause it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Pause . 7.10.5. Unpausing a virtual machine You can unpause a paused virtual machine from the web console. Prerequisites At least one of your virtual machines must have a status of Paused . Procedure Click Virtualization VirtualMachines from the side menu. Find the row that contains the virtual machine that you want to unpause. Navigate to the appropriate menu for your use case: To stay on this page, where you can perform actions on multiple virtual machines: Click the Options menu located at the far right end of the row and click Unpause VirtualMachine . To view comprehensive information about the selected virtual machine before you unpause it: Access the VirtualMachine details page by clicking the name of the virtual machine. Click Actions Unpause . 7.11. Using virtual Trusted Platform Module devices Add a virtual Trusted Platform Module (vTPM) device to a new or existing virtual machine by editing the VirtualMachine (VM) or VirtualMachineInstance (VMI) manifest.
Important Cloning or creating snapshots of virtual machines (VMs) with a vTPM device is not supported. Support for creating snapshots of VMs with vTPM devices is added in OpenShift Virtualization 4.18. 7.11.1. About vTPM devices A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip. You can use a vTPM device with any operating system, but Windows 11 requires the presence of a TPM chip to install or boot. A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip. If you do not enable vTPM, then the VM does not recognize a TPM device, even if the node has one. A vTPM device also protects virtual machines by storing secrets without physical hardware. OpenShift Virtualization supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. You must specify the storage class to be used by the PVC by setting the vmStateStorageClass attribute in the HyperConverged custom resource (CR): kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: <storage_class_name> # ... Note The storage class must be of type Filesystem and support the ReadWriteMany (RWX) access mode. 7.11.2. Adding a vTPM device to a virtual machine Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM. Prerequisites You have installed the OpenShift CLI ( oc ). You have configured a Persistent Volume Claim (PVC) to use a storage class of type Filesystem that supports the ReadWriteMany (RWX) access mode. This is necessary for the vTPM device data to persist across VM reboots. Procedure Run the following command to update the VM configuration: USD oc edit vm <vm_name> -n <namespace> Edit the VM specification to add the vTPM device. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2 # ... 1 Adds the vTPM device to the VM. 2 Specifies that the vTPM device state persists after the VM is shut down. The default value is false . To apply your changes, save and exit the editor. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. 7.12. Managing virtual machines with OpenShift Pipelines Red Hat OpenShift Pipelines is a Kubernetes-native CI/CD framework that allows developers to design and run each step of the CI/CD pipeline in its own container. The Scheduling, Scale, and Performance (SSP) Operator integrates OpenShift Virtualization with OpenShift Pipelines. The SSP Operator includes tasks and example pipelines that allow you to: Create and manage virtual machines (VMs), persistent volume claims (PVCs), and data volumes Run commands in VMs Manipulate disk images with libguestfs tools Important Managing virtual machines with Red Hat OpenShift Pipelines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.12.1. 
Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have installed OpenShift Pipelines . 7.12.2. Deploying the Scheduling, Scale, and Performance (SSP) resources The SSP Operator example Tekton Tasks and Pipelines are not deployed by default when you install OpenShift Virtualization. To deploy the SSP Operator's Tekton resources, enable the deployTektonTaskResources feature gate in the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the spec.featureGates.deployTektonTaskResources field to true . apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: kubevirt-hyperconverged spec: tektonPipelinesNamespace: <user_namespace> 1 featureGates: deployTektonTaskResources: true 2 # ... 1 The namespace where the pipelines are to be run. 2 The feature gate to be enabled to deploy Tekton resources by SSP operator. Note The tasks and example pipelines remain available even if you disable the feature gate later. Save your changes and exit the editor. 7.12.3. Virtual machine tasks supported by the SSP Operator The following table shows the tasks that are included as part of the SSP Operator. Table 7.4. Virtual machine tasks supported by the SSP Operator Task Description create-vm-from-manifest Create a virtual machine from a provided manifest or with virtctl . create-vm-from-template Create a virtual machine from a template. copy-template Copy a virtual machine template. modify-vm-template Modify a virtual machine template. modify-data-object Create or delete data volumes or data sources. cleanup-vm Run a script or a command in a virtual machine and stop or delete the virtual machine afterward. disk-virt-customize Use the virt-customize tool to run a customization script on a target PVC. disk-virt-sysprep Use the virt-sysprep tool to run a sysprep script on a target PVC. wait-for-vmi-status Wait for a specific status of a virtual machine instance and fail or succeed based on the status. Note Virtual machine creation in pipelines now utilizes ClusterInstanceType and ClusterPreference instead of template-based tasks, which have been deprecated. The create-vm-from-template , copy-template , and modify-vm-template commands remain available but are not used in default pipeline tasks. 7.12.4. Example pipelines The SSP Operator includes the following example Pipeline manifests. You can run the example pipelines by using the web console or CLI. You might have to run more than one installer pipeline if you need multiple versions of Windows. If you run more than one installer pipeline, each one requires unique parameters, such as the autounattend config map and base image name. For example, if you need Windows 10 and Windows 11 or Windows Server 2022 images, you have to run both the Windows EFI installer pipeline and the Windows BIOS installer pipeline. However, if you need Windows 11 and Windows Server 2022 images, you have to run only the Windows EFI installer pipeline. Windows EFI installer pipeline This pipeline installs Windows 11 or Windows Server 2022 into a new data volume from a Windows installation image (ISO file). A custom answer file is used to run the installation process.
Windows BIOS installer pipeline This pipeline installs Windows 10 into a new data volume from a Windows installation image, also called an ISO file. A custom answer file is used to run the installation process. Windows customize pipeline This pipeline clones the data volume of a basic Windows 10, 11, or Windows Server 2022 installation, customizes it by installing Microsoft SQL Server Express or Microsoft Visual Studio Code, and then creates a new image and template. Note The example pipelines use a config map file with sysprep predefined by OpenShift Container Platform and suitable for Microsoft ISO files. For ISO files pertaining to different Windows editions, it may be necessary to create a new config map file with a system-specific sysprep definition. 7.12.4.1. Running the example pipelines using the web console You can run the example pipelines from the Pipelines menu in the web console. Procedure Click Pipelines Pipelines in the side menu. Select a pipeline to open the Pipeline details page. From the Actions list, select Start . The Start Pipeline dialog is displayed. Keep the default values for the parameters and then click Start to run the pipeline. The Details tab tracks the progress of each task and displays the pipeline status. 7.12.4.2. Running the example pipelines using the CLI Use a PipelineRun resource to run the example pipelines. A PipelineRun object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a TaskRun object for each task in the pipeline. Procedure To run the Windows 10 installer pipeline, create the following PipelineRun manifest: apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-installer-run- labels: pipelinerun: windows10-installer-run spec: params: - name: winImageDownloadURL value: <link_to_windows_10_iso> 1 pipelineRef: name: windows10-installer taskRunSpecs: - pipelineTaskName: copy-template taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task status: {} 1 Specify the URL for the Windows 10 64-bit ISO file. The product language must be English (United States). 
Apply the PipelineRun manifest: USD oc apply -f windows10-installer-run.yaml To run the Windows 10 customize pipeline, create the following PipelineRun manifest: apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-customize-run- labels: pipelinerun: windows10-customize-run spec: params: - name: allowReplaceGoldenTemplate value: true - name: allowReplaceCustomizationTemplate value: true pipelineRef: name: windows10-customize taskRunSpecs: - pipelineTaskName: copy-template-customize taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-customize taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task - pipelineTaskName: copy-template-golden taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-golden taskServiceAccountName: modify-vm-template-task status: {} Apply the PipelineRun manifest: USD oc apply -f windows10-customize-run.yaml 7.12.5. Additional resources Creating CI/CD solutions for applications using Red Hat OpenShift Pipelines Creating a Windows VM 7.13. Advanced virtual machine management 7.13.1. Working with resource quotas for virtual machines Create and manage resource quotas for virtual machines. 7.13.1.1. Setting resource quota limits for virtual machines Resource quotas that only use requests automatically work with virtual machines (VMs). If your resource quota uses limits, you must manually set resource limits on VMs. Resource limits must be at least 100 MiB larger than resource requests. Procedure Set limits for a VM by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: # ... resources: requests: memory: 128Mi limits: memory: 256Mi 1 1 This configuration is supported because the limits.memory value is at least 100Mi larger than the requests.memory value. Save the VirtualMachine manifest. 7.13.1.2. Additional resources Resource quotas per project Resource quotas across multiple projects 7.13.2. Specifying nodes for virtual machines You can place virtual machines (VMs) on specific nodes by using node placement rules. 7.13.2.1. About node placement for virtual machines To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if: You have several VMs. To ensure fault tolerance, you want them to run on different nodes. You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node. Your VMs require specific hardware features that are not present on all available nodes. You have a pod that adds capabilities to a node, and you want to place a VM on that node so that it can use those capabilities. Note Virtual machine placement relies on any existing node placement rules for workloads. If workloads are excluded from specific nodes on the component level, virtual machines cannot be placed on those nodes. You can use the following rule types in the spec field of a VirtualMachine manifest: nodeSelector Allows virtual machines to be scheduled on nodes that are labeled with the key-value pair or pairs that you specify in this field. 
The node must have labels that exactly match all listed pairs. affinity Enables you to use more expressive syntax to set rules that match nodes with virtual machines. For example, you can specify that a rule is a preference, rather than a hard requirement, so that virtual machines are still scheduled if the rule is not satisfied. Pod affinity, pod anti-affinity, and node affinity are supported for virtual machine placement. Pod affinity works for virtual machines because the VirtualMachine workload type is based on the Pod object. tolerations Allows virtual machines to be scheduled on nodes that have matching taints. If a taint is applied to a node, that node only accepts virtual machines that tolerate the taint. Note Affinity rules only apply during scheduling. OpenShift Container Platform does not reschedule running workloads if the constraints are no longer met. 7.13.2.2. Node placement examples The following example YAML file snippets use nodePlacement , affinity , and tolerations fields to customize node placement for virtual machines. 7.13.2.2.1. Example: VM node placement with nodeSelector In this example, the virtual machine requires a node that has metadata containing both example-key-1 = example-value-1 and example-key-2 = example-value-2 labels. Warning If there are no nodes that fit this description, the virtual machine is not scheduled. Example VM manifest metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2 # ... 7.13.2.2.2. Example: VM node placement with pod affinity and pod anti-affinity In this example, the VM must be scheduled on a node that has a running pod with the label example-key-1 = example-value-1 . If there is no such pod running on any node, the VM is not scheduled. If possible, the VM is not scheduled on a node that has any pod with the label example-key-2 = example-value-2 . However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint. Example VM manifest metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname # ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 7.13.2.2.3. Example: VM node placement with node affinity In this example, the VM must be scheduled on a node that has the label example.io/example-key = example-value-1 or the label example.io/example-key = example-value-2 . The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled. If possible, the scheduler avoids nodes that have the label example-node-label-key = example-node-label-value . However, if all candidate nodes have this label, the scheduler ignores this constraint. 
Example VM manifest metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: - key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value # ... 1 If you use the requiredDuringSchedulingIgnoredDuringExecution rule type, the VM is not scheduled if the constraint is not met. 2 If you use the preferredDuringSchedulingIgnoredDuringExecution rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. 7.13.2.2.4. Example: VM node placement with tolerations In this example, nodes that are reserved for virtual machines are already labeled with the key=virtualization:NoSchedule taint. Because this virtual machine has matching tolerations , it can schedule onto the tainted nodes. Note A virtual machine that tolerates a taint is not required to schedule onto a node with that taint. Example VM manifest metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: "key" operator: "Equal" value: "virtualization" effect: "NoSchedule" # ... 7.13.2.3. Additional resources Specifying nodes for virtualization components Placing pods on specific nodes using node selectors Controlling pod placement on nodes using node affinity rules Controlling pod placement using node taints 7.13.3. Activating kernel samepage merging (KSM) OpenShift Virtualization can activate kernel samepage merging (KSM) when nodes are overloaded. KSM deduplicates identical data found in the memory pages of virtual machines (VMs). If you have very similar VMs, KSM can make it possible to schedule more VMs on a single node. Important You must only use KSM with trusted workloads. 7.13.3.1. Prerequisites Ensure that an administrator has configured KSM support on any nodes where you want OpenShift Virtualization to activate KSM. 7.13.3.2. About using OpenShift Virtualization to activate KSM You can configure OpenShift Virtualization to activate kernel samepage merging (KSM) when nodes experience memory overload. 7.13.3.2.1. Configuration methods You can enable or disable the KSM activation feature for all nodes by using the OpenShift Container Platform web console or by editing the HyperConverged custom resource (CR). The HyperConverged CR supports more granular configuration. CR configuration You can configure the KSM activation feature by editing the spec.configuration.ksmConfiguration stanza of the HyperConverged CR. You enable the feature and configure settings by editing the ksmConfiguration stanza. You disable the feature by deleting the ksmConfiguration stanza. You can allow OpenShift Virtualization to enable KSM on only a subset of nodes by adding node selection syntax to the ksmConfiguration.nodeLabelSelector field. Note Even if the KSM activation feature is disabled in OpenShift Virtualization, an administrator can still enable KSM on nodes that support it. 7.13.3.2.2. KSM node labels OpenShift Virtualization identifies nodes that are configured to support KSM and applies the following node labels: kubevirt.io/ksm-handler-managed: "false" This label is set to "true" when OpenShift Virtualization activates KSM on a node that is experiencing memory overload. 
This label is not set to "true" if an administrator activates KSM. kubevirt.io/ksm-enabled: "false" This label is set to "true" when KSM is activated on a node, even if OpenShift Virtualization did not activate KSM. These labels are not applied to nodes that do not support KSM. 7.13.3.3. Configuring KSM activation by using the web console You can allow OpenShift Virtualization to activate kernel samepage merging (KSM) on all nodes in your cluster by using the OpenShift Container Platform web console. Procedure From the side menu, click Virtualization Overview . Select the Settings tab. Select the Cluster tab. Expand Resource management . Enable or disable the feature for all nodes: Set Kernel Samepage Merging (KSM) to on. Set Kernel Samepage Merging (KSM) to off. 7.13.3.4. Configuring KSM activation by using the CLI You can enable or disable OpenShift Virtualization's kernel samepage merging (KSM) activation feature by editing the HyperConverged custom resource (CR). Use this method if you want OpenShift Virtualization to activate KSM on only a subset of nodes. Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the ksmConfiguration stanza: To enable the KSM activation feature for all nodes, set the nodeLabelSelector value to {} . For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: {} # ... To enable the KSM activation feature on a subset of nodes, edit the nodeLabelSelector field. Add syntax that matches the nodes where you want OpenShift Virtualization to enable KSM. For example, the following configuration allows OpenShift Virtualization to enable KSM on nodes where both <first_example_key> and <second_example_key> are set to "true" : apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: matchLabels: <first_example_key>: "true" <second_example_key>: "true" # ... To disable the KSM activation feature, delete the ksmConfiguration stanza. For example: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: # ... Save the file. 7.13.3.5. Additional resources Specifying nodes for virtual machines Placing pods on specific nodes using node selectors Managing kernel samepage merging in the Red Hat Enterprise Linux (RHEL) documentation 7.13.4. Configuring certificate rotation Configure certificate rotation parameters to replace existing certificates. 7.13.4.1. Configuring certificate rotation You can do this during OpenShift Virtualization installation in the web console or after installation in the HyperConverged custom resource (CR). Procedure Open the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Edit the spec.certConfig fields as shown in the following example. To avoid overloading the system, ensure that all values are greater than or equal to 10 minutes. Express all values as strings that comply with the golang ParseDuration format . 
apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3 1 The value of ca.renewBefore must be less than or equal to the value of ca.duration . 2 The value of server.duration must be less than or equal to the value of ca.duration . 3 The value of server.renewBefore must be less than or equal to the value of server.duration . Apply the YAML file to your cluster. 7.13.4.2. Troubleshooting certificate rotation parameters Deleting one or more certConfig values causes them to revert to the default values, unless the default values conflict with one of the following conditions: The value of ca.renewBefore must be less than or equal to the value of ca.duration . The value of server.duration must be less than or equal to the value of ca.duration . The value of server.renewBefore must be less than or equal to the value of server.duration . If the default values conflict with these conditions, you will receive an error. If you remove the server.duration value in the following example, the default value of 24h0m0s is greater than the value of ca.duration , conflicting with the specified conditions. Example certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s This results in the following error message: error: hyperconvergeds.hco.kubevirt.io "kubevirt-hyperconverged" could not be patched: admission webhook "validate-hco.kubevirt.io" denied the request: spec.certConfig: ca.duration is smaller than server.duration The error message only mentions the first conflict. Review all certConfig values before you proceed. 7.13.5. Configuring the default CPU model Use the defaultCPUModel setting in the HyperConverged custom resource (CR) to define a cluster-wide default CPU model. The virtual machine (VM) CPU model depends on the availability of CPU models within the VM and the cluster. If the VM does not have a defined CPU model: The defaultCPUModel is automatically set using the CPU model defined at the cluster-wide level. If both the VM and the cluster have a defined CPU model: The VM's CPU model takes precedence. If neither the VM nor the cluster have a defined CPU model: The host-model is automatically set using the CPU model defined at the host level. 7.13.5.1. Configuring the default CPU model Configure the defaultCPUModel by updating the HyperConverged custom resource (CR). You can change the defaultCPUModel while OpenShift Virtualization is running. Note The defaultCPUModel is case sensitive. Prerequisites Install the OpenShift CLI (oc). Procedure Open the HyperConverged CR by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the defaultCPUModel field to the CR and set the value to the name of a CPU model that exists in the cluster: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: "EPYC" Apply the YAML file to your cluster. 7.13.6. Using UEFI mode for virtual machines You can boot a virtual machine (VM) in Unified Extensible Firmware Interface (UEFI) mode. 7.13.6.1. About UEFI mode for virtual machines Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. 
UEFI supports more modern features and customization options than BIOS, enabling faster boot times. It stores all the information about initialization and startup in a file with a .efi extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer. 7.13.6.2. Booting virtual machines in UEFI mode You can configure a virtual machine to boot in UEFI mode by editing the VirtualMachine manifest. Prerequisites Install the OpenShift CLI ( oc ). Procedure Edit or create a VirtualMachine manifest file. Use the spec.firmware.bootloader stanza to configure UEFI mode: Booting in UEFI mode with secure boot active apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2 # ... 1 OpenShift Virtualization requires System Management Mode ( SMM ) to be enabled for Secure Boot in UEFI mode to occur. 2 OpenShift Virtualization supports a VM with or without Secure Boot when using UEFI mode. If Secure Boot is enabled, then UEFI mode is required. However, UEFI mode can be enabled without using Secure Boot. Apply the manifest to your cluster by running the following command: USD oc create -f <file_name>.yaml 7.13.7. Configuring PXE booting for virtual machines PXE booting, or network booting, is available in OpenShift Virtualization. Network booting allows a computer to boot and load an operating system or other program without requiring a locally attached storage device. For example, you can use it to choose your desired OS image from a PXE server when deploying a new host. 7.13.7.1. Prerequisites A Linux bridge must be connected . The PXE server must be connected to the same VLAN as the bridge. 7.13.7.2. PXE booting with a specified MAC address As an administrator, you can boot a client over the network by first creating a NetworkAttachmentDefinition object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server. Prerequisites A Linux bridge must be connected. The PXE server must be connected to the same VLAN as the bridge. Procedure Configure a PXE network on the cluster: Create the network attachment definition file for PXE network pxe-net-conf : apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { "cniVersion": "0.3.1", "name": "pxe-net-conf", 2 "type": "bridge", 3 "bridge": "bridge-interface", 4 "macspoofchk": false, 5 "vlan": 100, 6 "preserveDefaultVlan": false 7 } 1 The name for the NetworkAttachmentDefinition object. 2 The name for the configuration. It is recommended to match the configuration name to the name value of the network attachment definition. 3 The actual name of the Container Network Interface (CNI) plugin that provides the network for this network attachment definition. This example uses a Linux bridge CNI plugin. You can also use an OVN-Kubernetes localnet or an SR-IOV CNI plugin. 4 The name of the Linux bridge configured on the node. 5 Optional: A flag to enable the MAC spoof check.
When set to true , you cannot change the MAC address of the pod or guest interface. This attribute allows only a single MAC address to exit the pod, which provides security against a MAC spoofing attack. 6 Optional: The VLAN tag. No additional VLAN configuration is required on the node network configuration policy. 7 Optional: Indicates whether the VM connects to the bridge through the default VLAN. The default value is true . Create the network attachment definition by using the file you created in the step: USD oc create -f pxe-net-conf.yaml Edit the virtual machine instance configuration file to include the details of the interface and network. Specify the network and MAC address, if required by the PXE server. If the MAC address is not specified, a value is assigned automatically. Ensure that bootOrder is set to 1 so that the interface boots first. In this example, the interface is connected to a network called <pxe-net> : interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1 Note Boot order is global for interfaces and disks. Assign a boot device number to the disk to ensure proper booting after operating system provisioning. Set the disk bootOrder value to 2 : devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2 Specify that the network is connected to the previously created network attachment definition. In this scenario, <pxe-net> is connected to the network attachment definition called <pxe-net-conf> : networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf Create the virtual machine instance: USD oc create -f vmi-pxe-boot.yaml Example output virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created Wait for the virtual machine instance to run: USD oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running View the virtual machine instance using VNC: USD virtctl vnc vmi-pxe-boot Watch the boot screen to verify that the PXE boot is successful. Log in to the virtual machine instance: USD virtctl console vmi-pxe-boot Verification Verify the interfaces and MAC address on the virtual machine and that the interface connected to the bridge has the specified MAC address. In this case, we used eth1 for the PXE boot, without an IP address. The other interface, eth0 , got an IP address from OpenShift Container Platform. USD ip addr Example output ... 3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff 7.13.7.3. OpenShift Virtualization networking glossary The following terms are used throughout OpenShift Virtualization documentation: Container Network Interface (CNI) A Cloud Native Computing Foundation project, focused on container network connectivity. OpenShift Virtualization uses CNI plugins to build upon the basic Kubernetes networking functionality. Multus A "meta" CNI plugin that allows multiple CNIs to exist so that a pod or virtual machine can use the interfaces it needs. Custom resource definition (CRD) A Kubernetes API resource that allows you to define custom resources, or an object defined by using the CRD API resource. Network attachment definition (NAD) A CRD introduced by the Multus project that allows you to attach pods, virtual machines, and virtual machine instances to one or more networks. Node network configuration policy (NNCP) A CRD introduced by the nmstate project, describing the requested network configuration on nodes. 
You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. 7.13.8. Using huge pages with virtual machines You can use huge pages as backing memory for virtual machines in your cluster. 7.13.8.1. Prerequisites Nodes must have pre-allocated huge pages configured . 7.13.8.2. What huge pages do Memory is managed in blocks known as pages. On most systems, a page is 4Ki. 1Mi of memory is equal to 256 pages; 1Gi of memory is 256,000 pages, and so on. CPUs have a built-in memory management unit that manages a list of these pages in hardware. The Translation Lookaside Buffer (TLB) is a small hardware cache of virtual-to-physical page mappings. If the virtual address passed in a hardware instruction can be found in the TLB, the mapping can be determined quickly. If not, a TLB miss occurs, and the system falls back to slower, software-based address translation, resulting in performance issues. Since the size of the TLB is fixed, the only way to reduce the chance of a TLB miss is to increase the page size. A huge page is a memory page that is larger than 4Ki. On x86_64 architectures, there are two common huge page sizes: 2Mi and 1Gi. Sizes vary on other architectures. To use huge pages, code must be written so that applications are aware of them. Transparent Huge Pages (THP) attempt to automate the management of huge pages without application knowledge, but they have limitations. In particular, they are limited to 2Mi page sizes. THP can lead to performance degradation on nodes with high memory utilization or fragmentation due to defragmenting efforts of THP, which can lock memory pages. For this reason, some applications may be designed to (or recommend) usage of pre-allocated huge pages instead of THP. In OpenShift Virtualization, virtual machines can be configured to consume pre-allocated huge pages. 7.13.8.3. Configuring huge pages for virtual machines You can configure virtual machines to use pre-allocated huge pages by including the memory.hugepages.pageSize and resources.requests.memory parameters in your virtual machine configuration. The memory request must be divisible by the page size. For example, you cannot request 500Mi memory with a page size of 1Gi . Note The memory layouts of the host and the guest OS are unrelated. Huge pages requested in the virtual machine manifest apply to QEMU. Huge pages inside the guest can only be configured based on the amount of available memory of the virtual machine instance. If you edit a running virtual machine, the virtual machine must be rebooted for the changes to take effect. Prerequisites Nodes must have pre-allocated huge pages configured. For instructions, see Configuring huge pages at boot time . Procedure In your virtual machine configuration, add the resources.requests.memory and memory.hugepages.pageSize parameters to the spec.domain . The following configuration snippet is for a virtual machine that requests a total of 4Gi memory with a page size of 1Gi : kind: VirtualMachine # ... spec: domain: resources: requests: memory: "4Gi" 1 memory: hugepages: pageSize: "1Gi" 2 # ... 1 The total amount of memory requested for the virtual machine. This value must be divisible by the page size. 2 The size of each huge page. Valid values for x86_64 architecture are 1Gi and 2Mi . The page size must be smaller than the requested memory. Apply the virtual machine configuration: USD oc apply -f <virtual_machine>.yaml 7.13.9. 
Enabling dedicated resources for virtual machines To improve performance, you can dedicate node resources, such as CPU, to a virtual machine. 7.13.9.1. About dedicated resources When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other processes. By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions. 7.13.9.2. Prerequisites The CPU Manager must be configured on the node. Verify that the node has the cpumanager = true label before scheduling virtual machine workloads. The virtual machine must be powered off. 7.13.9.3. Enabling dedicated resources for a virtual machine You enable dedicated resources for a virtual machine in the Details tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources. Procedure In the OpenShift Container Platform console, click Virtualization VirtualMachines from the side menu. Select a virtual machine to open the VirtualMachine details page. On the Configuration Scheduling tab, click the edit icon beside Dedicated Resources . Select Schedule this workload with dedicated resources (guaranteed policy) . Click Save . 7.13.10. Scheduling virtual machines You can schedule a virtual machine (VM) on a node by ensuring that the VM's CPU model and policy attribute are matched for compatibility with the CPU models and policy attributes supported by the node. 7.13.10.1. Policy attributes You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node. Policy attribute Description force The VM is forced to be scheduled on a node. This is true even if the host CPU does not support the VM's CPU. require Default policy that applies to a VM if the VM is not configured with a specific CPU model and feature specification. If a node is not configured to support CPU node discovery with this default policy attribute or any one of the other policy attributes, VMs are not scheduled on that node. Either the host CPU must support the VM's CPU or the hypervisor must be able to emulate the supported CPU model. optional The VM is added to a node if that VM is supported by the host's physical machine CPU. disable The VM cannot be scheduled with CPU node discovery. forbid The VM is not scheduled even if the feature is supported by the host CPU and CPU node discovery is enabled. 7.13.10.2. Setting a policy attribute and CPU feature You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor. Procedure Edit the domain spec of your VM configuration file. The following example sets the CPU feature and the require policy for a virtual machine (VM): apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2 1 Name of the CPU feature for the VM. 2 Policy attribute for the VM. 7.13.10.3. Scheduling virtual machines with the supported CPU model You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported. 
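Before you set a specific model, it can help to see which CPU models a node actually advertises. OpenShift Virtualization labels nodes with their supported CPU models, so a query similar to the following sketch lists them; the label prefix and jq filter shown here are illustrative and might need adjusting for your cluster:

$ oc get node <node_name> -o json \
  | jq '.metadata.labels | with_entries(select(.key | startswith("cpu-model.node.kubevirt.io/")))'

Any model that appears in the output can be used as the cpu.model value in the following procedure.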
Procedure Edit the domain spec of your virtual machine configuration file. The following example shows a specific CPU model defined for a VM: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1 1 CPU model for the VM. 7.13.10.4. Scheduling virtual machines with the host model When the CPU model for a virtual machine (VM) is set to host-model , the VM inherits the CPU model of the node where it is scheduled. Procedure Edit the domain spec of your VM configuration file. The following example shows host-model being specified for the virtual machine: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1 1 The VM inherits the CPU model of the node where it is scheduled. 7.13.10.5. Scheduling virtual machines with a custom scheduler You can use a custom scheduler to schedule a virtual machine (VM) on a node. Prerequisites A secondary scheduler is configured for your cluster. Procedure Add the custom scheduler to the VM configuration by editing the VirtualMachine manifest. For example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: running: true template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio # ... 1 The name of the custom scheduler. If the schedulerName value does not match an existing scheduler, the virt-launcher pod stays in a Pending state until the specified scheduler is found. Verification Verify that the VM is using the custom scheduler specified in the VirtualMachine manifest by checking the virt-launcher pod events: View the list of pods in your cluster by entering the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m Run the following command to display the pod events: USD oc describe pod virt-launcher-vm-fedora-dpc87 The value of the From field in the output verifies that the scheduler name matches the custom scheduler specified in the VirtualMachine manifest: Example output [...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...] Additional resources Deploying a secondary scheduler 7.13.11. Configuring PCI passthrough The Peripheral Component Interconnect (PCI) passthrough feature enables you to access and manage hardware devices from a virtual machine (VM). When PCI passthrough is configured, the PCI devices function as if they were physically attached to the guest operating system. Cluster administrators can expose and manage host devices that are permitted to be used in the cluster by using the oc command-line interface (CLI). 7.13.11.1. Preparing nodes for GPU passthrough You can prevent GPU operands from deploying on worker nodes that you designated for GPU passthrough. 7.13.11.1.1. Preventing NVIDIA GPU operands from deploying on nodes If you use the NVIDIA GPU Operator in your cluster, you can apply the nvidia.com/gpu.deploy.operands=false label to nodes that you do not want to configure for GPU or vGPU operands. This label prevents the creation of the pods that configure GPU or vGPU operands and terminates the pods if they already exist. Prerequisites The OpenShift CLI ( oc ) is installed. 
Procedure Label the node by running the following command: USD oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1 1 Replace <node_name> with the name of a node where you do not want to install the NVIDIA GPU operands. Verification Verify that the label was added to the node by running the following command: USD oc describe node <node_name> Optional: If GPU operands were previously deployed on the node, verify their removal. Check the status of the pods in the nvidia-gpu-operator namespace by running the following command: USD oc get pods -n nvidia-gpu-operator Example output NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-sandbox-validator-kxwj7 1/1 Terminating 0 9d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d nvidia-vfio-manager-zqtck 1/1 Terminating 0 9d Monitor the pod status until the pods with Terminating status are removed: USD oc get pods -n nvidia-gpu-operator Example output NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d 7.13.11.2. Preparing host devices for PCI passthrough 7.13.11.2.1. About preparing a host device for PCI passthrough To prepare a host device for PCI passthrough by using the CLI, create a MachineConfig object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the permittedHostDevices field of the HyperConverged custom resource (CR). The permittedHostDevices list is empty when you first install the OpenShift Virtualization Operator. To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the HyperConverged CR. 7.13.11.2.2. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites You have cluster administrator permissions. Your CPU hardware is Intel or AMD. You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 # ... 1 Applies the new kernel argument only to worker nodes. 2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 7.13.11.2.3. Binding PCI devices to the VFIO driver To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for vendor-ID and device-ID from each device and create a list with the values. Add this list to the MachineConfig object. 
The MachineConfig Operator generates the /etc/modprobe.d/vfio.conf on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver. Prerequisites You added kernel arguments to enable IOMMU for the CPU. Procedure Run the lspci command to obtain the vendor-ID and the device-ID for the PCI device. USD lspci -nnv | grep -i nvidia Example output 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) Create a Butane config file, 100-worker-vfiopci.bu , binding the PCI device to the VFIO driver. Note See "Creating machine configs with Butane" for information about Butane. Example variant: openshift version: 4.15.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci 1 Applies the new kernel argument only to worker nodes. 2 Specify the previously determined vendor-ID value ( 10de ) and the device-ID value ( 1eb8 ) to bind a single device to the VFIO driver. You can add a list of multiple devices with their vendor and device information. 3 The file that loads the vfio-pci kernel module on the worker nodes. Use Butane to generate a MachineConfig object file, 100-worker-vfiopci.yaml , containing the configuration to be delivered to the worker nodes: USD butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml Apply the MachineConfig object to the worker nodes: USD oc apply -f 100-worker-vfiopci.yaml Verify that the MachineConfig object was added. USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s Verification Verify that the VFIO driver is loaded. USD lspci -nnk -d 10de: The output confirms that the VFIO driver is being used. Example output 7.13.11.2.4. Exposing PCI host devices in the cluster using the CLI To expose PCI host devices in the cluster, add details about the PCI devices to the spec.permittedHostDevices.pciHostDevices array of the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Add the PCI device information to the spec.permittedHostDevices.pciHostDevices array. For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: "10DE:1DB6" 3 resourceName: "nvidia.com/GV100GL_Tesla_V100" 4 - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" - pciDeviceSelector: "8086:6F54" resourceName: "intel.com/qat" externalResourceProvider: true 5 # ... 1 The host devices that are permitted to be used in the cluster. 2 The list of PCI devices available on the node. 3 The vendor-ID and the device-ID required to identify the PCI device. 
4 The name of a PCI host device. 5 Optional: Setting this field to true indicates that the resource is provided by an external device plugin. OpenShift Virtualization allows the usage of this device in the cluster but leaves the allocation and monitoring to an external device plugin. Note The above example snippet shows two PCI host devices that are named nvidia.com/GV100GL_Tesla_V100 and nvidia.com/TU104GL_Tesla_T4 added to the list of permitted host devices in the HyperConverged CR. These devices have been tested and verified to work with OpenShift Virtualization. Save your changes and exit the editor. Verification Verify that the PCI host devices were added to the node by running the following command. The example output shows that there is one device each associated with the nvidia.com/GV100GL_Tesla_V100 , nvidia.com/TU104GL_Tesla_T4 , and intel.com/qat resource names. USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 7.13.11.2.5. Removing PCI host devices from the cluster using the CLI To remove a PCI host device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the PCI device information from the spec.permittedHostDevices.pciHostDevices array by deleting the pciDeviceSelector , resourceName and externalResourceProvider (if applicable) fields for the appropriate device. In this example, the intel.com/qat resource has been deleted. Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: "10DE:1DB6" resourceName: "nvidia.com/GV100GL_Tesla_V100" - pciDeviceSelector: "10DE:1EB8" resourceName: "nvidia.com/TU104GL_Tesla_T4" # ... Save your changes and exit the editor. Verification Verify that the PCI host device was removed from the node by running the following command. The example output shows that there are zero devices associated with the intel.com/qat resource name. USD oc describe node <node_name> Example output Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 7.13.11.3. Configuring virtual machines for PCI passthrough After the PCI devices have been added to the cluster, you can assign them to virtual machines. 
The PCI devices are now available as if they are physically connected to the virtual machines. 7.13.11.3.1. Assigning a PCI device to a virtual machine When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough. Procedure Assign the PCI device to a virtual machine as a host device. Example apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1 1 The name of the PCI device that is permitted on the cluster as a host device. The virtual machine can access this host device. Verification Use the following command to verify that the host device is available from the virtual machine. USD lspci -nnk | grep NVIDIA Example output USD 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) 7.13.11.4. Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS Managing file permissions Postinstallation machine configuration tasks 7.13.12. Configuring virtual GPUs If you have graphics processing unit (GPU) cards, OpenShift Virtualization can automatically create virtual GPUs (vGPUs) that you can assign to virtual machines (VMs). 7.13.12.1. About using virtual GPUs with OpenShift Virtualization Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). OpenShift Virtualization can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the HyperConverged custom resource (CR). This automation is especially useful for large clusters. Note Refer to your hardware vendor's documentation for functionality and support details. Mediated device A physical device that is divided into one or more virtual devices. A vGPU is a type of mediated device (mdev); the performance of the physical GPU is divided among the virtual devices. You can assign mediated devices to one or more virtual machines (VMs), but the number of guests must be compatible with your GPU. Some GPUs do not support multiple guests. 7.13.12.2. Preparing hosts for mediated devices You must enable the Input-Output Memory Management Unit (IOMMU) driver before you can configure mediated devices. 7.13.12.2.1. Adding kernel arguments to enable the IOMMU driver To enable the IOMMU driver in the kernel, create the MachineConfig object and add the kernel arguments. Prerequisites You have cluster administrator permissions. Your CPU hardware is Intel or AMD. You enabled Intel Virtualization Technology for Directed I/O extensions or AMD IOMMU in the BIOS. Procedure Create a MachineConfig object that identifies the kernel argument. The following example shows a kernel argument for an Intel CPU. apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3 # ... 1 Applies the new kernel argument only to worker nodes. 2 The name indicates the ranking of this kernel argument (100) among the machine configs and its purpose. If you have an AMD CPU, specify the kernel argument as amd_iommu=on . 3 Identifies the kernel argument as intel_iommu for an Intel CPU. Create the new MachineConfig object: USD oc create -f 100-worker-kernel-arg-iommu.yaml Verification Verify that the new MachineConfig object was added. USD oc get MachineConfig 7.13.12.3. 
Configuring the NVIDIA GPU Operator You can use the NVIDIA GPU Operator to provision worker nodes for running GPU-accelerated virtual machines (VMs) in OpenShift Virtualization. Note The NVIDIA GPU Operator is supported only by NVIDIA. For more information, see Obtaining Support from NVIDIA in the Red Hat Knowledgebase. 7.13.12.3.1. About using the NVIDIA GPU Operator You can use the NVIDIA GPU Operator with OpenShift Virtualization to rapidly provision worker nodes for running GPU-enabled virtual machines (VMs). The NVIDIA GPU Operator manages NVIDIA GPU resources in an OpenShift Container Platform cluster and automates tasks that are required when preparing nodes for GPU workloads. Before you can deploy application workloads to a GPU resource, you must install components such as the NVIDIA drivers that enable the compute unified device architecture (CUDA), Kubernetes device plugin, container runtime, and other features, such as automatic node labeling and monitoring. By automating these tasks, you can quickly scale the GPU capacity of your infrastructure. The NVIDIA GPU Operator can especially facilitate provisioning complex artificial intelligence and machine learning (AI/ML) workloads. 7.13.12.3.2. Options for configuring mediated devices There are two available methods for configuring mediated devices when using the NVIDIA GPU Operator. The method that Red Hat tests uses OpenShift Virtualization features to schedule mediated devices, while the NVIDIA method only uses the GPU Operator. Using the NVIDIA GPU Operator to configure mediated devices This method exclusively uses the NVIDIA GPU Operator to configure mediated devices. To use this method, refer to NVIDIA GPU Operator with OpenShift Virtualization in the NVIDIA documentation. Using OpenShift Virtualization to configure mediated devices This method, which is tested by Red Hat, uses OpenShift Virtualization's capabilities to configure mediated devices. In this case, the NVIDIA GPU Operator is only used for installing drivers with the NVIDIA vGPU Manager. The GPU Operator does not configure mediated devices. When using the OpenShift Virtualization method, you still configure the GPU Operator by following the NVIDIA documentation . However, this method differs from the NVIDIA documentation in the following ways: You must not overwrite the default disableMDEVConfiguration: false setting in the HyperConverged custom resource (CR). Important Setting this feature gate as described in the NVIDIA documentation prevents OpenShift Virtualization from configuring mediated devices. You must configure your ClusterPolicy manifest so that it matches the following example: Example manifest kind: ClusterPolicy apiVersion: nvidia.com/v1 metadata: name: gpu-cluster-policy spec: operator: defaultRuntime: crio use_ocp_driver_toolkit: true initContainer: {} sandboxWorkloads: enabled: true defaultWorkload: vm-vgpu driver: enabled: false 1 dcgmExporter: {} dcgm: enabled: true daemonsets: {} devicePlugin: {} gfd: {} migManager: enabled: true nodeStatusExporter: enabled: true mig: strategy: single toolkit: enabled: true validator: plugin: env: - name: WITH_WORKLOAD value: "true" vgpuManager: enabled: true 2 repository: <vgpu_container_registry> 3 image: <vgpu_image_name> version: nvidia-vgpu-manager vgpuDeviceManager: enabled: false 4 config: name: vgpu-devices-config default: default sandboxDevicePlugin: enabled: false 5 vfioManager: enabled: false 6 1 Set this value to false . Not required for VMs. 2 Set this value to true . 
Required for using vGPUs with VMs. 3 Substitute <vgpu_container_registry> with your registry value. 4 Set this value to false to allow OpenShift Virtualization to configure mediated devices instead of the NVIDIA GPU Operator. 5 Set this value to false to prevent discovery and advertising of the vGPU devices to the kubelet. 6 Set this value to false to prevent loading the vfio-pci driver. Instead, follow the OpenShift Virtualization documentation to configure PCI passthrough. Additional resources Configuring PCI passthrough 7.13.12.4. How vGPUs are assigned to nodes For each physical device, OpenShift Virtualization configures the following values: A single mdev type. The maximum number of instances of the selected mdev type. The cluster architecture affects how devices are created and assigned to nodes. Large cluster with multiple cards per node On nodes with multiple cards that can support similar vGPU types, the relevant device types are created in a round-robin manner. For example: # ... mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108 # ... In this scenario, each node has two cards, both of which support the following vGPU types: nvidia-105 # ... nvidia-108 nvidia-217 nvidia-299 # ... On each node, OpenShift Virtualization creates the following vGPUs: 16 vGPUs of type nvidia-105 on the first card. 2 vGPUs of type nvidia-108 on the second card. One node has a single card that supports more than one requested vGPU type OpenShift Virtualization uses the supported type that comes first on the mediatedDeviceTypes list. For example, the card on a node card supports nvidia-223 and nvidia-224 . The following mediatedDeviceTypes list is configured: # ... mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-22 - nvidia-223 - nvidia-224 # ... In this example, OpenShift Virtualization uses the nvidia-223 type. 7.13.12.5. Managing mediated devices Before you can assign mediated devices to virtual machines, you must create the devices and expose them to the cluster. You can also reconfigure and remove mediated devices. 7.13.12.5.1. Creating and exposing mediated devices As an administrator, you can create mediated devices and expose them to the cluster by editing the HyperConverged custom resource (CR). Prerequisites You enabled the Input-Output Memory Management Unit (IOMMU) driver. If your hardware vendor provides drivers, you installed them on the nodes where you want to create mediated devices. If you use NVIDIA cards, you installed the NVIDIA GRID driver . Procedure Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Example 7.1. Example configuration file with mediated devices configured apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 nodeMediatedDeviceTypes: - mediatedDeviceTypes: - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q # ... Create mediated devices by adding them to the spec.mediatedDevicesConfiguration stanza: Example YAML snippet # ... 
spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDeviceTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value> # ... 1 Required: Configures global settings for the cluster. 2 Optional: Overrides the global configuration for a specific node or group of nodes. Must be used with the global mediatedDeviceTypes configuration. 3 Required if you use nodeMediatedDeviceTypes . Overrides the global mediatedDeviceTypes configuration for the specified nodes. 4 Required if you use nodeMediatedDeviceTypes . Must include a key:value pair. Important Before OpenShift Virtualization 4.14, the mediatedDeviceTypes field was named mediatedDevicesTypes . Ensure that you use the correct field name when configuring mediated devices. Identify the name selector and resource name values for the devices that you want to expose to the cluster. You will add these values to the HyperConverged CR in the step. Find the resourceName value by running the following command: USD oc get USDNODE -o json \ | jq '.status.allocatable \ | with_entries(select(.key | startswith("nvidia.com/"))) \ | with_entries(select(.value != "0"))' Find the mdevNameSelector value by viewing the contents of /sys/bus/pci/devices/<slot>:<bus>:<domain>.<function>/mdev_supported_types/<type>/name , substituting the correct values for your system. For example, the name file for the nvidia-231 type contains the selector string GRID T4-2Q . Using GRID T4-2Q as the mdevNameSelector value allows nodes to use the nvidia-231 type. Expose the mediated devices to the cluster by adding the mdevNameSelector and resourceName values to the spec.permittedHostDevices.mediatedDevices stanza of the HyperConverged CR: Example YAML snippet # ... permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2 # ... 1 Exposes the mediated devices that map to this value on the host. 2 Matches the resource name that is allocated on the node. Save your changes and exit the editor. Verification Optional: Confirm that a device was added to a specific node by running the following command: USD oc describe node <node_name> 7.13.12.5.2. About changing and removing mediated devices You can reconfigure or remove mediated devices in several ways: Edit the HyperConverged CR and change the contents of the mediatedDeviceTypes stanza. Change the node labels that match the nodeMediatedDeviceTypes node selector. Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Note If you remove the device information from the spec.permittedHostDevices stanza without also removing it from the spec.mediatedDevicesConfiguration stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas. 7.13.12.5.3. Removing mediated devices from the cluster To remove a mediated device from the cluster, delete the information for that device from the HyperConverged custom resource (CR). Procedure Edit the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Remove the device information from the spec.mediatedDevicesConfiguration and spec.permittedHostDevices stanzas of the HyperConverged CR. Removing both entries ensures that you can later create a new mediated device type on the same node. 
For example: Example configuration file apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q 1 To remove the nvidia-231 device type, delete it from the mediatedDeviceTypes array. 2 To remove the GRID T4-2Q device, delete the mdevNameSelector field and its corresponding resourceName field. Save your changes and exit the editor. 7.13.12.6. Using mediated devices You can assign mediated devices to one or more virtual machines. 7.13.12.6.1. Assigning a vGPU to a VM by using the CLI Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs). Prerequisites The mediated device is configured in the HyperConverged custom resource. The VM is stopped. Procedure Assign the mediated device to a virtual machine (VM) by editing the spec.domain.devices.gpus stanza of the VirtualMachine manifest: Example virtual machine manifest apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-2Q name: gpu2 1 The resource name associated with the mediated device. 2 A name to identify the device on the VM. Verification To verify that the device is available from the virtual machine, run the following command, substituting <device_name> with the deviceName value from the VirtualMachine manifest: USD lspci -nnk | grep <device_name> 7.13.12.6.2. Assigning a vGPU to a VM by using the web console You can assign virtual GPUs to virtual machines by using the OpenShift Container Platform web console. Note You can add hardware devices to virtual machines created from customized templates or a YAML file. You cannot add devices to pre-supplied boot source templates for specific operating systems. Prerequisites The vGPU is configured as a mediated device in your cluster. To view the devices that are connected to your cluster, click Compute Hardware Devices from the side menu. The VM is stopped. Procedure In the OpenShift Container Platform web console, click Virtualization VirtualMachines from the side menu. Select the VM that you want to assign the device to. On the Details tab, click GPU devices . Click Add GPU device . Enter an identifying value in the Name field. From the Device name list, select the device that you want to add to the VM. Click Save . Verification To confirm that the devices were added to the VM, click the YAML tab and review the VirtualMachine configuration. Mediated devices are added to the spec.domain.devices stanza. 7.13.12.7. Additional resources Enabling Intel VT-X and AMD-V Virtualization Hardware Extensions in BIOS 7.13.13. Enabling descheduler evictions on virtual machines You can use the descheduler to evict pods so that the pods can be rescheduled onto more appropriate nodes. If the pod is a virtual machine, the pod eviction causes the virtual machine to be live migrated to another node. Important Descheduler eviction for virtual machines is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. 
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 7.13.13.1. Descheduler profiles Use the Technology Preview DevPreviewLongLifecycle profile to enable the descheduler on a virtual machine. This is the only descheduler profile currently available for OpenShift Virtualization. To ensure proper scheduling, create VMs with CPU and memory requests for the expected load. DevPreviewLongLifecycle This profile balances resource usage between nodes and enables the following strategies: RemovePodsHavingTooManyRestarts : removes pods whose containers have been restarted too many times and pods where the sum of restarts over all containers (including Init Containers) is more than 100. Restarting the VM guest operating system does not increase this count. LowNodeUtilization : evicts pods from overutilized nodes when there are any underutilized nodes. The destination node for the evicted pod will be determined by the scheduler. A node is considered underutilized if its usage is below 20% for all thresholds (CPU, memory, and number of pods). A node is considered overutilized if its usage is above 50% for any of the thresholds (CPU, memory, and number of pods). 7.13.13.2. Installing the descheduler The descheduler is not available by default. To enable the descheduler, you must install the Kube Descheduler Operator from OperatorHub and enable one or more descheduler profiles. By default, the descheduler runs in predictive mode, which means that it only simulates pod evictions. You must change the mode to automatic for the descheduler to perform the pod evictions. Important If you have enabled hosted control planes in your cluster, set a custom priority threshold to lower the chance that pods in the hosted control plane namespaces are evicted. Set the priority threshold class name to hypershift-control-plane , because it has the lowest priority value ( 100000000 ) of the hosted control plane priority classes. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Access to the OpenShift Container Platform web console. Procedure Log in to the OpenShift Container Platform web console. Create the required namespace for the Kube Descheduler Operator. Navigate to Administration Namespaces and click Create Namespace . Enter openshift-kube-descheduler-operator in the Name field, enter openshift.io/cluster-monitoring=true in the Labels field to enable descheduler metrics, and click Create . Install the Kube Descheduler Operator. Navigate to Operators OperatorHub . Type Kube Descheduler Operator into the filter box. Select the Kube Descheduler Operator and click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-kube-descheduler-operator from the drop-down menu. Adjust the values for the Update Channel and Approval Strategy to the desired values. Click Install . Create a descheduler instance. From the Operators Installed Operators page, click the Kube Descheduler Operator . Select the Kube Descheduler tab and click Create KubeDescheduler . Edit the settings as necessary. To evict pods instead of simulating the evictions, change the Mode field to Automatic . Expand the Profiles section and select DevPreviewLongLifecycle . The AffinityAndTaints profile is enabled by default. Important The only profile currently available for OpenShift Virtualization is DevPreviewLongLifecycle . 
You can also configure the profiles and settings for the descheduler later using the OpenShift CLI ( oc ). 7.13.13.3. Enabling descheduler evictions on a virtual machine (VM) After the descheduler is installed, you can enable descheduler evictions on your VM by adding an annotation to the VirtualMachine custom resource (CR). Prerequisites Install the descheduler in the OpenShift Container Platform web console or OpenShift CLI ( oc ). Ensure that the VM is not running. Procedure Before starting the VM, add the descheduler.alpha.kubernetes.io/evict annotation to the VirtualMachine CR: apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: "true" If you did not already set the DevPreviewLongLifecycle profile in the web console during installation, specify the DevPreviewLongLifecycle in the spec.profile section of the KubeDescheduler object: apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1 1 By default, the descheduler does not evict pods. To evict pods, set mode to Automatic . The descheduler is now enabled on the VM. 7.13.13.4. Additional resources Descheduler overview 7.13.14. About high availability for virtual machines You can enable high availability for virtual machines (VMs) by manually deleting a failed node to trigger VM failover or by configuring remediating nodes. Manually deleting a failed node If a node fails and machine health checks are not deployed on your cluster, virtual machines with runStrategy: Always configured are not automatically relocated to healthy nodes. To trigger VM failover, you must manually delete the Node object. See Deleting a failed node to trigger virtual machine failover . Configuring remediating nodes You can configure remediating nodes by installing the Self Node Remediation Operator or the Fence Agents Remediation Operator from the OperatorHub and enabling machine health checks or node remediation checks. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. 7.13.15. Virtual machine control plane tuning OpenShift Virtualization offers the following tuning options at the control-plane level: The highBurst profile, which uses fixed QPS and burst rates, to create hundreds of virtual machines (VMs) in one batch Migration setting adjustment based on workload type 7.13.15.1. Configuring a highBurst profile Use the highBurst profile to create and maintain a large number of virtual machines (VMs) in one cluster. Procedure Apply the following patch to enable the highBurst tuning policy profile: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \ --type=json -p='[{"op": "add", "path": "/spec/tuningPolicy", \ "value": "highBurst"}]' Verification Run the following command to verify the highBurst tuning policy profile is enabled: USD oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged \ -n openshift-cnv -o go-template --template='{{range USDconfig, \ USDvalue := .spec.configuration}} {{if eq USDconfig "apiConfiguration" \ "webhookConfiguration" "controllerConfiguration" "handlerConfiguration"}} \ {{"\n"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{"\n"}} 7.13.16. 
Assigning compute resources In OpenShift Virtualization, compute resources assigned to virtual machines (VMs) are backed by either guaranteed CPUs or time-sliced CPU shares. Guaranteed CPUs, also known as CPU reservation, dedicate CPU cores or threads to a specific workload, which makes them unavailable to any other workload. Assigning guaranteed CPUs to a VM ensures that the VM will have sole access to a reserved physical CPU. Enable dedicated resources for VMs to use a guaranteed CPU. Time-sliced CPUs dedicate a slice of time on a shared physical CPU to each workload. You can specify the size of the slice during VM creation, or when the VM is offline. By default, each vCPU receives 100 milliseconds, or 1/10 of a second, of physical CPU time. The type of CPU reservation depends on the instance type or VM configuration. 7.13.16.1. Overcommitting CPU resources Time-slicing allows multiple virtual CPUs (vCPUs) to share a single physical CPU. This is known as CPU overcommitment . Guaranteed VMs can not be overcommitted. Configure CPU overcommitment to prioritize VM density over performance when assigning CPUs to VMs. With a higher CPU over-commitment of vCPUs, more VMs fit onto a given node. 7.13.16.2. Setting the CPU allocation ratio The CPU Allocation Ratio specifies the degree of overcommitment by mapping vCPUs to time slices of physical CPUs. For example, a mapping or ratio of 10:1 maps 10 virtual CPUs to 1 physical CPU by using time slices. To change the default number of vCPUs mapped to each physical CPU, set the vmiCPUAllocationRatio value in the HyperConverged CR. The pod CPU request is calculated by multiplying the number of vCPUs by the reciprocal of the CPU allocation ratio. For example, if vmiCPUAllocationRatio is set to 10, OpenShift Virtualization will request 10 times fewer CPUs on the pod for that VM. Procedure Set the vmiCPUAllocationRatio value in the HyperConverged CR to define a node CPU allocation ratio. Open the HyperConverged CR in your default editor by running the following command: USD oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv Set the vmiCPUAllocationRatio : ... spec: resourceRequirements: vmiCPUAllocationRatio: 1 1 # ... 1 When vmiCPUAllocationRatio is set to 1 , the maximum amount of vCPUs are requested for the pod. 7.13.16.3. Additional resources Pod Quality of Service Classes 7.14. VM disks 7.14.1. Hot-plugging VM disks You can add or remove virtual disks without stopping your virtual machine (VM) or virtual machine instance (VMI). Only data volumes and persistent volume claims (PVCs) can be hot plugged and hot-unplugged. You cannot hot plug or hot-unplug container disks. A hot plugged disk remains attached to the VM even after reboot. You must detach the disk to remove it from the VM. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Note Each VM has a virtio-scsi controller so that hot plugged disks can use the scsi bus. The virtio-scsi controller overcomes the limitations of virtio while retaining its performance advantages. It is highly scalable and supports hot plugging over 4 million disks. Regular virtio is not available for hot plugged disks because it is not scalable. Each virtio disk uses one of the limited PCI Express (PCIe) slots in the VM. PCIe slots are also used by other devices and must be reserved in advance. Therefore, slots might not be available on demand. 7.14.1.1. 
Hot plugging and hot unplugging a disk by using the web console You can hot plug a disk by attaching it to a virtual machine (VM) while the VM is running by using the OpenShift Container Platform web console. The hot plugged disk remains attached to the VM until you unplug it. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Prerequisites You must have a data volume or persistent volume claim (PVC) available for hot plugging. Procedure Navigate to Virtualization VirtualMachines in the web console. Select a running VM to view its details. On the VirtualMachine details page, click Configuration Disks . Add a hot plugged disk: Click Add disk . In the Add disk (hot plugged) window, select the disk from the Source list and click Save . Optional: Unplug a hot plugged disk: Click the options menu beside the disk and select Detach . Click Detach . Optional: Make a hot plugged disk persistent: Click the options menu beside the disk and select Make persistent . Reboot the VM to apply the change. 7.14.1.2. Hot plugging and hot unplugging a disk by using the command line You can hot plug and hot unplug a disk while a virtual machine (VM) is running by using the command line. You can make a hot plugged disk persistent so that it is permanently mounted on the VM. Prerequisites You must have at least one data volume or persistent volume claim (PVC) available for hot plugging. Procedure Hot plug a disk by running the following command: USD virtctl addvolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC> \ [--persist] [--serial=<label-name>] Use the optional --persist flag to add the hot plugged disk to the virtual machine specification as a permanently mounted virtual disk. Stop, restart, or reboot the virtual machine to permanently mount the virtual disk. After specifying the --persist flag, you can no longer hot plug or hot unplug the virtual disk. The --persist flag applies to virtual machines, not virtual machine instances. The optional --serial flag allows you to add an alphanumeric string label of your choice. This helps you to identify the hot plugged disk in a guest virtual machine. If you do not specify this option, the label defaults to the name of the hot plugged data volume or PVC. Hot unplug a disk by running the following command: USD virtctl removevolume <virtual-machine|virtual-machine-instance> \ --volume-name=<datavolume|PVC> 7.14.2. Expanding virtual machine disks You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. If your storage provider does not support volume expansion, you can expand the available virtual storage of a VM by adding blank data volumes. You cannot reduce the size of a VM disk. 7.14.2.1. Expanding a VM disk PVC You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. If the PVC uses the file system volume mode, the disk image file expands to the available size while reserving some space for file system overhead. Procedure Edit the PersistentVolumeClaim manifest of the VM disk that you want to expand: USD oc edit pvc <pvc_name> Update the disk size: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1 # ... 1 Specify the new disk size. 
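If you prefer a non-interactive change over oc edit , you can apply the same size increase with a patch. This is a minimal sketch; <pvc_name> and the 3Gi value are placeholders that must match your disk PVC and target size:

$ oc patch pvc <pvc_name> --type=merge -p '{"spec":{"resources":{"requests":{"storage":"3Gi"}}}}'

The storage class backing the PVC must support volume expansion for either method to take effect.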
Additional resources for volume expansion Extending a basic volume in Windows Extending an existing file system partition without destroying data in Red Hat Enterprise Linux Extending a logical volume and its file system online in Red Hat Enterprise Linux 7.14.2.2. Expanding available virtual storage by adding blank data volumes You can expand the available storage of a virtual machine (VM) by adding blank data volumes. Prerequisites You must have at least one persistent volume. Procedure Create a DataVolume manifest as shown in the following example: Example DataVolume manifest apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: <2Gi> 1 storageClassName: "<storage_class>" 2 1 Specify the amount of available space requested for the data volume. 2 Optional: If you do not specify a storage class, the default storage class is used. Create the data volume by running the following command: USD oc create -f <blank-image-datavolume>.yaml Additional resources for data volumes Configuring preallocation mode for data volumes Managing data volume annotations 7.14.3. Configuring shared volumes for virtual machines You can configure shared disks to allow multiple virtual machines (VMs) to share the same underlying storage. A shared disk's volume must be block mode. You configure disk sharing by exposing the storage as either of these types: An ordinary VM disk A logical unit number (LUN) disk with an SCSI connection and raw device mapping, as required for Windows Failover Clustering for shared volumes In addition to configuring disk sharing, you can also set an error policy for each ordinary VM disk or LUN disk. The error policy controls how the hypervisor behaves when an input/output error occurs on a disk Read or Write. 7.14.3.1. Configuring disk sharing by using virtual machine disks You can configure block volumes so that multiple virtual machines (VMs) can share storage. The application running on the guest operating system determines the storage option you must configure for the VM. A disk of type disk exposes the volume as an ordinary disk to the VM. Prerequisites The volume access mode must be ReadWriteMany (RWX) if the VMs that are sharing disks are running on different nodes. If the VMs that are sharing disks are running on the same node, ReadWriteOnce (RWO) volume access mode is sufficient. The storage provider must support the required Container Storage Interface (CSI) driver. Procedure Create the VirtualMachine manifest for your VM to set the required values, as shown in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: # ... spec: domain: devices: disks: - disk: bus: virtio name: rootdisk disk1: disk_one 1 - disk: bus: virtio name: cloudinitdisk disk2: disk_two shareable: true 2 interfaces: - masquerade: {} name: default 1 Identifies a device as a disk. 2 Identifies a shared disk. Save the VirtualMachine manifest file to apply your changes. 7.14.3.2. Configuring disk sharing by using LUN To secure data on your VM from outside access, you can enable SCSI persistent reservation and configure a LUN-backed virtual machine disk to be shared among multiple virtual machines. By enabling the shared option, you can use advanced SCSI commands, such as those required for a Windows failover clustering implementation, for managing the underlying storage. 
When a storage volume is configured as the LUN disk type, a VM can use the volume as a logical unit number (LUN) device. As a result, the VM can deploy and manage the disk by using SCSI commands. You reserve a LUN through the SCSI persistent reserve options. To enable the reservation: Configure the feature gate option Activate the feature gate option on the LUN disk to issue SCSI device-specific input and output controls (IOCTLs) that the VM requires. Important OpenShift Virtualization does not currently support SCSI-3 Persistent Reservations (SCSI-3 PR) over multipath storage. As a workaround, disable multipath or ensure the Windows Server Failover Clustering (WSFC) shared disk is setup from a single device and not part of multipath. Prerequisites You must have cluster administrator privileges to configure the feature gate option. The volume access mode must be ReadWriteMany (RWX) if the VMs that are sharing disks are running on different nodes. If the VMs that are sharing disks are running on the same node, ReadWriteOnce (RWO) volume access mode is sufficient. The storage provider must support a Container Storage Interface (CSI) driver that uses Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), or iSCSI storage protocols. If you are a cluster administrator and intend to configure disk sharing by using LUN, you must enable the cluster's feature gate on the HyperConverged custom resource (CR). Disks that you want to share must be in block mode. Procedure Edit or create the VirtualMachine manifest for your VM to set the required values, as shown in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report lun: 1 bus: scsi reservation: true 2 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share 1 Identifies a LUN disk. 2 Identifies that the persistent reservation is enabled. Save the VirtualMachine manifest file to apply your changes. 7.14.3.2.1. Configuring disk sharing by using LUN and the web console You can use the OpenShift Container Platform web console to configure disk sharing by using LUN. Prerequisites The cluster administrator must enable the persistentreservation feature gate setting. Procedure Click Virtualization VirtualMachines in the web console. Select a VM to open the VirtualMachine details page. Expand Storage . On the Disks tab, click Add disk . Specify the Name , Source , Size , Interface , and Storage Class . Select LUN as the Type . Select Shared access (RWX) as the Access Mode . Select Block as the Volume Mode . Expand Advanced Settings , and select both checkboxes. Click Save . 7.14.3.2.2. Configuring disk sharing by using LUN and the command line You can use the command line to configure disk sharing by using LUN. Procedure Edit or create the VirtualMachine manifest for your VM to set the required values, as shown in the following example: apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report lun: 1 bus: scsi reservation: true 2 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share 1 Identifies a LUN disk. 2 Identifies that the persistent reservation is enabled. Save the VirtualMachine manifest file to apply your changes. 
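After saving the manifest, you can apply it and check that the shared LUN disk is part of the VM definition. The following commands are a sketch; the file name is a placeholder and vm-0 matches the example manifest above:

$ oc apply -f <vm_manifest>.yaml
$ oc get vm vm-0 -o jsonpath='{.spec.template.spec.domain.devices.disks[*].name}'

The output should include the na-shared disk alongside rootdisk.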
7.14.3.3. Enabling the PersistentReservation feature gate You can enable the SCSI persistentReservation feature gate and allow a LUN-backed block mode virtual machine (VM) disk to be shared among multiple virtual machines. The persistentReservation feature gate is disabled by default. You can enable the persistentReservation feature gate by using the web console or the command line. Prerequisites Cluster administrator privileges are required. The volume access mode ReadWriteMany (RWX) is required if the VMs that are sharing disks are running on different nodes. If the VMs that are sharing disks are running on the same node, the ReadWriteOnce (RWO) volume access mode is sufficient. The storage provider must support a Container Storage Interface (CSI) driver that uses Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), or iSCSI storage protocols. 7.14.3.3.1. Enabling the PersistentReservation feature gate by using the web console You must enable the PersistentReservation feature gate to allow a LUN-backed block mode virtual machine (VM) disk to be shared among multiple virtual machines. Enabling the feature gate requires cluster administrator privileges. Procedure Click Virtualization Overview in the web console. Click the Settings tab. Select Cluster . Expand SCSI persistent reservation and set Enable persistent reservation to on. 7.14.3.3.2. Enabling the PersistentReservation feature gate by using the command line You enable the persistentReservation feature gate by using the command line. Enabling the feature gate requires cluster administrator privileges. Procedure Enable the persistentReservation feature gate by running the following command: USD oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p \ '[{"op":"replace","path":"/spec/featureGates/persistentReservation", "value": true}]' Additional resources Persistent reservation helper protocol Failover Clustering in Windows Server and Azure Stack HCI
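With either method, you can confirm that the change took effect by reading the feature gate back from the HyperConverged custom resource. This is a sketch using standard JSONPath output; it assumes the default resource name and namespace shown in the patch command above: $ oc get hyperconverged kubevirt-hyperconverged -n openshift-cnv -o jsonpath='{.spec.featureGates.persistentReservation}'
The command should return true after the feature gate is enabled.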
[ "apiVersion: instancetype.kubevirt.io/v1beta1 kind: VirtualMachineInstancetype metadata: name: example-instancetype spec: cpu: guest: 1 1 memory: guest: 128Mi 2", "virtctl create instancetype --cpu 2 --memory 256Mi", "virtctl create instancetype --cpu 2 --memory 256Mi | oc apply -f -", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: rhel-9-minimal spec: dataVolumeTemplates: - metadata: name: rhel-9-minimal-volume spec: sourceRef: kind: DataSource name: rhel9 1 namespace: openshift-virtualization-os-images 2 storage: {} instancetype: name: u1.medium 3 preference: name: rhel.9 4 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: rhel-9-minimal-volume name: rootdisk", "oc create -f <vm_manifest_file>.yaml", "virtctl start <vm_name> -n <namespace>", "cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF", "podman build -t <registry>/<container_disk_name>:latest .", "podman push <registry>/<container_disk_name>:latest", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: storageImport: insecureRegistries: 1 - \"private-registry-example-1:5000\" - \"private-registry-example-2:5000\"", "apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 1 secretKey: \"\" 2", "oc apply -f data-source-secret.yaml", "oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: registry: url: \"docker://kubevirt/fedora-cloud-container-disk-demo:latest\" 5 secretRef: data-source-secret 6 certConfigMap: tls-certs 7 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}", "oc create -f vm-fedora-datavolume.yaml", "oc get pods", "oc describe dv fedora-dv 1", "virtctl console vm-fedora-datavolume", "apiVersion: v1 kind: Secret metadata: name: data-source-secret labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 1 secretKey: \"\" 2", "oc apply -f data-source-secret.yaml", "oc create configmap tls-certs 1 --from-file=</path/to/file/ca.pem> 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi 3 storageClassName: <storage_class> 4 source: http: url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 5 registry: url: \"docker://kubevirt/fedora-cloud-container-disk-demo:latest\" 6 secretRef: data-source-secret 7 certConfigMap: tls-certs 8 status: {} running: 
true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 180 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}", "oc create -f vm-fedora-datavolume.yaml", "oc get pods", "oc describe dv fedora-dv 1", "virtctl console vm-fedora-datavolume", "%WINDIR%\\System32\\Sysprep\\sysprep.exe /generalize /shutdown /oobe /mode:vm", "virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3", "oc get dvs", "yum install -y qemu-guest-agent", "systemctl enable --now qemu-guest-agent", "oc get vm <vm_name>", "net start", "spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk", "virtctl start <vm> -n <namespace>", "oc apply -f <vm.yaml>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: annotations: cdi.kubevirt.io/cloneFallbackReason: The volume modes of source and target are incompatible cdi.kubevirt.io/clonePhase: Succeeded cdi.kubevirt.io/cloneType: copy", "NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE test-ns 0s Warning IncompatibleVolumeModes persistentvolumeclaim/test-target The volume modes of source and target are incompatible", "kind: VolumeSnapshotClass apiVersion: snapshot.storage.k8s.io/v1 driver: openshift-storage.rbd.csi.ceph.com", "kind: StorageClass apiVersion: storage.k8s.io/v1 provisioner: openshift-storage.rbd.csi.ceph.com", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <datavolume> 1 spec: source: pvc: namespace: \"<source_namespace>\" 2 name: \"<my_vm_disk>\" 3 storage: {}", "oc create -f <datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: storage: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: <source_namespace> 2 name: \"<source_pvc>\" 3", "oc create -f <vm-clone-datavolumetemplate>.yaml", "virtctl vnc <vm_name>", "virtctl vnc <vm_name> -v 4", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/featureGates/deployVmConsoleProxy\", \"value\": true}]'", "curl --header \"Authorization: Bearer USD{TOKEN}\" \"https://api.<cluster_fqdn>/apis/token.kubevirt.io/v1alpha1/namespaces/<namespace>/virtualmachines/<vm_name>/vnc?duration=<duration>\"", "{ \"token\": \"eyJhb...\" }", "export VNC_TOKEN=\"<token>\"", "oc login --token USD{VNC_TOKEN}", "virtctl vnc <vm_name> -n <namespace>", "virtctl delete serviceaccount --namespace \"<namespace>\" \"<vm_name>-vnc-access\"", "virtctl console <vm_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} 
volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config user: cloud-user name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: noCloud: {} source: secret: secretName: authorized-keys", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: dataVolumeTemplates: - metadata: name: example-vm-volume spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: {} instancetype: name: u1.medium preference: name: rhel.9 running: true template: spec: domain: devices: {} volumes: - dataVolume: name: example-vm-volume name: rootdisk - cloudInitNoCloud: 1 userData: |- #cloud-config runcmd: - [ setsebool, -P, virt_qemu_ga_manage_ssh, on ] name: cloudinitdisk accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys 2 --- apiVersion: v1 kind: Secret metadata: name: authorized-keys data: key: c3NoLXJzYSB... 3", "oc create -f <manifest_file>.yaml", "virtctl start vm example-vm -n example-namespace", "oc describe vm example-vm -n example-namespace", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: template: spec: accessCredentials: - sshPublicKey: propagationMethod: qemuGuestAgent: users: [\"cloud-user\"] source: secret: secretName: authorized-keys", "virtctl -n <namespace> ssh <username>@example-vm -i <ssh_key> 1", "virtctl -n my-namespace ssh cloud-user@example-vm -i my-key", "Host vm/* ProxyCommand virtctl port-forward --stdio=true %h %p", "ssh <user>@vm/<vm_name>.<namespace>", "virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> 1", "virtctl expose vm example-vm --name example-service --type NodePort --port 22", "oc get service", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm namespace: example-namespace spec: running: false template: metadata: labels: special: key 1", "apiVersion: v1 kind: Service metadata: name: example-service namespace: example-namespace spec: selector: special: key 1 type: NodePort 2 ports: 3 protocol: TCP port: 80 targetPort: 9376 nodePort: 30000", "oc create -f example-service.yaml", "oc get service -n example-namespace", "ssh <user_name>@<ip_address> -p <port> 1", "oc describe vm <vm_name> -n <namespace>", "Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default", "ssh <user_name>@<ip_address> -i <ssh_key>", "ssh [email protected] -i ~/.ssh/id_rsa_cloud-user", "oc edit vm <vm_name>", "oc apply vm <vm_name> -n <namespace>", "oc edit vm <vm_name> -n <namespace>", "disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default", "oc delete vm <vm_name>", "apiVersion: 
export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" 1 kind: VirtualMachine 2 name: example-vm ttlDuration: 1h 3", "oc create -f example-export.yaml", "oc get vmexport example-export -o yaml", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export namespace: example spec: source: apiGroup: \"\" kind: PersistentVolumeClaim name: example-pvc tokenSecretRef: example-token status: conditions: - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:10:09Z\" reason: podReady status: \"True\" type: Ready - lastProbeTime: null lastTransitionTime: \"2022-06-21T14:09:02Z\" reason: pvcBound status: \"True\" type: PVCReady links: external: 1 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img - format: gzip url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/volumes/example-disk/disk.img.gz name: example-disk internal: 2 cert: |- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- volumes: - formats: - format: raw url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img - format: gzip url: https://virt-export-example-export.example.svc/volumes/example-disk/disk.img.gz name: example-disk phase: Ready serviceName: virt-export-example-export", "oc get vmexport <export_name> -o jsonpath={.status.links.external.cert} > cacert.crt 1", "oc get secret export-token-<export_name> -o jsonpath={.data.token} | base64 --decode > token_decode 1", "oc get vmexport <export_name> -o yaml", "apiVersion: export.kubevirt.io/v1alpha1 kind: VirtualMachineExport metadata: name: example-export spec: source: apiGroup: \"kubevirt.io\" kind: VirtualMachine name: example-vm tokenSecretRef: example-token status: # links: external: # manifests: - type: all url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all 1 - type: auth-header-secret url: https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret 2 internal: # manifests: - type: all url: https://virt-export-export-pvc.default.svc/internal/manifests/all 3 - type: auth-header-secret url: https://virt-export-export-pvc.default.svc/internal/manifests/secret phase: Ready serviceName: virt-export-example-export", "curl --cacert cacert.crt <secret_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/secret -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "curl --cacert cacert.crt <all_manifest_url> -H \\ 1 \"x-kubevirt-export-token:token_decode\" -H \\ 2 \"Accept:application/yaml\"", "curl --cacert cacert.crt https://vmexport-proxy.test.net/api/export.kubevirt.io/v1alpha1/namespaces/example/virtualmachineexports/example-export/external/manifests/all -H \"x-kubevirt-export-token:token_decode\" -H \"Accept:application/yaml\"", "oc get vmis -A", "oc delete vmi <vmi_name>", "kind: HyperConverged metadata: name: kubevirt-hyperconverged spec: vmStateStorageClass: 
<storage_class_name>", "oc edit vm <vm_name> -n <namespace>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: example-vm spec: template: spec: domain: devices: tpm: 1 persistent: true 2", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: kubevirt-hyperconverged spec: tektonPipelinesNamespace: <user_namespace> 1 featureGates: deployTektonTaskResources: true 2", "apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-installer-run- labels: pipelinerun: windows10-installer-run spec: params: - name: winImageDownloadURL value: <link_to_windows_10_iso> 1 pipelineRef: name: windows10-installer taskRunSpecs: - pipelineTaskName: copy-template taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task status: {}", "oc apply -f windows10-installer-run.yaml", "apiVersion: tekton.dev/v1beta1 kind: PipelineRun metadata: generateName: windows10-customize-run- labels: pipelinerun: windows10-customize-run spec: params: - name: allowReplaceGoldenTemplate value: true - name: allowReplaceCustomizationTemplate value: true pipelineRef: name: windows10-customize taskRunSpecs: - pipelineTaskName: copy-template-customize taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-customize taskServiceAccountName: modify-vm-template-task - pipelineTaskName: create-vm-from-template taskServiceAccountName: create-vm-from-template-task - pipelineTaskName: wait-for-vmi-status taskServiceAccountName: wait-for-vmi-status-task - pipelineTaskName: create-base-dv taskServiceAccountName: modify-data-object-task - pipelineTaskName: cleanup-vm taskServiceAccountName: cleanup-vm-task - pipelineTaskName: copy-template-golden taskServiceAccountName: copy-template-task - pipelineTaskName: modify-vm-template-golden taskServiceAccountName: modify-vm-template-task status: {}", "oc apply -f windows10-customize-run.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: with-limits spec: running: false template: spec: domain: resources: requests: memory: 128Mi limits: memory: 256Mi 1", "metadata: name: example-vm-node-selector apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: nodeSelector: example-key-1: example-value-1 example-key-2: example-value-2", "metadata: name: example-vm-pod-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchExpressions: - key: example-key-1 operator: In values: - example-value-1 topologyKey: kubernetes.io/hostname podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 podAffinityTerm: labelSelector: matchExpressions: - key: example-key-2 operator: In values: - example-value-2 topologyKey: kubernetes.io/hostname", "metadata: name: example-vm-node-affinity apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 1 nodeSelectorTerms: - matchExpressions: 
- key: example.io/example-key operator: In values: - example-value-1 - example-value-2 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 preference: matchExpressions: - key: example-node-label-key operator: In values: - example-node-label-value", "metadata: name: example-vm-tolerations apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: {}", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration: ksmConfiguration: nodeLabelSelector: matchLabels: <first_example_key>: \"true\" <second_example_key>: \"true\"", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: configuration:", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: certConfig: ca: duration: 48h0m0s renewBefore: 24h0m0s 1 server: duration: 24h0m0s 2 renewBefore: 12h0m0s 3", "certConfig: ca: duration: 4h0m0s renewBefore: 1h0m0s server: duration: 4h0m0s renewBefore: 4h0m0s", "error: hyperconvergeds.hco.kubevirt.io \"kubevirt-hyperconverged\" could not be patched: admission webhook \"validate-hco.kubevirt.io\" denied the request: spec.certConfig: ca.duration is smaller than server.duration", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: defaultCPUModel: \"EPYC\"", "apiversion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: special: vm-secureboot name: vm-secureboot spec: template: metadata: labels: special: vm-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2", "oc create -f <file_name>.yaml", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", 2 \"type\": \"bridge\", 3 \"bridge\": \"bridge-interface\", 4 \"macspoofchk\": false, 5 \"vlan\": 100, 6 \"preserveDefaultVlan\": false 7 }", "oc create -f pxe-net-conf.yaml", "interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1", "devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2", "networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf", "oc create -f vmi-pxe-boot.yaml", "virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created", "oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running", "virtctl vnc vmi-pxe-boot", "virtctl console vmi-pxe-boot", "ip addr", "3. 
eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff", "kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2", "oc apply -f <virtual_machine>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: features: - name: apic 1 policy: require 2", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: Conroe 1", "apiVersion: kubevirt/v1alpha3 kind: VirtualMachine metadata: name: myvm spec: template: spec: domain: cpu: model: host-model 1", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-fedora spec: running: true template: spec: schedulerName: my-scheduler 1 domain: devices: disks: - name: containerdisk disk: bus: virtio", "oc get pods", "NAME READY STATUS RESTARTS AGE virt-launcher-vm-fedora-dpc87 2/2 Running 0 24m", "oc describe pod virt-launcher-vm-fedora-dpc87", "[...] Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 21m my-scheduler Successfully assigned default/virt-launcher-vm-fedora-dpc87 to node01 [...]", "oc label node <node_name> nvidia.com/gpu.deploy.operands=false 1", "oc describe node <node_name>", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-sandbox-validator-kxwj7 1/1 Terminating 0 9d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d nvidia-vfio-manager-zqtck 1/1 Terminating 0 9d", "oc get pods -n nvidia-gpu-operator", "NAME READY STATUS RESTARTS AGE gpu-operator-59469b8c5c-hw9wj 1/1 Running 0 8d nvidia-sandbox-validator-7hx98 1/1 Running 0 8d nvidia-sandbox-validator-hdb7p 1/1 Running 0 8d nvidia-vfio-manager-7w9fs 1/1 Running 0 8d nvidia-vfio-manager-866pz 1/1 Running 0 8d", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "lspci -nnv | grep -i nvidia", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "variant: openshift version: 4.15.0 metadata: name: 100-worker-vfiopci labels: machineconfiguration.openshift.io/role: worker 1 storage: files: - path: /etc/modprobe.d/vfio.conf mode: 0644 overwrite: true contents: inline: | options vfio-pci ids=10de:1eb8 2 - path: /etc/modules-load.d/vfio-pci.conf 3 mode: 0644 overwrite: true contents: inline: vfio-pci", "butane 100-worker-vfiopci.bu -o 100-worker-vfiopci.yaml", "oc apply -f 100-worker-vfiopci.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 00-worker d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-master-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-container-runtime d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 01-worker-kubelet d3da910bfa9f4b599af4ed7f5ac270d55950a3a1 3.2.0 25h 100-worker-iommu 3.2.0 30s 100-worker-vfiopci-configuration 3.2.0 30s", "lspci -nnk -d 10de:", "04:00.0 3D controller [0302]: 
NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] Kernel driver in use: vfio-pci Kernel modules: nouveau", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: 1 pciHostDevices: 2 - pciDeviceSelector: \"10DE:1DB6\" 3 resourceName: \"nvidia.com/GV100GL_Tesla_V100\" 4 - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\" - pciDeviceSelector: \"8086:6F54\" resourceName: \"intel.com/qat\" externalResourceProvider: true 5", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 1 pods: 250", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: permittedHostDevices: pciHostDevices: - pciDeviceSelector: \"10DE:1DB6\" resourceName: \"nvidia.com/GV100GL_Tesla_V100\" - pciDeviceSelector: \"10DE:1EB8\" resourceName: \"nvidia.com/TU104GL_Tesla_T4\"", "oc describe node <node_name>", "Capacity: cpu: 64 devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 915128Mi hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 131395264Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250 Allocatable: cpu: 63500m devices.kubevirt.io/kvm: 110 devices.kubevirt.io/tun: 110 devices.kubevirt.io/vhost-net: 110 ephemeral-storage: 863623130526 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 130244288Ki nvidia.com/GV100GL_Tesla_V100 1 nvidia.com/TU104GL_Tesla_T4 1 intel.com/qat: 0 pods: 250", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: hostDevices: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: hostdevices1", "lspci -nnk | grep NVIDIA", "02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1)", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 100-worker-iommu 2 spec: config: ignition: version: 3.2.0 kernelArguments: - intel_iommu=on 3", "oc create -f 100-worker-kernel-arg-iommu.yaml", "oc get MachineConfig", "kind: ClusterPolicy apiVersion: nvidia.com/v1 metadata: name: gpu-cluster-policy spec: operator: defaultRuntime: crio use_ocp_driver_toolkit: true initContainer: {} sandboxWorkloads: enabled: true defaultWorkload: vm-vgpu driver: enabled: false 1 dcgmExporter: {} dcgm: enabled: true daemonsets: {} devicePlugin: {} gfd: {} migManager: enabled: true nodeStatusExporter: enabled: true mig: strategy: single toolkit: enabled: true validator: plugin: env: - name: WITH_WORKLOAD value: \"true\" vgpuManager: enabled: true 2 repository: <vgpu_container_registry> 3 image: <vgpu_image_name> version: nvidia-vgpu-manager vgpuDeviceManager: enabled: false 4 config: name: 
vgpu-devices-config default: default sandboxDevicePlugin: enabled: false 5 vfioManager: enabled: false 6", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-222 - nvidia-228 - nvidia-105 - nvidia-108", "nvidia-105 nvidia-108 nvidia-217 nvidia-299", "mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-22 - nvidia-223 - nvidia-224", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: - nvidia-231 nodeMediatedDeviceTypes: - mediatedDeviceTypes: - nvidia-233 nodeSelector: kubernetes.io/hostname: node-11.redhat.com permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q - mdevNameSelector: GRID T4-8Q resourceName: nvidia.com/GRID_T4-8Q", "spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - <device_type> nodeMediatedDeviceTypes: 2 - mediatedDeviceTypes: 3 - <device_type> nodeSelector: 4 <node_selector_key>: <node_selector_value>", "oc get USDNODE -o json | jq '.status.allocatable | with_entries(select(.key | startswith(\"nvidia.com/\"))) | with_entries(select(.value != \"0\"))'", "permittedHostDevices: mediatedDevices: - mdevNameSelector: GRID T4-2Q 1 resourceName: nvidia.com/GRID_T4-2Q 2", "oc describe node <node_name>", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "apiVersion: hco.kubevirt.io/v1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: mediatedDevicesConfiguration: mediatedDeviceTypes: 1 - nvidia-231 permittedHostDevices: mediatedDevices: 2 - mdevNameSelector: GRID T4-2Q resourceName: nvidia.com/GRID_T4-2Q", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: domain: devices: gpus: - deviceName: nvidia.com/TU104GL_Tesla_T4 1 name: gpu1 2 - deviceName: nvidia.com/GRID_T4-2Q name: gpu2", "lspci -nnk | grep <device_name>", "apiVersion: kubevirt.io/v1 kind: VirtualMachine spec: template: metadata: annotations: descheduler.alpha.kubernetes.io/evict: \"true\"", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 profiles: - DevPreviewLongLifecycle mode: Predictive 1", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type=json -p='[{\"op\": \"add\", \"path\": \"/spec/tuningPolicy\", \"value\": \"highBurst\"}]'", "oc get kubevirt.kubevirt.io/kubevirt-kubevirt-hyperconverged -n openshift-cnv -o go-template --template='{{range USDconfig, USDvalue := .spec.configuration}} {{if eq USDconfig \"apiConfiguration\" \"webhookConfiguration\" \"controllerConfiguration\" \"handlerConfiguration\"}} {{\"\\n\"}} {{USDconfig}} = {{USDvalue}} {{end}} {{end}} {{\"\\n\"}}", "oc edit hyperconverged kubevirt-hyperconverged -n openshift-cnv", "spec: resourceRequirements: vmiCPUAllocationRatio: 1 1", "virtctl addvolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC> [--persist] [--serial=<label-name>]", "virtctl removevolume <virtual-machine|virtual-machine-instance> --volume-name=<datavolume|PVC>", "oc edit pvc <pvc_name>", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: vm-disk-expand spec: accessModes: - ReadWriteMany resources: requests: storage: 3Gi 1", "apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} storage: resources: requests: storage: 
<2Gi> 1 storageClassName: \"<storage_class>\" 2", "oc create -f <blank-image-datavolume>.yaml", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: <vm_name> spec: template: spec: domain: devices: disks: - disk: bus: virtio name: rootdisk disk1: disk_one 1 - disk: bus: virtio name: cloudinitdisk disk2: disk_two shareable: true 2 interfaces: - masquerade: {} name: default", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report lun: 1 bus: scsi reservation: true 2 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share", "apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: name: vm-0 spec: template: spec: domain: devices: disks: - disk: bus: sata name: rootdisk - errorPolicy: report lun: 1 bus: scsi reservation: true 2 name: na-shared serial: shared1234 volumes: - dataVolume: name: vm-0 name: rootdisk - name: na-shared persistentVolumeClaim: claimName: pvc-na-share", "oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type json -p '[{\"op\":\"replace\",\"path\":\"/spec/featureGates/persistentReservation\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/virtualization/virtual-machines
Chapter 5. Using Firewalls
Chapter 5. Using Firewalls 5.1. Getting Started with firewalld A firewall is a way to protect machines from any unwanted traffic from outside. It enables users to control incoming network traffic on host machines by defining a set of firewall rules . These rules are used to sort the incoming traffic and either block it or allow it through. firewalld is a firewall service daemon that provides a dynamic, customizable, host-based firewall with a D-Bus interface. Being dynamic, it enables creating, changing, and deleting rules without restarting the firewall daemon each time the rules are changed. firewalld uses the concepts of zones and services , which simplify traffic management. Zones are predefined sets of rules. Network interfaces and sources can be assigned to a zone. The traffic allowed depends on the network your computer is connected to and the security level assigned to that network. Firewall services are predefined rules that cover all necessary settings to allow incoming traffic for a specific service, and they apply within a zone. Services use one or more ports or addresses for network communication. Firewalls filter communication based on ports. To allow network traffic for a service, its ports must be open . firewalld blocks all traffic on ports that are not explicitly set as open. Some zones, such as trusted , allow all traffic by default. Figure 5.1. The Firewall Stack 5.1.1. Zones firewalld can be used to separate networks into different zones according to the level of trust that the user has decided to place on the interfaces and traffic within that network. A connection can only be part of one zone, but a zone can be used for many network connections. NetworkManager notifies firewalld of the zone of an interface. You can assign zones to interfaces with NetworkManager , with the firewall-config tool, or with the firewall-cmd command-line tool. The latter two only edit the appropriate NetworkManager configuration files. If you change the zone of the interface using firewall-cmd or firewall-config , the request is forwarded to NetworkManager and is not handled by firewalld . The predefined zones are stored in the /usr/lib/firewalld/zones/ directory and can be instantly applied to any available network interface. These files are copied to the /etc/firewalld/zones/ directory only after they are modified. The following table describes the default settings of the predefined zones: block Any incoming network connections are rejected with an icmp-host-prohibited message for IPv4 and icmp6-adm-prohibited for IPv6 . Only network connections initiated from within the system are possible. dmz For computers in your demilitarized zone that are publicly accessible with limited access to your internal network. Only selected incoming connections are accepted. drop Any incoming network packets are dropped without any notification. Only outgoing network connections are possible. external For use on external networks with masquerading enabled, especially for routers. You do not trust the other computers on the network not to harm your computer. Only selected incoming connections are accepted. home For use at home when you mostly trust the other computers on the network. Only selected incoming connections are accepted. internal For use on internal networks when you mostly trust the other computers on the network. Only selected incoming connections are accepted. public For use in public areas where you do not trust other computers on the network. 
Only selected incoming connections are accepted. trusted All network connections are accepted. work For use at work where you mostly trust the other computers on the network. Only selected incoming connections are accepted. One of these zones is set as the default zone. When interface connections are added to NetworkManager , they are assigned to the default zone. On installation, the default zone in firewalld is set to be the public zone. The default zone can be changed. Note The network zone names have been chosen to be self-explanatory and to allow users to quickly make a reasonable decision. To avoid any security problems, review the default zone configuration and disable any unnecessary services according to your needs and risk assessments. 5.1.2. Predefined Services A service can be a list of local ports, protocols, source ports, and destinations, as well as a list of firewall helper modules automatically loaded if a service is enabled. Using services saves users time because they can achieve several tasks, such as opening ports, defining protocols, enabling packet forwarding, and more, in a single step, rather than setting up everything one after another. Service configuration options and generic file information are described in the firewalld.service(5) man page. The services are specified by means of individual XML configuration files, which are named in the following format: service-name .xml . Protocol names are preferred over service or application names in firewalld . 5.1.3. Runtime and Permanent Settings Any changes committed in runtime mode only apply while firewalld is running. When firewalld is restarted, the settings revert to their permanent values. To make the changes persistent across reboots, apply them again using the --permanent option. Alternatively, to make changes persistent while firewalld is running, use the --runtime-to-permanent firewall-cmd option. If you set the rules while firewalld is running using only the --permanent option, they do not take effect until firewalld is restarted. However, restarting firewalld closes all open ports and stops the networking traffic. 5.1.4. Modifying Settings in Runtime and Permanent Configuration using CLI Using the CLI, you do not modify the firewall settings in both modes at the same time. You only modify either runtime or permanent mode. To modify the firewall settings in the permanent mode, use the --permanent option with the firewall-cmd command. Without this option, the command modifies runtime mode. To change settings in both modes, you can use two methods: Change runtime settings and then make them permanent as follows: Set permanent settings and reload the settings into runtime mode: The first method allows you to test the settings before you apply them to the permanent mode. Note It is possible, especially on remote systems, that an incorrect setting results in a user locking themselves out of a machine. To prevent such situations, use the --timeout option. After a specified amount of time, any change reverts to its previous state. Using this option excludes the --permanent option. For example, to add the SSH service for 15 minutes:
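~]# firewall-cmd --add-service=ssh --timeout 15m
The rule is added to the runtime configuration only and is removed automatically when the timeout expires; because --timeout excludes --permanent, no separate cleanup step is required.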
[ "~]# firewall-cmd --permanent <other options>", "~]# firewall-cmd <other options> ~]# firewall-cmd --runtime-to-permanent", "~]# firewall-cmd --permanent <other options> ~]# firewall-cmd --reload", "~]# firewall-cmd --add-service=ssh --timeout 15m" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/sec-using_firewalls
1.3. Searches
1.3. Searches 1.3.1. Performing Searches in Red Hat Virtualization The Administration Portal allows you to manage thousands of resources, such as virtual machines, hosts, users, and more. To perform a search, enter the search query (free-text or syntax-based) into the search bar, available on the main page for each resource. Search queries can be saved as bookmarks for future reuse, so you do not have to reenter a search query each time the specific search results are required. Searches are not case sensitive. 1.3.2. Search Syntax and Examples The syntax of the search queries for Red Hat Virtualization resources is as follows: result type: {criteria} [sortby sort_spec] Syntax Examples The following examples describe how the search query is used and help you to understand how Red Hat Virtualization assists with building search queries. Table 1.15. Example Search Queries Example Result Hosts: Vms.status = up page 2 Displays page 2 of a list of all hosts running virtual machines that are up. Vms: domain = qa.company.com Displays a list of all virtual machines running on the specified domain. Vms: users.name = Mary Displays a list of all virtual machines belonging to users with the user name Mary. Events: severity > normal sortby time Displays the list of all Events whose severity is higher than Normal, sorted by time. 1.3.3. Search Auto-Completion The Administration Portal provides auto-completion to help you create valid and powerful search queries. As you type each part of a search query, a drop-down list of choices for the part of the search opens below the Search Bar. You can either select from the list and then continue typing/selecting the part of the search, or ignore the options and continue entering your query manually. The following table specifies by example how the Administration Portal auto-completion assists in constructing a query: Hosts: Vms.status = down Table 1.16. Example Search Queries Using Auto-Completion Input List Items Displayed Action h Hosts (1 option only) Select Hosts or type Hosts Hosts: All host properties Type v Hosts: v host properties starting with a v Select Vms or type Vms Hosts: Vms All virtual machine properties Type s Hosts: Vms.s All virtual machine properties beginning with s Select status or type status Hosts: Vms.status = != Select or type = Hosts: Vms.status = All status values Select or type down 1.3.4. Search Result Type Options The result type allows you to search for resources of any of the following types: Vms for a list of virtual machines Host for a list of hosts Pools for a list of pools Template for a list of templates Events for a list of events Users for a list of users Cluster for a list of clusters DataCenter for a list of data centers Storage for a list of storage domains As each type of resource has a unique set of properties and a set of other resource types that it is associated with, each search type has a set of valid syntax combinations. You can also use the auto-complete feature to create valid queries easily. 1.3.5. Search Criteria You can specify the search criteria after the colon in the query. The syntax of {criteria} is as follows: <prop><operator><value> or <obj-type><prop><operator><value> Examples The following table describes the parts of the syntax: Table 1.17. Example Search Criteria Part Description Values Example Note prop The property of the searched-for resource. Can also be the property of a resource type (see obj-type ), or tag (custom tag). Limit your search to objects with a certain property. 
For example, search for objects with a status property. Status N/A obj-type A resource type that can be associated with the searched-for resource. These are system objects, like data centers and virtual machines. Users N/A operator Comparison operators. = != (not equal) > < >= <= N/A Value options depend on property. Value What the expression is being compared to. String Integer Ranking Date (formatted according to Regional Settings) Jones 256 normal Wildcards can be used within strings. "" (two sets of quotation marks with no space between them) can be used to represent an un-initialized (empty) string. Double quotes should be used around a string or date containing spaces 1.3.6. Search: Multiple Criteria and Wildcards Wildcards can be used in the <value> part of the syntax for strings. For example, to find all users beginning with m , enter m* . You can perform a search having two criteria by using the Boolean operators AND and OR . For example: Vms: users.name = m* AND status = Up This query returns all running virtual machines for users whose names begin with "m". Vms: users.name = m* AND tag = "paris-loc" This query returns all virtual machines tagged with "paris-loc" for users whose names begin with "m". When two criteria are specified without AND or OR , AND is implied. AND precedes OR , and OR precedes implied AND . 1.3.7. Search: Determining Search Order You can determine the sort order of the returned information by using sortby . Sort direction ( asc for ascending, desc for descending) can be included. For example: events: severity > normal sortby time desc This query returns all Events whose severity is higher than Normal, sorted by time (descending order). 1.3.8. Searching for Data Centers The following table describes all search options for Data Centers. Table 1.18. Searching for Data Centers Property (of resource or resource-type) Type Description (Reference) Clusters. clusters-prop Depends on property type The property of the clusters associated with the data center. name String The name of the data center. description String A description of the data center. type String The type of data center. status List The availability of the data center. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Datacenter: type = nfs and status != up This example returns a list of data centers with a storage type of NFS and status other than up. 1.3.9. Searching for Clusters The following table describes all search options for clusters. Table 1.19. Searching Clusters Property (of resource or resource-type) Type Description (Reference) Datacenter. datacenter-prop Depends on property type The property of the data center associated with the cluster. Datacenter String The data center to which the cluster belongs. name String The unique name that identifies the clusters on the network. description String The description of the cluster. initialized String True or False indicating the status of the cluster. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Clusters: initialized = true or name = Default This example returns a list of clusters which are initialized or named Default. 1.3.10. Searching for Hosts The following table describes all search options for hosts. Table 1.20. Searching for Hosts Property (of resource or resource-type) Type Description (Reference) Vms. 
Vms-prop Depends on property type The property of the virtual machines associated with the host. Templates. templates-prop Depends on property type The property of the templates associated with the host. Events. events-prop Depends on property type The property of the events associated with the host. Users. users-prop Depends on property type The property of the users associated with the host. name String The name of the host. status List The availability of the host. external_status String The health status of the host as reported by external systems and plug-ins. cluster String The cluster to which the host belongs. address String The unique name that identifies the host on the network. cpu_usage Integer The percent of processing power used. mem_usage Integer The percentage of memory used. network_usage Integer The percentage of network usage. load Integer Jobs waiting to be executed in the run-queue per processor, in a given time slice. version Integer The version number of the operating system. cpus Integer The number of CPUs on the host. memory Integer The amount of memory available. cpu_speed Integer The processing speed of the CPU. cpu_model String The type of CPU. active_vms Integer The number of virtual machines currently running. migrating_vms Integer The number of virtual machines currently being migrated. committed_mem Integer The percentage of committed memory. tag String The tag assigned to the host. type String The type of host. datacenter String The data center to which the host belongs. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Hosts: cluster = Default and Vms.os = rhel6 This example returns a list of hosts which are part of the Default cluster and host virtual machines running the Red Hat Enterprise Linux 6 operating system. 1.3.11. Searching for Networks The following table describes all search options for networks. Table 1.21. Searching for Networks Property (of resource or resource-type) Type Description (Reference) Cluster_network. clusternetwork-prop Depends on property type The property of the cluster associated with the network. Host_Network. hostnetwork-prop Depends on property type The property of the host associated with the network. name String The human readable name that identifies the network. description String Keywords or text describing the network, optionally used when creating the network. vlanid Integer The VLAN ID of the network. stp String Whether Spanning Tree Protocol (STP) is enabled or disabled for the network. mtu Integer The maximum transmission unit for the logical network. vmnetwork String Whether the network is only used for virtual machine traffic. datacenter String The data center to which the network is attached. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Network: mtu > 1500 and vmnetwork = true This example returns a list of networks with a maximum transmission unit greater than 1500 bytes, and which are set up for use by only virtual machines. 1.3.12. Searching for Storage The following table describes all search options for storage. Table 1.22. Searching for Storage Property (of resource or resource-type) Type Description (Reference) Hosts. hosts-prop Depends on property type The property of the hosts associated with the storage. Clusters. clusters-prop Depends on property type The property of the clusters associated with the storage. 
name String The unique name that identifies the storage on the network. status String The status of the storage domain. external_status String The health status of the storage domain as reported by external systems and plug-ins. datacenter String The data center to which the storage belongs. type String The type of the storage. free-size Integer The size (GB) of the free storage. used-size Integer The amount (GB) of the storage that is used. total_size Integer The total amount (GB) of the storage that is available. committed Integer The amount (GB) of the storage that is committed. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Storage: free_size > 6 GB and total_size < 20 GB This example returns a list of storage with free storage space greater than 6 GB, or total storage space less than 20 GB. 1.3.13. Searching for Disks The following table describes all search options for disks. Note You can use the Disk Type and Content Type filtering options to reduce the number of displayed virtual disks. Table 1.23. Searching for Disks Property (of resource or resource-type) Type Description (Reference) Datacenters. datacenters-prop Depends on property type The property of the data centers associated with the disk. Storages. storages-prop Depends on property type The property of the storage associated with the disk. alias String The human readable name that identifies the storage on the network. description String Keywords or text describing the disk, optionally used when creating the disk. provisioned_size Integer The virtual size of the disk. size Integer The size of the disk. actual_size Integer The actual size allocated to the disk. creation_date Integer The date the disk was created. bootable String Whether the disk can or cannot be booted. Valid values are one of 0 , 1 , yes , or no shareable String Whether the disk can or cannot be attached to more than one virtual machine at a time. Valid values are one of 0 , 1 , yes , or no format String The format of the disk. Can be one of unused , unassigned , cow , or raw . status String The status of the disk. Can be one of unassigned , ok , locked , invalid , or illegal . disk_type String The type of the disk. Can be one of image or lun . number_of_vms Integer The number of virtual machine(s) to which the disk is attached. vm_names String The name(s) of the virtual machine(s) to which the disk is attached. quota String The name of the quota enforced on the virtual disk. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Disks: format = cow and provisioned_size > 8 This example returns a list of virtual disks with QCOW format and an allocated disk size greater than 8 GB. 1.3.14. Searching for Volumes The following table describes all search options for volumes. Table 1.24. Searching for Volumes Property (of resource or resource-type) Type Description (Reference) Cluster String The name of the cluster associated with the volume. Cluster. cluster-prop Depends on property type (examples: name, description, comment, architecture) The property of the clusters associated with the volume. name String The human readable name that identifies the volume. type String Can be one of distribute, replicate, distributed_replicate, stripe, or distributed_stripe. transport_type Integer Can be one of TCP or RDMA. replica_count Integer Number of replica. stripe_count Integer Number of stripes. 
status String The status of the volume. Can be one of Up or Down. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Volume: transport_type = rdma and stripe_count >= 2 This example returns a list of volumes with transport type set to RDMA, and with 2 or more stripes. 1.3.15. Searching for Virtual Machines The following table describes all search options for virtual machines. Note Currently, the Network Label , Custom Emulated Machine , and Custom CPU Type properties are not supported search parameters. Table 1.25. Searching for Virtual Machines Property (of resource or resource-type) Type Description (Reference) Hosts. hosts-prop Depends on property type The property of the hosts associated with the virtual machine. Templates. templates-prop Depends on property type The property of the templates associated with the virtual machine. Events. events-prop Depends on property type The property of the events associated with the virtual machine. Users. users-prop Depends on property type The property of the users associated with the virtual machine. Storage. storage-prop Depends on the property type The property of storage devices associated with the virtual machine. Vnic. vnic-prop Depends on the property type The property of the vNIC associated with the virtual machine. name String The name of the virtual machine. status List The availability of the virtual machine. ip Integer The IP address of the virtual machine. uptime Integer The number of minutes that the virtual machine has been running. domain String The domain (usually Active Directory domain) that groups these machines. os String The operating system selected when the virtual machine was created. creationdate Date The date on which the virtual machine was created. address String The unique name that identifies the virtual machine on the network. cpu_usage Integer The percent of processing power used. mem_usage Integer The percentage of memory used. network_usage Integer The percentage of network used. memory Integer The maximum memory defined. apps String The applications currently installed on the virtual machine. cluster List The cluster to which the virtual machine belongs. pool List The virtual machine pool to which the virtual machine belongs. loggedinuser String The name of the user currently logged in to the virtual machine. tag List The tags to which the virtual machine belongs. datacenter String The data center to which the virtual machine belongs. type List The virtual machine type (server or desktop). quota String The name of the quota associated with the virtual machine. description String Keywords or text describing the virtual machine, optionally used when creating the virtual machine. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. next_run_configuration_exists Boolean The virtual machine has pending configuration changes. Example Vms: template.name = Win* and user.name = "" This example returns a list of virtual machines whose base template name begins with Win and are assigned to any user. Example Vms: cluster = Default and os = windows7 This example returns a list of virtual machines that belong to the Default cluster and are running Windows 7. 1.3.16. Searching for Pools The following table describes all search options for Pools. Table 1.26. 
Searching for Pools Property (of resource or resource-type) Type Description (Reference) name String The name of the pool. description String The description of the pool. type List The type of pool. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Pools: type = automatic This example returns a list of pools with a type of automatic . 1.3.17. Searching for Templates The following table describes all search options for templates. Table 1.27. Searching for Templates Property (of resource or resource-type) Type Description (Reference) Vms. Vms-prop String The property of the virtual machines associated with the template. Hosts. hosts-prop String The property of the hosts associated with the template. Events. events-prop String The property of the events associated with the template. Users. users-prop String The property of the users associated with the template. name String The name of the template. domain String The domain of the template. os String The type of operating system. creationdate Integer The date on which the template was created. Date format is mm/dd/yy . childcount Integer The number of virtual machines created from the template. mem Integer Defined memory. description String The description of the template. status String The status of the template. cluster String The cluster associated with the template. datacenter String The data center associated with the template. quota String The quota associated with the template. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Template: Events.severity >= normal and Vms.uptime > 0 This example returns a list of templates where events of normal or greater severity have occurred on virtual machines derived from the template, and the virtual machines are still running. 1.3.18. Searching for Users The following table describes all search options for users. Table 1.28. Searching for Users Property (of resource or resource-type) Type Description (Reference) Vms. Vms-prop Depends on property type The property of the virtual machines associated with the user. Hosts. hosts-prop Depends on property type The property of the hosts associated with the user. Templates. templates-prop Depends on property type The property of the templates associated with the user. Events. events-prop Depends on property type The property of the events associated with the user. name String The name of the user. lastname String The last name of the user. usrname String The unique name of the user. department String The department to which the user belongs. group String The group to which the user belongs. title String The title of the user. status String The status of the user. role String The role of the user. tag String The tag to which the user belongs. pool String The pool to which the user belongs. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Users: Events.severity > normal and Vms.status = up or Vms.status = pause This example returns a list of users where events of greater than normal severity have occurred on their virtual machines AND the virtual machines are still running; or the users' virtual machines are paused. 1.3.19. Searching for Events The following table describes all search options you can use to search for events. Auto-completion is offered for many options as appropriate. Table 1.29. 
Searching for Events Property (of resource or resource-type) Type Description (Reference) Vms. Vms-prop Depends on property type The property of the virtual machines associated with the event. Hosts. hosts-prop Depends on property type The property of the hosts associated with the event. Templates. templates-prop Depends on property type The property of the templates associated with the event. Users. users-prop Depends on property type The property of the users associated with the event. Clusters. clusters-prop Depends on property type The property of the clusters associated with the event. Volumes. Volumes-prop Depends on property type The property of the volumes associated with the event. type List Type of the event. severity List The severity of the event: Warning/Error/Normal. message String Description of the event type. time List Day the event occurred. usrname String The user name associated with the event. event_host String The host associated with the event. event_vm String The virtual machine associated with the event. event_template String The template associated with the event. event_storage String The storage associated with the event. event_datacenter String The data center associated with the event. event_volume String The volume associated with the event. correlation_id Integer The identification number of the event. sortby List Sorts the returned results by one of the resource properties. page Integer The page number of results to display. Example Events: Vms.name = testdesktop and Hosts.name = gonzo.example.com This example returns a list of events, where the event occurred on the virtual machine named testdesktop while it was running on the host gonzo.example.com .
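These search expressions are not limited to the Administration Portal search bar; the same strings can be passed to the REST API through its search query parameter. The following is a hedged, minimal sketch only: the engine host name and the admin@internal credentials are placeholders, and in production you would verify the engine CA certificate instead of using -k.
# Run the events search example above through the REST API; curl -G URL-encodes the query
curl -G -s -k -u admin@internal:password \
  -H "Accept: application/xml" \
  --data-urlencode "search=Vms.name = testdesktop and Hosts.name = gonzo.example.com" \
  https://engine.example.com/ovirt-engine/api/events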
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/chap-searches
function::sprint_ubacktrace
function::sprint_ubacktrace Name function::sprint_ubacktrace - Return stack back trace for current user-space task as string. Synopsis Arguments None Description Returns a simple backtrace for the current task. One line per address. Includes the symbol name (or hex address if the symbol couldn't be resolved) and module name (if found). Includes the offset from the start of the function if found; otherwise the offset will be added to the module (if found, between brackets). Returns the backtrace as a string (each line terminated by a newline character). Note that the returned stack will be truncated to MAXSTRINGLEN; to print fuller and richer stacks, use print_ubacktrace . Equivalent to sprint_ustack( ubacktrace ), but more efficient (no need to translate between hex strings and the final backtrace string). Note To get (full) backtraces for user-space applications and shared libraries not mentioned in the current script, run stap with -d /path/to/exe-or-so and/or add --ldd to load all needed unwind data.
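For illustration, a minimal command-line sketch that prints a user-space backtrace whenever the target program enters main; /usr/bin/myapp is a placeholder for your own executable:
# Load unwind data for the target and its shared libraries, then print a backtrace on entry to main()
stap --ldd -d /usr/bin/myapp -e \
  'probe process("/usr/bin/myapp").function("main") { println(sprint_ubacktrace()) }'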
[ "sprint_ubacktrace:string()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sprint-ubacktrace
Chapter 12. Booting the Installation on IBM Power Systems
Chapter 12. Booting the Installation on IBM Power Systems To boot an IBM Power Systems server from a DVD, you must specify the install boot device in the System Management Services (SMS) menu. To enter the System Management Services GUI, press the 1 key during the boot process when you hear the chime sound. This brings up a graphical interface similar to the one described in this section. On a text console, press 1 when the self test is displaying the banner along with the tested components: Figure 12.1. The SMS Console Once in the SMS menu, select the option for Select Boot Options . In that menu, specify Select Install or Boot a Device . There, select CD/DVD , and then the bus type (in most cases SCSI). If you are uncertain, you can select to view all devices. This scans all available buses for boot devices, including network adapters and hard drives. Finally, select the device containing the installation DVD. The boot menu will now load. Important Because IBM Power Systems servers primarily use text consoles, Anaconda will not automatically start a graphical installation. However, the graphical installation program offers more features and customization and is recommended if your system has a graphical display. To start a graphical installation, pass the inst.vnc boot option (see Enabling Remote Access ). 12.1. The Boot Menu Once your system has completed loading the boot media, a boot menu is displayed using GRUB2 ( GRand Unified Bootloader , version 2). The boot menu provides several options in addition to launching the installation program. If no key is pressed within 60 seconds, the default boot option (the one highlighted in white) will be run. To choose the default, either wait for the timer to run out or press Enter . Figure 12.2. The Boot Screen To select a different option than the default, use the arrow keys on your keyboard, and press Enter when the correct option is highlighted. To customize the boot options for a particular menu entry, press the e key and add custom boot options to the command line. When ready press Ctrl + X to boot the modified option. See Chapter 23, Boot Options for more information about additional boot options. The boot menu options are: Install Red Hat Enterprise Linux 7.0 Choose this option to install Red Hat Enterprise Linux onto your computer system using the graphical installation program. Test this media & install Red Hat Enterprise Linux 7.0 This option is the default. Prior to starting the installation program, a utility is launched to check the integrity of the installation media. Troubleshooting > This item is a separate menu containing options that help resolve various installation issues. When highlighted, press Enter to display its contents. Figure 12.3. The Troubleshooting Menu Install Red Hat Enterprise Linux 7.0 in basic graphics mode This option allows you to install Red Hat Enterprise Linux in graphical mode even if the installation program is unable to load the correct driver for your video card. If your screen appears distorted or goes blank when using the Install Red Hat Enterprise Linux 7.0 option, restart your computer and try this option instead. Rescue a Red Hat Enterprise Linux system Choose this option to repair a problem with your installed Red Hat Enterprise Linux system that prevents you from booting normally. The rescue environment contains utility programs that allow you fix a wide variety of these problems. Run a memory test This option runs a memory test on your system. 
For more information, see Section 23.2.1, "Loading the Memory (RAM) Testing Mode" . Boot from local drive This option boots the system from the first installed disk. If you booted this disc accidentally, use this option to boot from the hard disk immediately without starting the installation program.
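As a hedged illustration of the customization step described above (the exact kernel and file names vary between boot media), a menu entry edited to request a remote graphical installation over VNC might end with a line similar to the following before you press Ctrl + X :
linux /ppc/ppc64/vmlinuz ro inst.vnc inst.vncpassword=changeme
Here changeme is only an example; inst.vncpassword is optional, but setting it restricts who can attach to the VNC session.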
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-booting-installer-ppc
Part IV. Configuring Web Service Endpoints
Part IV. Configuring Web Service Endpoints This guide describes how to create Apache CXF endpoints in Red Hat Fuse.
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/cxfdeployguide
Chapter 1. Introduction to the DNS service
Chapter 1. Introduction to the DNS service The DNS service (designate) provides a DNS-as-a-Service implementation for Red Hat OpenStack platform (RHOSP) deployments. This section briefly describes some Domain Name System (DNS) basics, describes the DNS service components, presents a simple use case, and lists various ways to run the DNS service. The topics included in this section are: Section 1.1, "Basics of the Domain Name System (DNS)" Section 1.2, "Introducing the RHOSP DNS service" Section 1.3, "DNS service components" Section 1.4, "A common deployment scenario for the DNS service" Section 1.5, "Different ways to use the DNS service" 1.1. Basics of the Domain Name System (DNS) The Domain Name System (DNS) is a naming system for resources connected to a private or a public network. A hierarchical, distributed database, DNS associates information about resources with domain names that are organized into various groups called zones . Authoritative name servers store resource and zone information in records which can be queried by resolvers to identify and locate resources for routing network data. Names are divided up into a hierarchy of zones which facilitates delegation. Separate name servers are responsible for a particular zone. Figure 1.1. The Domain Name System The root zone, which is simply . (a dot), contains records that delegate various top-level domains (TLDs) to other name servers. These types of records are called name server (NS) records and identify which DNS server is authoritative for a particular domain. It is not uncommon for there to be more than one NS record to indicate a primary and a backup name server for a domain. Beneath the root zone are various TLD name servers that contain records for domains only within their TLD. These are address records and canonical name records and are referred to as A and CNAME records, respectively. For example, the .com name server contains a CNAME record for example.com , in addition to NS records that delegate zones to other name servers. The domain example.com might have its own name server so that it can then create other domains like cloud.example.com . Resolvers are often formed in two parts: a stub resolver which is usually a library on a user's computer, and a recursive resolver that performs queries against name servers before returning the result to the user. When searching for a domain, the resolver starts at the end of the domain and works toward the beginning of the domain. For example, when searching for cloud.example.com , the resolver starts with the root name server . . The root replies with the location of the .com name server. The resolver then contacts the .com name server to get the example.com name server. Finally, the resolver locates the cloud.example.com record and returns it to the user. Figure 1.2. Resolving a DNS query 1 A user queries for the address of cloud.example.com . 2 The recursive resolver queries the root zone name server for cloud.example.com . 3 The record is not found, and the root zone provides the name server for .com . 4 The resolver queries the .com name server for cloud.example.com . 5 The record is not found, and the .com zone provides the name server for example.com . 6 The resolver queries the example.com name server for cloud.example.com . 7 The example.com name server locates cloud.example.com , and provides the A record for cloud.example.com to the resolver. 8 The resolver forwards the A record for cloud.example.com to the user. 
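You can watch a resolver perform this same walk with the dig utility's +trace option, which queries each delegation in turn starting from the root servers. The name below is the illustrative one used in this section and will not resolve on a real network:
# Follow the delegation chain from the root zone down to the authoritative name server
dig +trace cloud.example.com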
To make this search more efficient, the results are cached on the resolver, so after the first user has requested cloud.example.com , the resolver can quickly return the cached result for subsequent requests. Additional resources https://en.wikipedia.org/wiki/Domain_Name_System https://tools.ietf.org/html/rfc1034 Section 1.2, "Introducing the RHOSP DNS service" 1.2. Introducing the RHOSP DNS service The Red Hat OpenStack Platform (RHOSP) DNS service (designate) is a multi-tenant service that enables you to manage DNS records, names, and zones. The RHOSP DNS service provides a REST API, and is integrated with the RHOSP Identity service (keystone) for user management. Using RHOSP director you can deploy BIND instances to contain DNS records, or you can integrate the DNS service into an existing BIND infrastructure. In addition, director can configure DNS service integration with the RHOSP Networking service (neutron) to automatically create records for compute instances, network ports, and floating IPs. Additional resources Section 1.1, "Basics of the Domain Name System (DNS)" Section 1.3, "DNS service components" 1.3. DNS service components The Red Hat OpenStack Platform (RHOSP) DNS service (designate) is comprised of several different services that run in containers on one or more RHOSP Controller hosts, by default: Designate API ( designate-api container) Provides the OpenStack standard REST API for users and the RHOSP Networking service (neutron) to interact with designate. The API processes requests by sending them to the Central service over Remote Procedure Call (RPC). Producer ( designate-producer container) Orchestrates periodic tasks that are run by designate. These tasks are long-running and potentially large jobs such as emitting dns.zone.exists for Ceilometer, purging deleted zones from the database, polling secondary zones at their refresh intervals, generating delayed NOTIFY transactions, and invoking a periodic recovery of zones in an error state. Central ( designate-central container) Orchestrates zone and record set creation, update, and deletion. The Central service receives RPC requests sent by the Designate API service and applies the necessary business logic to the data while coordinating its persistent storage. Worker ( designate-worker container) Provides the interface to the drivers for the DNS servers that designate manages. The Worker service reads the server configuration from the designate database, and also manages periodic tasks that are requested by the Producer. Mini DNS ( designate-mdns container) Manages zone authoritative transfer (AXFR) requests from the name servers. The Mini DNS service also pulls DNS information about DNS zones hosted outside of the designate infrastructure. Figure 1.3. The DNS service architecture In RHOSP, by default, the DNS components are BIND 9 and Unbound: BIND 9 ( bind container) Provides a DNS server for the DNS service. BIND is an open source suite of DNS software, and specifically acts as the authoritative nameserver. Unbound ( unbound container) Fulfills the role of the DNS recursive resolver, which initiates and sequences the queries needed to translate DNS requests into an IP address. Unbound is an open source program that the DNS service uses as its recursive resolver. The DNS service uses an oslo compatible database to store data and oslo messaging to facilitate communication between services. 
Multiple instances of the DNS services can be run in tandem to facilitate high availability deployments, with the API process often located behind load balancers. Additional resources Section 1.4, "A common deployment scenario for the DNS service" Section 1.5, "Different ways to use the DNS service" 1.4. A common deployment scenario for the DNS service A user has created two zones, zone1.cloud.example.com and zone2.cloud.example.com , and the DNS service adds a new Start of Authority (SOA) record and new a name server (NS) record for each new zone, respectively, on the DNS name server. Using the RHOSP Networking service, the user creates a private network and associates it to zone1 and a public network and associates it to zone2 . Finally, the user connects a VM instance to the private network and attaches a floating IP. The user connects a second instance directly to the public network. These connections trigger the Networking service to request the DNS service to create records on behalf of the user. The DNS service maps the instance names to domains on the authoritative name server and also creates PTR records to enable reverse lookups. Figure 1.4. Common DNS service deployment 1 You can associate domains and names with floating IPs, ports, and networks in the RHOSP Networking service. The RHOSP Networking service uses the designate API to manage records when ports are created and destroyed. 2 The designate Worker tells the name server to update its zone information. 3 The name server requests updated zone information from Mini DNS. 4 The name server creates both forward and reverse records. Additional resources Section 1.2, "Introducing the RHOSP DNS service" Section 1.3, "DNS service components" 1.5. Different ways to use the DNS service The Red Hat OpenStack Platform (RHOSP) DNS service (designate) provides a REST API and that is commonly used in three ways. The most common is to use the RHOSP OpenStack client, a python command line tool with commands for interacting with RHOSP services. You can also use the DNS service through a graphical user interface, the RHOSP Dashboard (horizon). Developers can use the OpenStack SDK for writing applications. For more information, see openstacksdk .
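As a brief, hedged sketch of the OpenStack client workflow (the zone name, email address, and IP address are examples only, and the exact arguments can vary between releases):
# Create a zone, add an A record to it, then list its record sets
openstack zone create --email dnsadmin@example.com zone1.cloud.example.com.
openstack recordset create --type A --record 198.51.100.10 zone1.cloud.example.com. web
openstack recordset list zone1.cloud.example.com.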
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_dns_as_a_service/intro-dns-service_rhosp-dnsaas
8.244. transfig
8.244. transfig 8.244.1. RHBA-2014:0483 - transfig bug fix update Updated transfig packages that fix one bug are now available for Red Hat Enterprise Linux 6. The transfig utility creates portable documents that can be printed in a wide variety of environments. The utility converts the FIG files produced by the Xfig editor to other formats by creating a makefile that can translate the FIG files and the figures in the PIC format into a specified LaTeX graphics language, for example PostScript. Bug Fix BZ# 858718 Prior to this update, the PostScript files generated by the transfig utility were incorrectly reported to conform to the PostScript document structuring conventions (DSC). As a consequence, printing from the Xfig editor and printing the PostScript files generated by transfig could result in blank pages. A patch that improves the DSC conformance has been applied to address this bug, and Xfig drawings are now printed as expected in the aforementioned cases. Users of transfig are advised to upgrade to these updated packages, which fix this bug.
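For reference, the conversion described above is usually driven through the fig2dev program shipped with transfig; a typical invocation to produce PostScript from an Xfig drawing would look similar to the following (the file names are examples only):
# Convert a FIG drawing to PostScript for printing
fig2dev -L ps drawing.fig drawing.ps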
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/transfig
probe::socket.writev.return
probe::socket.writev.return Name probe::socket.writev.return - Conclusion of message sent via socket_writev Synopsis socket.writev.return Values success Was send successful? (1 = yes, 0 = no) state Socket state value name Name of this probe protocol Protocol value family Protocol family value size Size of message sent (in bytes) or error code if success = 0 type Socket type value flags Socket flags value Context The message sender. Description Fires at the conclusion of sending a message on a socket via the sock_writev function
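A minimal example of using this probe from the command line (run as root); the output fields correspond to the values listed above:
# Report each message successfully sent through sock_writev, with its size and protocol family
stap -e 'probe socket.writev.return { if (success) printf("%s sent %d bytes (family %d)\n", name, size, family) }'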
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-socket-writev-return
28.2. UUID and Other Persistent Identifiers
28.2. UUID and Other Persistent Identifiers If a storage device contains a file system, then that file system may provide one or both of the following: Universally Unique Identifier (UUID) File system label These identifiers are persistent, and based on metadata written on the device by certain applications. They may also be used to access the device using the symlinks maintained by the operating system in the /dev/disk/by-label/ (e.g. boot -> ../../sda1 ) and /dev/disk/by-uuid/ (e.g. f8bf09e3-4c16-4d91-bd5e-6f62da165c08 -> ../../sda1 ) directories. md and LVM write metadata on the storage device, and read that data when they scan devices. In each case, the metadata contains a UUID, so that the device can be identified regardless of the path (or system) used to access it. As a result, the device names presented by these facilities are persistent, as long as the metadata remains unchanged.
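For example, you can inspect these identifiers with blkid and the symlink directories mentioned above, and then reference the file system by UUID so that the entry keeps working even if the device name changes (the device and UUID below are the illustrative values from this section, and the file system type in the fstab line is only an example):
# Show the UUID and label (if any) stored in the file system metadata
blkid /dev/sda1
# List the persistent symlinks maintained by the operating system
ls -l /dev/disk/by-uuid/ /dev/disk/by-label/
# Example /etc/fstab entry that refers to the file system by UUID
# UUID=f8bf09e3-4c16-4d91-bd5e-6f62da165c08  /boot  ext4  defaults  1 2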
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/persistent_naming-uuid_and_others
1.3. system-config-cluster Cluster Administration GUI
1.3. system-config-cluster Cluster Administration GUI This section provides an overview of the cluster administration graphical user interface (GUI) available with Red Hat Cluster Suite - system-config-cluster . It is for use with the cluster infrastructure and the high-availability service management components. system-config-cluster consists of two major functions: the Cluster Configuration Tool and the Cluster Status Tool . The Cluster Configuration Tool provides the capability to create, edit, and propagate the cluster configuration file ( /etc/cluster/cluster.conf ). The Cluster Status Tool provides the capability to manage high-availability services. The following sections summarize those functions. Note While system-config-cluster provides several convenient tools for configuring and managing a Red Hat Cluster, the newer, more comprehensive tool, Conga , provides more convenience and flexibility than system-config-cluster . 1.3.1. Cluster Configuration Tool You can access the Cluster Configuration Tool ( Figure 1.6, " Cluster Configuration Tool " ) through the Cluster Configuration tab in the Cluster Administration GUI. Figure 1.6. Cluster Configuration Tool The Cluster Configuration Tool represents cluster configuration components in the configuration file ( /etc/cluster/cluster.conf ) with a hierarchical graphical display in the left panel. A triangle icon to the left of a component name indicates that the component has one or more subordinate components assigned to it. Clicking the triangle icon expands and collapses the portion of the tree below a component. The components displayed in the GUI are summarized as follows: Cluster Nodes - Displays cluster nodes. Nodes are represented by name as subordinate elements under Cluster Nodes . Using configuration buttons at the bottom of the right frame (below Properties ), you can add nodes, delete nodes, edit node properties, and configure fencing methods for each node. Fence Devices - Displays fence devices. Fence devices are represented as subordinate elements under Fence Devices . Using configuration buttons at the bottom of the right frame (below Properties ), you can add fence devices, delete fence devices, and edit fence-device properties. Fence devices must be defined before you can configure fencing (with the Manage Fencing For This Node button) for each node. Managed Resources - Displays failover domains, resources, and services. Failover Domains - For configuring one or more subsets of cluster nodes used to run a high-availability service in the event of a node failure. Failover domains are represented as subordinate elements under Failover Domains . Using configuration buttons at the bottom of the right frame (below Properties ), you can create failover domains (when Failover Domains is selected) or edit failover domain properties (when a failover domain is selected). Resources - For configuring shared resources to be used by high-availability services. Shared resources consist of file systems, IP addresses, NFS mounts and exports, and user-created scripts that are available to any high-availability service in the cluster. Resources are represented as subordinate elements under Resources . Using configuration buttons at the bottom of the right frame (below Properties ), you can create resources (when Resources is selected) or edit resource properties (when a resource is selected). Note The Cluster Configuration Tool provides the capability to configure private resources, also. 
A private resource is a resource that is configured for use with only one service. You can configure a private resource within a Service component in the GUI. Services - For creating and configuring high-availability services. A service is configured by assigning resources (shared or private), assigning a failover domain, and defining a recovery policy for the service. Services are represented as subordinate elements under Services . Using configuration buttons at the bottom of the right frame (below Properties ), you can create services (when Services is selected) or edit service properties (when a service is selected).
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-clumgmttools-overview-ca
4.144. libsepol
4.144. libsepol 4.144.1. RHBA-2011:1689 - libsepol enhancement update Enhanced libsepol packages are now available for Red Hat Enterprise Linux 6. The libsepol library provides an API for the manipulation of SELinux binary policies. It is used by checkpolicy (the policy compiler) and similar tools, as well as by programs like load_policy that need to perform specific transformations on binary policies (for example, customizing policy boolean settings). Enhancement BZ# 727285 Previously, the libsepol packages were compiled without the RELRO (read-only relocations) flag. As a consequence, programs provided by this package and also programs built against the libsepol libraries were vulnerable to various attacks based on overwriting the ELF section of a program. To increase the security of libsepol programs and libraries, the libsepol spec file has been modified to use the "-Wl,-z,relro" flags when compiling the packages. As a result, the libsepol packages are now provided with partial RELRO protection. Users of libsepol are advised to upgrade to these updated packages, which add this enhancement.
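You can verify this kind of protection yourself by looking for a GNU_RELRO program header (partial RELRO) and a BIND_NOW dynamic flag (full RELRO). The library path below assumes a 64-bit system; on 32-bit systems the library is typically under /lib :
# A GNU_RELRO segment indicates that (at least partial) RELRO is in effect
readelf -l /usr/lib64/libsepol.so.1 | grep GNU_RELRO
# BIND_NOW in the dynamic section would additionally indicate full RELRO
readelf -d /usr/lib64/libsepol.so.1 | grep BIND_NOW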
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libsepol
Chapter 21. File and Print Servers
Chapter 21. File and Print Servers 21.1. Samba Samba is the standard open source Windows interoperability suite of programs for Linux. It implements the server message block ( SMB ) protocol. Modern versions of this protocol are also known as the common Internet file system ( CIFS ) protocol. It allows the networking of Microsoft Windows (R), Linux, UNIX, and other operating systems together, enabling access to Windows-based file and printer shares. Samba's use of SMB allows it to appear as a Windows server to Windows clients. Note In order to use Samba , first ensure the samba package is installed on your system by running the following command as root : For more information on installing packages with Yum, see Section 8.2.4, "Installing Packages" . 21.1.1. Introduction to Samba Samba is an important component to seamlessly integrate Linux Servers and Desktops into Active Directory (AD) environments. It can function both as a domain controller (NT4-style) or as a regular domain member (AD or NT4-style). What Samba can do: Serve directory trees and printers to Linux, UNIX, and Windows clients Assist in network browsing (with NetBIOS) Authenticate Windows domain logins Provide Windows Internet Name Service ( WINS ) name server resolution Act as a Windows NT (R)-style Primary Domain Controller (PDC) Act as a Backup Domain Controller (BDC) for a Samba-based PDC Act as an Active Directory domain member server Join a Windows NT/2000/2003/2008 PDC What Samba cannot do: Act as a BDC for a Windows PDC (and vice versa) Act as an Active Directory domain controller 21.1.2. Samba Daemons and Related Services Samba is comprised of three daemons ( smbd , nmbd , and winbindd ). Three services ( smb , nmb , and winbind ) control how the daemons are started, stopped, and other service-related features. These services act as different init scripts. Each daemon is listed in detail below, as well as which specific service has control over it. smbd The smbd server daemon provides file sharing and printing services to Windows clients. In addition, it is responsible for user authentication, resource locking, and data sharing through the SMB protocol. The default ports on which the server listens for SMB traffic are TCP ports 139 and 445 . The smbd daemon is controlled by the smb service. nmbd The nmbd server daemon understands and replies to NetBIOS name service requests such as those produced by SMB/CIFS in Windows-based systems. These systems include Windows 95/98/ME, Windows NT, Windows 2000, Windows XP, and LanManager clients. It also participates in the browsing protocols that make up the Windows Network Neighborhood view. The default port that the server listens to for NMB traffic is UDP port 137 . The nmbd daemon is controlled by the nmb service. winbindd The winbind service resolves user and group information received from a server running Windows NT, 2000, 2003, Windows Server 2008, or Windows Server 2012. This makes Windows user and group information understandable by UNIX platforms. This is achieved by using Microsoft RPC calls, Pluggable Authentication Modules (PAM), and the Name Service Switch (NSS). This allows Windows NT domain and Active Directory users to appear and operate as UNIX users on a UNIX machine. Though bundled with the Samba distribution, the winbind service is controlled separately from the smb service. The winbind daemon is controlled by the winbind service and does not require the smb service to be started in order to operate. 
winbind is also used when Samba is an Active Directory member, and may also be used on a Samba domain controller (to implement nested groups and interdomain trust). Because winbind is a client-side service used to connect to Windows NT-based servers, further discussion of winbind is beyond the scope of this chapter. For information on how to configure winbind for authentication, see Section 13.1.2.3, "Configuring Winbind Authentication" . Note See Section 21.1.11, "Samba Distribution Programs" for a list of utilities included in the Samba distribution. 21.1.3. Connecting to a Samba Share You can use either Nautilus or command line to connect to available Samba shares. Procedure 21.1. Connecting to a Samba Share Using Nautilus To view a list of Samba workgroups and domains on your network, select Places Network from the GNOME panel, and then select the desired network. Alternatively, type smb: in the File Open Location bar of Nautilus . As shown in Figure 21.1, "SMB Workgroups in Nautilus" , an icon appears for each available SMB workgroup or domain on the network. Figure 21.1. SMB Workgroups in Nautilus Double-click one of the workgroup or domain icon to view a list of computers within the workgroup or domain. Figure 21.2. SMB Machines in Nautilus As displayed in Figure 21.2, "SMB Machines in Nautilus" , an icon exists for each machine within the workgroup. Double-click on an icon to view the Samba shares on the machine. If a user name and password combination is required, you are prompted for them. Alternately, you can also specify the Samba server and sharename in the Location: bar for Nautilus using the following syntax (replace servername and sharename with the appropriate values): smb:// servername / sharename Procedure 21.2. Connecting to a Samba Share Using the Command Line To query the network for Samba servers, use the findsmb command. For each server found, it displays its IP address, NetBIOS name, workgroup name, operating system, and SMB server version: findsmb To connect to a Samba share from a shell prompt, type the following command: smbclient // hostname / sharename -U username Replace hostname with the host name or IP address of the Samba server you want to connect to, sharename with the name of the shared directory you want to browse, and username with the Samba user name for the system. Enter the correct password or press Enter if no password is required for the user. If you see the smb:\> prompt, you have successfully logged in. Once you are logged in, type help for a list of commands. If you want to browse the contents of your home directory, replace sharename with your user name. If the -U switch is not used, the user name of the current user is passed to the Samba server. To exit smbclient , type exit at the smb:\> prompt. 21.1.3.1. Mounting the Share Sometimes it is useful to mount a Samba share to a directory so that the files in the directory can be treated as if they are part of the local file system. To mount a Samba share to a directory, create a directory to mount it to (if it does not already exist), and execute the following command as root : mount -t cifs // servername / sharename /mnt/point/ -o username= username ,password= password This command mounts sharename from servername in the local directory /mnt/point/ . For more information about mounting a samba share, see the mount.cifs (8) manual page. Note The mount.cifs utility is a separate RPM (independent from Samba). 
In order to use mount.cifs , first ensure the cifs-utils package is installed on your system by running the following command as root : For more information on installing packages with Yum, see Section 8.2.4, "Installing Packages" . Note that the cifs-utils package also contains the cifs.upcall binary called by the kernel in order to perform kerberized CIFS mounts. For more information on cifs.upcall , see the cifs.upcall (8) manual page. Warning Some CIFS servers require plain text passwords for authentication. Support for plain text password authentication can be enabled using the following command as root : WARNING: This operation can expose passwords by removing password encryption. 21.1.4. Configuring a Samba Server The default configuration file ( /etc/samba/smb.conf ) allows users to view their home directories as a Samba share. It also shares all printers configured for the system as Samba shared printers. You can attach a printer to the system and print to it from the Windows machines on your network. 21.1.4.1. Graphical Configuration To configure Samba using a graphical interface, use one of the available Samba graphical user interfaces. A list of available GUIs can be found at http://www.samba.org/samba/GUI/ . 21.1.4.2. Command-Line Configuration Samba uses /etc/samba/smb.conf as its configuration file. If you change this configuration file, the changes do not take effect until you restart the Samba daemon with the following command as root : To specify the Windows workgroup and a brief description of the Samba server, edit the following lines in your /etc/samba/smb.conf file: Replace WORKGROUPNAME with the name of the Windows workgroup to which this machine should belong. The BRIEF COMMENT ABOUT SERVER is optional and is used as the Windows comment about the Samba system. To create a Samba share directory on your Linux system, add the following section to your /etc/samba/smb.conf file (after modifying it to reflect your needs and your system): Example 21.1. An Example Configuration of a Samba Server The above example allows the users tfox and carole to read and write to the directory /home/share/ , on the Samba server, from a Samba client. 21.1.4.3. Encrypted Passwords Encrypted passwords are enabled by default because it is more secure to use them. To create a user with an encrypted password, use the smbpasswd utility: smbpasswd -a username 21.1.5. Starting and Stopping Samba To start a Samba server, type the following command in a shell prompt, as root : Important To set up a domain member server, you must first join the domain or Active Directory using the net join command before starting the smb service. Also it is recommended to run winbind before smbd . To stop the server, type the following command in a shell prompt, as root : The restart option is a quick way of stopping and then starting Samba. This is the most reliable way to make configuration changes take effect after editing the configuration file for Samba. Note that the restart option starts the daemon even if it was not running originally. To restart the server, type the following command in a shell prompt, as root : The condrestart ( conditional restart ) option only stops and starts smb on the condition that it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. Note When the /etc/samba/smb.conf file is changed, Samba automatically reloads it after a few minutes. Issuing a manual restart or reload is just as effective. 
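The start, stop, and restart commands referred to in this section are not reproduced in this excerpt; on Red Hat Enterprise Linux 6 they would typically be run as root as follows:
service smb start      # start the Samba server
service smb stop       # stop the Samba server
service smb restart    # stop and then start, picking up configuration changes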
To conditionally restart the server, type the following command as root : A manual reload of the /etc/samba/smb.conf file can be useful in case of a failed automatic reload by the smb service. To ensure that the Samba server configuration file is reloaded without restarting the service, type the following command, as root : By default, the smb service does not start automatically at boot time. To configure Samba to start at boot time, use an initscript utility, such as /sbin/chkconfig , /usr/sbin/ntsysv , or the Services Configuration Tool program. See Chapter 12, Services and Daemons for more information regarding these tools. 21.1.6. Samba Server Types and the smb.conf File Samba configuration is straightforward. All modifications to Samba are done in the /etc/samba/smb.conf configuration file. Although the default smb.conf file is well documented, it does not address complex topics such as LDAP, Active Directory, and the numerous domain controller implementations. The following sections describe the different ways a Samba server can be configured. Keep in mind your needs and the changes required to the /etc/samba/smb.conf file for a successful configuration. 21.1.6.1. Stand-alone Server A stand-alone server can be a workgroup server or a member of a workgroup environment. A stand-alone server is not a domain controller and does not participate in a domain in any way. The following examples include several user-level security configurations. For more information on security modes, see Section 21.1.7, "Samba Security Modes" . Anonymous Read-Only The following /etc/samba/smb.conf file shows a sample configuration needed to implement anonymous read-only file sharing. Two directives are used to configure anonymous access - map to guest = Bad user and guest account = nobody . Example 21.2. An Example Configuration of a Anonymous Read-Only Samba Server Anonymous Read/Write The following /etc/samba/smb.conf file shows a sample configuration needed to implement anonymous read/write file sharing. To enable anonymous read/write file sharing, set the read only directive to no . The force user and force group directives are also added to enforce the ownership of any newly placed files specified in the share. Note Although having an anonymous read/write server is possible, it is not recommended. Any files placed in the share space, regardless of user, are assigned the user/group combination as specified by a generic user ( force user ) and group ( force group ) in the /etc/samba/smb.conf file. Example 21.3. An Example Configuration of a Anonymous Read/Write Samba Server Anonymous Print Server The following /etc/samba/smb.conf file shows a sample configuration needed to implement an anonymous print server. Setting browseable to no as shown does not list the printer in Windows Network Neighborhood . Although hidden from browsing, configuring the printer explicitly is possible. By connecting to DOCS_SRV using NetBIOS, the client can have access to the printer if the client is also part of the DOCS workgroup. It is also assumed that the client has the correct local printer driver installed, as the use client driver directive is set to yes . In this case, the Samba server has no responsibility for sharing printer drivers to the client. Example 21.4. An Example Configuration of a Anonymous Print Samba Server Secure Read/Write File and Print Server The following /etc/samba/smb.conf file shows a sample configuration needed to implement a secure read/write file and print server. 
Setting the security directive to user forces Samba to authenticate client connections. Notice the [homes] share does not have a force user or force group directive as the [public] share does. The [homes] share uses the authenticated user details for any files created as opposed to the force user and force group in [public] . Example 21.5. An Example Configuration of a Secure Read/Write File and Print Samba Server 21.1.6.2. Domain Member Server A domain member, while similar to a stand-alone server, is logged into a domain controller (either Windows or Samba) and is subject to the domain's security rules. An example of a domain member server would be a departmental server running Samba that has a machine account on the Primary Domain Controller (PDC). All of the department's clients still authenticate with the PDC, and desktop profiles and all network policy files are included. The difference is that the departmental server has the ability to control printer and network shares. Active Directory Domain Member Server To implement an Active Directory domain member server, follow procedure below: Procedure 21.3. Adding a Member Server to an Active Directory Domain Create the /etc/samba/smb.conf configuration file on a member server to be added to the Active Directory domain. Add the following lines to the configuration file: With the above configuration, Samba authenticates users for services being run locally but is also a client of the Active Directory. Ensure that your kerberos realm parameter is shown in all caps (for example realm = EXAMPLE.COM ). Since Windows 2000/2003/2008 requires Kerberos for Active Directory authentication, the realm directive is required. If Active Directory and Kerberos are running on different servers, the password server directive is required to help the distinction. Configure Kerberos on the member server. Create the /etc/krb5.conf configuration file with the following content: Uncomment the [realms] and [domain_realm] sections if DNS lookups are not working. For more information on Kerberos, and the /etc/krb5.conf file, see the Using Kerberos section of the Red Hat Enterprise Linux 6 Managing Single Sign-On and Smart Cards . To join an Active Directory server, type the following command as root on the member server: The net command authenticates as Administrator using the NT LAN Manager (NTLM) protocol and creates the machine account. Then net uses the machine account credentials to authenticate with Kerberos. Note Since security = ads and not security = user is used, a local password back end such as smbpasswd is not needed. Older clients that do not support security = ads are authenticated as if security = domain had been set. This change does not affect functionality and allows local users not previously in the domain. Windows NT4-based Domain Member Server The following /etc/samba/smb.conf file shows a sample configuration needed to implement a Windows NT4-based domain member server. Becoming a member server of an NT4-based domain is similar to connecting to an Active Directory. The main difference is NT4-based domains do not use Kerberos in their authentication method, making the /etc/samba/smb.conf file simpler. In this instance, the Samba member server functions as a pass through to the NT4-based domain server. Example 21.6. An Example Configuration of Samba Windows NT4-based Domain Member Server Having Samba as a domain member server can be useful in many situations. 
There are times where the Samba server can have other uses besides file and printer sharing. It may be beneficial to make Samba a domain member server in instances where Linux-only applications are required for use in the domain environment. Administrators appreciate keeping track of all machines in the domain, even if not Windows-based. In the event the Windows-based server hardware is deprecated, it is quite easy to modify the /etc/samba/smb.conf file to convert the server to a Samba-based PDC. If Windows NT-based servers are upgraded to Windows 2000/2003/2008 the /etc/samba/smb.conf file is easily modifiable to incorporate the infrastructure change to Active Directory if needed. Important After configuring the /etc/samba/smb.conf file, join the domain before starting Samba by typing the following command as root : Note that the -S option, which specifies the domain server host name, does not need to be stated in the net rpc join command. Samba uses the host name specified by the workgroup directive in the /etc/samba/smb.conf file instead of it being stated explicitly. 21.1.6.3. Domain Controller A domain controller in Windows NT is functionally similar to a Network Information Service (NIS) server in a Linux environment. Domain controllers and NIS servers both host user and group information databases as well as related services. Domain controllers are mainly used for security, including the authentication of users accessing domain resources. The service that maintains the user and group database integrity is called the Security Account Manager (SAM). The SAM database is stored differently between Windows and Linux Samba-based systems, therefore SAM replication cannot be achieved and platforms cannot be mixed in a PDC/BDC environment. In a Samba environment, there can be only one PDC and zero or more BDCs. Important Samba cannot exist in a mixed Samba/Windows domain controller environment (Samba cannot be a BDC of a Windows PDC or vice versa). Alternatively, Samba PDCs and BDCs can coexist. Primary Domain Controller (PDC) Using tdbsam The simplest and most common implementation of a Samba PDC uses the new default tdbsam password database back end. Replacing the aging smbpasswd back end, tdbsam has numerous improvements that are explained in more detail in Section 21.1.8, "Samba Account Information Databases" . The passdb backend directive controls which back end is to be used for the PDC. The following /etc/samba/smb.conf file shows a sample configuration needed to implement a tdbsam password database back end. Example 21.7. An Example Configuration of Primary Domain Controller (PDC) Using tdbsam To provide a functional PDC system which uses tdbsam follow these steps: Adjust the smb.conf configuration file as shown in Example 21.7, "An Example Configuration of Primary Domain Controller (PDC) Using tdbsam " . Add the root user to the Samba password database. You will be prompted to provide a new Samba password for the root user: Start the smb service: Make sure all profile, user, and netlogon directories are created. Add groups that users can be members of: Associate the UNIX groups with their respective Windows groups. Grant access rights to a user or a group. For example, to grant the right to add client machines to the domain on a Samba domain controller, to the members to the Domain Admins group, execute the following command: Keep in mind that Windows systems prefer to have a primary group which is mapped to a domain group such as Domain Users. 
Windows groups and users use the same namespace, thus not allowing the existence of a group and a user with the same name like in UNIX. Note If you need more than one domain controller or have more than 250 users, do not use the tdbsam authentication back end. LDAP is recommended in these cases. Primary Domain Controller (PDC) with Active Directory Although it is possible for Samba to be a member of an Active Directory, it is not possible for Samba to operate as an Active Directory domain controller. 21.1.7. Samba Security Modes There are only two types of security modes for Samba, share-level and user-level , which are collectively known as security levels . Share-level security is deprecated and Red Hat recommends using user-level security instead. User-level security can be implemented in one of three different ways. The different ways of implementing a security level are called security modes . 21.1.7.1. User-Level Security User-level security is the default and recommended setting for Samba. Even if the security = user directive is not listed in the /etc/samba/smb.conf file, it is used by Samba. If the server accepts the client's user name and password, the client can then mount multiple shares without specifying a password for each instance. Samba can also accept session-based user name and password requests. The client maintains multiple authentication contexts by using a unique UID for each logon. In the /etc/samba/smb.conf file, the security = user directive that sets user-level security is: Samba Guest Shares As mentioned above, share-level security mode is deprecated and its use is strongly discouraged. To configure a Samba guest share without using the security = share parameter, follow the procedure below: Procedure 21.4. Configuring Samba Guest Shares Create a username map file, in this example /etc/samba/smbusers , and add the following line to it: Add the following directives to the main section in the /etc/samba/smb.conf file. Also, do not use the valid users directive: The username map directive provides a path to the username map file specified in the previous step. Add the following directive to the share section in the /etc/samba/smb.conf file. Do not use the valid users directive. The following sections describe other implementations of user-level security. Domain Security Mode (User-Level Security) In domain security mode, the Samba server has a machine account (domain security trust account) and causes all authentication requests to be passed through to the domain controllers. The Samba server is made into a domain member server by using the following directives in the /etc/samba/smb.conf file: Active Directory Security Mode (User-Level Security) If you have an Active Directory environment, it is possible to join the domain as a native Active Directory member. Even if a security policy restricts the use of NT-compatible authentication protocols, the Samba server can join an ADS using Kerberos. Samba in Active Directory member mode can accept Kerberos tickets. In the /etc/samba/smb.conf file, the following directives make Samba an Active Directory member server: 21.1.7.2. Share-Level Security With share-level security, the server accepts only a password without an explicit user name from the client. The server expects a password for each share, independent of the user name. There have been recent reports that Microsoft Windows clients have compatibility issues with share-level security servers. This mode is deprecated and Red Hat strongly discourages the use of share-level security.
Follow steps in Procedure 21.4, "Configuring Samba Guest Shares" instead of using the security = share directive. 21.1.8. Samba Account Information Databases The following is a list different back ends you can use with Samba. Other back ends not listed here may also be available. Plain Text Plain text back ends are nothing more than the /etc/passwd type back ends. With a plain text back end, all user names and passwords are sent unencrypted between the client and the Samba server. This method is very insecure and is not recommended for use by any means. It is possible that different Windows clients connecting to the Samba server with plain text passwords cannot support such an authentication method. smbpasswd The smbpasswd back end utilizes a plain ASCII text layout that includes the MS Windows LanMan and NT account, and encrypted password information. The smbpasswd back end lacks the storage of the Windows NT/2000/2003 SAM extended controls. The smbpasswd back end is not recommended because it does not scale well or hold any Windows information, such as RIDs for NT-based groups. The tdbsam back end solves these issues for use in a smaller database (250 users), but is still not an enterprise-class solution. ldapsam_compat The ldapsam_compat back end allows continued OpenLDAP support for use with upgraded versions of Samba. tdbsam The default tdbsam password back end provides a database back end for local servers, servers that do not need built-in database replication, and servers that do not require the scalability or complexity of LDAP. The tdbsam back end includes all of the smbpasswd database information as well as the previously-excluded SAM information. The inclusion of the extended SAM data allows Samba to implement the same account and system access controls as seen with Windows NT/2000/2003/2008-based systems. The tdbsam back end is recommended for 250 users at most. Larger organizations should require Active Directory or LDAP integration due to scalability and possible network infrastructure concerns. ldapsam The ldapsam back end provides an optimal distributed account installation method for Samba. LDAP is optimal because of its ability to replicate its database to any number of servers such as the Red Hat Directory Server or an OpenLDAP Server . LDAP databases are light-weight and scalable, and as such are preferred by large enterprises. Installation and configuration of directory servers is beyond the scope of this chapter. For more information on the Red Hat Directory Server , see the Red Hat Directory Server 9.0 Deployment Guide . For more information on LDAP, see Section 20.1, "OpenLDAP" . If you are upgrading from a version of Samba to 3.0, note that the OpenLDAP schema file ( /usr/share/doc/samba- version /LDAP/samba.schema ) and the Red Hat Directory Server schema file ( /usr/share/doc/samba- version /LDAP/samba-schema-FDS.ldif ) have changed. These files contain the attribute syntax definitions and objectclass definitions that the ldapsam back end needs in order to function properly. As such, if you are using the ldapsam back end for your Samba server, you will need to configure slapd to include one of these schema file. See Section 20.1.3.3, "Extending Schema" for directions on how to do this. Note You need to have the openldap-servers package installed if you want to use the ldapsam back end. To ensure that the package is installed, execute the following command as roots : 21.1.9. 
Samba Network Browsing Network browsing enables Windows and Samba servers to appear in the Windows Network Neighborhood . Inside the Network Neighborhood , icons are represented as servers and if opened, the server's shares and printers that are available are displayed. Network browsing capabilities require NetBIOS over TCP / IP . NetBIOS-based networking uses broadcast ( UDP ) messaging to accomplish browse list management. Without NetBIOS and WINS as the primary method for TCP / IP host name resolution, other methods such as static files ( /etc/hosts ) or DNS , must be used. A domain master browser collates the browse lists from local master browsers on all subnets so that browsing can occur between workgroups and subnets. Also, the domain master browser should preferably be the local master browser for its own subnet. 21.1.9.1. Domain Browsing By default, a Windows server PDC for a domain is also the domain master browser for that domain. A Samba server must not be set up as a domain master server in this type of situation. For subnets that do not include the Windows server PDC, a Samba server can be implemented as a local master browser. Configuring the /etc/samba/smb.conf file for a local master browser (or no browsing at all) in a domain controller environment is the same as workgroup configuration (see Section 21.1.4, "Configuring a Samba Server" ). 21.1.9.2. WINS (Windows Internet Name Server) Either a Samba server or a Windows NT server can function as a WINS server. When a WINS server is used with NetBIOS enabled, UDP unicasts can be routed which allows name resolution across networks. Without a WINS server, the UDP broadcast is limited to the local subnet and therefore cannot be routed to other subnets, workgroups, or domains. If WINS replication is necessary, do not use Samba as your primary WINS server, as Samba does not currently support WINS replication. In a mixed NT/2000/2003/2008 server and Samba environment, it is recommended that you use the Microsoft WINS capabilities. In a Samba-only environment, it is recommended that you use only one Samba server for WINS. The following is an example of the /etc/samba/smb.conf file in which the Samba server is serving as a WINS server: Example 21.8. An Example Configuration of WINS Server Note All servers (including Samba) should connect to a WINS server to resolve NetBIOS names. Without WINS, browsing only occurs on the local subnet. Furthermore, even if a domain-wide list is somehow obtained, hosts cannot be resolved for the client without WINS. 21.1.10. Samba with CUPS Printing Support Samba allows client machines to share printers connected to the Samba server. In addition, Samba also allows client machines to send documents built in Linux to Windows printer shares. Although there are other printing systems that function with Red Hat Enterprise Linux, CUPS (Common UNIX Print System) is the recommended printing system due to its close integration with Samba. 21.1.10.1. Simple smb.conf Settings The following example shows a very basic /etc/samba/smb.conf configuration for CUPS support: Example 21.9. An Example Configuration of Samba with CUPS Support Other printing configurations are also possible. To add additional security and privacy for printing confidential documents, users can have their own print spooler not located in a public path. If a job fails, other users would not have access to the file. The printUSD directive contains printer drivers for clients to access if not available locally. 
The printUSD directive is optional and may not be required depending on the organization. Setting browseable to yes enables the printer to be viewed in the Windows Network Neighborhood, provided the Samba server is set up correctly in the domain or workgroup. 21.1.11. Samba Distribution Programs findsmb findsmb <subnet_broadcast_address> The findsmb program is a Perl script which reports information about SMB -aware systems on a specific subnet. If no subnet is specified the local subnet is used. Items displayed include IP address, NetBIOS name, workgroup or domain name, operating system, and version. The findsmb command is used in the following format: The following example shows the output of executing findsmb as any valid user on a system: net net <protocol> <function> <misc_options> <target_options> The net utility is similar to the net utility used for Windows and MS-DOS. The first argument is used to specify the protocol to use when executing a command. The protocol option can be ads , rap , or rpc for specifying the type of server connection. Active Directory uses ads , Win9x/NT3 uses rap , and Windows NT4/2000/2003/2008 uses rpc . If the protocol is omitted, net automatically tries to determine it. The following example displays a list of the available shares for a host named wakko : The following example displays a list of Samba users for a host named wakko : nmblookup nmblookup <options> <netbios_name> The nmblookup program resolves NetBIOS names into IP addresses. The program broadcasts its query on the local subnet until the target machine replies. The following example displays the IP address of the NetBIOS name trek : pdbedit pdbedit <options> The pdbedit program manages accounts located in the SAM database. All back ends are supported including smbpasswd , LDAP, and the tdb database library. The following are examples of adding, deleting, and listing users: rpcclient rpcclient <server> <options> The rpcclient program issues administrative commands using Microsoft RPCs, which provide access to the Windows administration graphical user interfaces (GUIs) for systems management. This is most often used by advanced users that understand the full complexity of Microsoft RPCs. smbcacls smbcacls <//server/share> <filename> <options> The smbcacls program modifies Windows ACLs on files and directories shared by a Samba server or a Windows server. smbclient smbclient <//server/share> <password> <options> The smbclient program is a versatile UNIX client which provides functionality similar to the ftp utility. smbcontrol smbcontrol -i <options> smbcontrol <options> <destination> <messagetype> <parameters> The smbcontrol program sends control messages to running smbd , nmbd , or winbindd daemons. Executing smbcontrol -i runs commands interactively until a blank line or a 'q' is entered. smbpasswd smbpasswd <options> <username> <password> The smbpasswd program manages encrypted passwords. This program can be run by a superuser to change any user's password and also by an ordinary user to change their own Samba password. smbspool smbspool <job> <user> <title> <copies> <options> <filename> The smbspool program is a CUPS-compatible printing interface to Samba. Although designed for use with CUPS printers, smbspool can work with non-CUPS printers as well. smbstatus smbstatus <options> The smbstatus program displays the status of current connections to a Samba server. 
smbtar smbtar <options> The smbtar program performs backups and restores of Windows-based share files and directories to a local tape archive. Though similar to the tar utility, the two are not compatible. testparm testparm <options> <filename> <hostname IP_address> The testparm program checks the syntax of the /etc/samba/smb.conf file. If your smb.conf file is in the default location ( /etc/samba/smb.conf ) you do not need to specify the location. Specifying the host name and IP address to the testparm program verifies that the hosts.allow and hosts.deny files are configured correctly. The testparm program also displays a summary of your smb.conf file and the server's role (stand-alone, domain, etc.) after testing. This is convenient when debugging as it excludes comments and concisely presents information for experienced administrators to read. For example: wbinfo wbinfo <options> The wbinfo program displays information from the winbindd daemon. The winbindd daemon must be running for wbinfo to work. 21.1.12. Additional Resources The following sections give you the means to explore Samba in greater detail. Installed Documentation /usr/share/doc/samba-< version-number >/ - All additional files included with the Samba distribution. This includes all helper scripts, sample configuration files, and documentation. See the following man pages for detailed information on specific Samba features: smb.conf (5) samba (7) smbd (8) nmbd (8) winbindd (8) Related Books The Official Samba-3 HOWTO-Collection by John H. Terpstra and Jelmer R. Vernooij; Prentice Hall - The official Samba-3 documentation as issued by the Samba development team. This is more of a reference guide than a step-by-step guide. Samba-3 by Example by John H. Terpstra; Prentice Hall - This is another official release issued by the Samba development team which discusses detailed examples of OpenLDAP, DNS, DHCP, and printing configuration files. This has step-by-step related information that helps in real-world implementations. Using Samba, 2nd Edition by Jay Ts, Robert Eckstein, and David Collier-Brown; O'Reilly - A good resource for novice to advanced users, which includes comprehensive reference material. Useful Websites http://www.samba.org/ - Homepage for the Samba distribution and all official documentation created by the Samba development team. Many resources are available in HTML and PDF formats, while others are only available for purchase. Although many of these links are not Red Hat Enterprise Linux specific, some concepts may apply. http://samba.org/samba/archives.html - Active email lists for the Samba community. Enabling digest mode is recommended due to high levels of list activity. Samba newsgroups - Samba threaded newsgroups, such as www.gmane.org , that use the NNTP protocol are also available. This is an alternative to receiving mailing list emails.
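Before consulting those resources, a quick sanity check of a freshly edited configuration might use several of the utilities described in Section 21.1.11 together. The following is only a hedged illustration — the host and user name are hypothetical examples, and wbinfo assumes the winbindd daemon is running:

~]# testparm
~]# smbclient -L localhost -U tfox
~]# wbinfo -u
~]# smbstatus

testparm confirms that /etc/samba/smb.conf parses cleanly, smbclient -L lists the shares the server actually exports, wbinfo -u verifies that winbindd answers user queries, and smbstatus shows who is currently connected.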
[ "~]# yum install samba", "~]# yum install cifs-utils", "~]# echo 0x37 > /proc/fs/cifs/SecurityFlags", "~]# service smb restart", "workgroup = WORKGROUPNAME server string = BRIEF COMMENT ABOUT SERVER", "[ sharename ] comment = Insert a comment here path = /home/share/ valid users = tfox carole writable = yes create mask = 0765", "~]# service smb start", "~]# service smb stop", "~]# service smb restart", "~]# service smb condrestart", "~]# service smb reload", "[global] workgroup = DOCS netbios name = DOCS_SRV security = user guest account = nobody # default value map to guest = Bad user [data] comment = Documentation Samba Server path = /export read only = yes guest ok = yes", "[global] workgroup = DOCS security = user guest account = nobody # default value map to guest = Bad user [data] comment = Data path = /export guest ok = yes writeable = yes force user = user force group = group", "[global] workgroup = DOCS netbios name = DOCS_SRV security = user map to guest = Bad user printing = cups [printers] comment = All Printers path = /var/spool/samba guest ok = yes printable = yes use client driver = yes browseable = yes", "[global] workgroup = DOCS netbios name = DOCS_SRV security = user printcap name = cups disable spools = yes show add printer wizard = no printing = cups [homes] comment = Home Directories valid users = %S read only = no browseable = no [public] comment = Data path = /export force user = docsbot force group = users guest ok = yes [printers] comment = All Printers path = /var/spool/samba printer admin = john, ed, @admins create mask = 0600 guest ok = yes printable = yes use client driver = yes browseable = yes", "[global] realm = EXAMPLE.COM security = ADS encrypt passwords = yes Optional. Use only if Samba cannot determine the Kerberos server automatically. password server = kerberos.example.com", "[logging] default = FILE:/var/log/krb5libs.log [libdefaults] default_realm = AD.EXAMPLE.COM dns_lookup_realm = true dns_lookup_kdc = true ticket_lifetime = 24h renew_lifetime = 7d rdns = false forwardable = false Define only if DNS lookups are not working AD.EXAMPLE.COM = { kdc = server.ad.example.com admin_server = server.ad.example.com master_kdc = server.ad.example.com } Define only if DNS lookups are not working .ad.example.com = AD.EXAMPLE.COM ad.example.com = AD.EXAMPLE.COM", "~]# net ads join -U administrator% password", "[global] workgroup = DOCS netbios name = DOCS_SRV security = domain [homes] comment = Home Directories valid users = %S read only = no browseable = no [public] comment = Data path = /export force user = docsbot force group = users guest ok = yes", "~]# net rpc join -U administrator%password", "[global] workgroup = DOCS netbios name = DOCS_SRV passdb backend = tdbsam security = user add user script = /usr/sbin/useradd -m \"%u\" delete user script = /usr/sbin/userdel -r \"%u\" add group script = /usr/sbin/groupadd \"%g\" delete group script = /usr/sbin/groupdel \"%g\" add user to group script = /usr/sbin/usermod -G \"%g\" \"%u\" add machine script = /usr/sbin/useradd -s /bin/false -d /dev/null -g machines \"%u\" The following specifies the default logon script Per user logon scripts can be specified in the user account using pdbedit logon script = logon.bat This sets the default profile path. 
Set per user paths with pdbedit logon drive = H: domain logons = yes os level = 35 preferred master = yes domain master = yes [homes] comment = Home Directories valid users = %S read only = no [netlogon] comment = Network Logon Service path = /var/lib/samba/netlogon/scripts browseable = no read only = no For profiles to work, create a user directory under the path shown. mkdir -p /var/lib/samba/profiles/john [Profiles] comment = Roaming Profile Share path = /var/lib/samba/profiles read only = no browseable = no guest ok = yes profile acls = yes Other resource shares ...", "~]# smbpasswd -a root New SMB password:", "~]# service smb start", "~]# groupadd -f users ~]# groupadd -f nobody ~]# groupadd -f ntadmins", "~]# net groupmap add ntgroup=\"Domain Users\" unixgroup=users ~]# net groupmap add ntgroup=\"Domain Guests\" unixgroup=nobody ~]# net groupmap add ntgroup=\"Domain Admins\" unixgroup=ntadmins", "~]# net rpc rights grant 'DOCS\\Domain Admins' SetMachineAccountPrivilege -S PDC -U root", "[GLOBAL] security = user", "nobody = guest", "[GLOBAL] security = user map to guest = Bad User username map = /etc/samba/smbusers", "[SHARE] guest ok = yes", "[GLOBAL] security = domain workgroup = MARKETING", "[GLOBAL] security = ADS realm = EXAMPLE.COM password server = kerberos.example.com", "~]# yum install openldap-servers", "[global] wins support = yes", "[global] load printers = yes printing = cups printcap name = cups [printers] comment = All Printers path = /var/spool/samba browseable = no guest ok = yes writable = no printable = yes printer admin = @ntadmins [printUSD] comment = Printer Drivers Share path = /var/lib/samba/drivers write list = ed, john printer admin = ed, john", "~]USD findsmb IP ADDR NETBIOS NAME WORKGROUP/OS/VERSION ------------------------------------------------------------------ 10.1.59.25 VERVE [MYGROUP] [Unix] [Samba 3.0.0-15] 10.1.59.26 STATION22 [MYGROUP] [Unix] [Samba 3.0.2-7.FC1] 10.1.56.45 TREK +[WORKGROUP] [Windows 5.0] [Windows 2000 LAN Manager] 10.1.57.94 PIXEL [MYGROUP] [Unix] [Samba 3.0.0-15] 10.1.57.137 MOBILE001 [WORKGROUP] [Windows 5.0] [Windows 2000 LAN Manager] 10.1.57.141 JAWS +[KWIKIMART] [Unix] [Samba 2.2.7a-security-rollup-fix] 10.1.56.159 FRED +[MYGROUP] [Unix] [Samba 3.0.0-14.3E] 10.1.59.192 LEGION *[MYGROUP] [Unix] [Samba 2.2.7-security-rollup-fix] 10.1.56.205 NANCYN +[MYGROUP] [Unix] [Samba 2.2.7a-security-rollup-fix]", "~]USD net -l share -S wakko Password: Enumerating shared resources (exports) on remote server: Share name Type Description ---------- ---- ----------- data Disk Wakko data share tmp Disk Wakko tmp share IPCUSD IPC IPC Service (Samba Server) ADMINUSD IPC IPC Service (Samba Server)", "~]USD net -l user -S wakko root password: User name Comment ----------------------------- andriusb Documentation joe Marketing lisa Sales", "~]USD nmblookup trek querying trek on 10.1.59.255 10.1.56.45 trek<00>", "~]USD pdbedit -a kristin new password: retype new password: Unix username: kristin NT username: Account Flags: [U ] User SID: S-1-5-21-1210235352-3804200048-1474496110-2012 Primary Group SID: S-1-5-21-1210235352-3804200048-1474496110-2077 Full Name: Home Directory: \\\\wakko\\kristin HomeDir Drive: Logon Script: Profile Path: \\\\wakko\\kristin\\profile Domain: WAKKO Account desc: Workstations: Munged dial: Logon time: 0 Logoff time: Mon, 18 Jan 2038 22:14:07 GMT Kickoff time: Mon, 18 Jan 2038 22:14:07 GMT Password last set: Thu, 29 Jan 2004 08:29:28 GMT Password can change: Thu, 29 Jan 2004 08:29:28 GMT Password must change: Mon, 18 Jan 
2038 22:14:07 GMT ~]USD pdbedit -v -L kristin Unix username: kristin NT username: Account Flags: [U ] User SID: S-1-5-21-1210235352-3804200048-1474496110-2012 Primary Group SID: S-1-5-21-1210235352-3804200048-1474496110-2077 Full Name: Home Directory: \\\\wakko\\kristin HomeDir Drive: Logon Script: Profile Path: \\\\wakko\\kristin\\profile Domain: WAKKO Account desc: Workstations: Munged dial: Logon time: 0 Logoff time: Mon, 18 Jan 2038 22:14:07 GMT Kickoff time: Mon, 18 Jan 2038 22:14:07 GMT Password last set: Thu, 29 Jan 2004 08:29:28 GMT Password can change: Thu, 29 Jan 2004 08:29:28 GMT Password must change: Mon, 18 Jan 2038 22:14:07 GMT ~]USD pdbedit -L andriusb:505: joe:503: lisa:504: kristin:506: ~]USD pdbedit -x joe ~]USD pdbedit -L andriusb:505: lisa:504: kristin:506:", "~]USD testparm Load smb config files from /etc/samba/smb.conf Processing section \"[homes]\" Processing section \"[printers]\" Processing section \"[tmp]\" Processing section \"[html]\" Loaded services file OK. Server role: ROLE_STANDALONE Press enter to see a dump of your service definitions <enter> Global parameters [global] workgroup = MYGROUP server string = Samba Server security = SHARE log file = /var/log/samba/%m.log max log size = 50 socket options = TCP_NODELAY SO_RCVBUF=8192 SO_SNDBUF=8192 dns proxy = no [homes] comment = Home Directories read only = no browseable = no [printers] comment = All Printers path = /var/spool/samba printable = yes browseable = no [tmp] comment = Wakko tmp path = /tmp guest only = yes [html] comment = Wakko www path = /var/www/html force user = andriusb force group = users read only = no guest only = yes" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-file_and_print_servers
4.313. system-config-lvm
4.313. system-config-lvm 4.313.1. RHBA-2011:1710 - system-config-lvm bug fix update An updated system-config-lvm package that fixes one bug is now available for Red Hat Enterprise Linux 6. The system-config-lvm package contains a utility for configuring logical volumes using a graphical user interface. Bug Fix BZ# 722895 The system-config-lvm utility incorrectly left mount information in the /etc/fstab configuration file for a logical volume that had been completely removed from the system. This could have caused the system to enter single-user mode after rebooting because it was unable to mount a logical volume in /etc/fstab that no longer existed. This update ensures that system-config-lvm correctly removes the fstab entry for any logical volume that is removed. All users of system-config-lvm are advised to upgrade to this updated package, which fixes this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/system-config-lvm
Chapter 13. SelfSubjectRulesReview [authorization.k8s.io/v1]
Chapter 13. SelfSubjectRulesReview [authorization.k8s.io/v1] Description SelfSubjectRulesReview enumerates the set of actions the current user can perform within a namespace. The returned list of actions may be incomplete depending on the server's authorization mode, and any errors experienced during the evaluation. SelfSubjectRulesReview should be used by UIs to show/hide actions, or to quickly let an end user reason about their permissions. It should NOT Be used by external systems to drive authorization decisions as this raises confused deputy, cache lifetime/revocation, and correctness concerns. SubjectAccessReview, and LocalAccessReview are the correct way to defer authorization decisions to the API server. Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. status object SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. 13.1.1. .spec Description SelfSubjectRulesReviewSpec defines the specification for SelfSubjectRulesReview. Type object Property Type Description namespace string Namespace to evaluate rules for. Required. 13.1.2. .status Description SubjectRulesReviewStatus contains the result of a rules check. This check can be incomplete depending on the set of authorizers the server is configured with and any errors experienced during evaluation. Because authorization rules are additive, if a rule appears in a list it's safe to assume the subject has that permission, even if that list is incomplete. Type object Required resourceRules nonResourceRules incomplete Property Type Description evaluationError string EvaluationError can appear in combination with Rules. It indicates an error occurred during rule evaluation, such as an authorizer that doesn't support rule evaluation, and that ResourceRules and/or NonResourceRules may be incomplete. incomplete boolean Incomplete is true when the rules returned by this call are incomplete. This is most commonly encountered when an authorizer, such as an external authorizer, doesn't support rules evaluation. nonResourceRules array NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 
nonResourceRules[] object NonResourceRule holds information that describes a rule for the non-resource resourceRules array ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. resourceRules[] object ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. 13.1.3. .status.nonResourceRules Description NonResourceRules is the list of actions the subject is allowed to perform on non-resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 13.1.4. .status.nonResourceRules[] Description NonResourceRule holds information that describes a rule for the non-resource Type object Required verbs Property Type Description nonResourceURLs array (string) NonResourceURLs is a set of partial urls that a user should have access to. "*"s are allowed, but only as the full, final step in the path. "*" means all. verbs array (string) Verb is a list of kubernetes non-resource API verbs, like: get, post, put, delete, patch, head, options. "*" means all. 13.1.5. .status.resourceRules Description ResourceRules is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type array 13.1.6. .status.resourceRules[] Description ResourceRule is the list of actions the subject is allowed to perform on resources. The list ordering isn't significant, may contain duplicates, and possibly be incomplete. Type object Required verbs Property Type Description apiGroups array (string) APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "*" means all. resourceNames array (string) ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. "*" means all. resources array (string) Resources is a list of resources this rule applies to. "*" means all in the specified apiGroups. "*/foo" represents the subresource 'foo' for all resources in the specified apiGroups. verbs array (string) Verb is a list of kubernetes resource API verbs, like: get, list, watch, create, update, delete, proxy. "*" means all. 13.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/selfsubjectrulesreviews POST : create a SelfSubjectRulesReview 13.2.1. /apis/authorization.k8s.io/v1/selfsubjectrulesreviews Table 13.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a SelfSubjectRulesReview Table 13.2. Body parameters Parameter Type Description body SelfSubjectRulesReview schema Table 13.3. HTTP responses HTTP code Response body 200 - OK SelfSubjectRulesReview schema 201 - Created SelfSubjectRulesReview schema 202 - Accepted SelfSubjectRulesReview schema 401 - Unauthorized Empty
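For illustration, a minimal request body for this endpoint might look like the following; the namespace is a hypothetical example, and the command assumes you are already logged in with the OpenShift CLI:

apiVersion: authorization.k8s.io/v1
kind: SelfSubjectRulesReview
spec:
  namespace: my-project

Submitting it with oc create -f rulesreview.yaml -o yaml returns the same object with status.resourceRules and status.nonResourceRules populated for the current user in that namespace.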
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authorization_apis/selfsubjectrulesreview-authorization-k8s-io-v1
Chapter 1. Kubernetes overview
Chapter 1. Kubernetes overview Kubernetes is an open source container orchestration tool developed by Google. You can run and manage container-based workloads by using Kubernetes. The most common Kubernetes use case is to deploy an array of interconnected microservices, building an application in a cloud native way. You can create Kubernetes clusters that can span hosts across on-premise, public, private, or hybrid clouds. Traditionally, applications were deployed on top of a single operating system. With virtualization, you can split the physical host into several virtual hosts. Working on virtual instances on shared resources is not optimal for efficiency and scalability. Because a virtual machine (VM) consumes as many resources as a physical machine, providing resources to a VM such as CPU, RAM, and storage can be expensive. Also, you might see your application degrading in performance due to virtual instance usage on shared resources. Figure 1.1. Evolution of container technologies for classical deployments To solve this problem, you can use containerization technologies that segregate applications in a containerized environment. Similar to a VM, a container has its own filesystem, vCPU, memory, process space, dependencies, and more. Containers are decoupled from the underlying infrastructure, and are portable across clouds and OS distributions. Containers are inherently much lighter than a fully-featured OS, and are lightweight isolated processes that run on the operating system kernel. VMs are slower to boot, and are an abstraction of physical hardware. VMs run on a single machine with the help of a hypervisor. You can perform the following actions by using Kubernetes: Sharing resources Orchestrating containers across multiple hosts Installing new hardware configurations Running health checks and self-healing applications Scaling containerized applications 1.1. Kubernetes components Table 1.1. Kubernetes components Component Purpose kube-proxy Runs on every node in the cluster and maintains the network traffic between the Kubernetes resources. kube-controller-manager Governs the state of the cluster. kube-scheduler Allocates pods to nodes. etcd Stores cluster data. kube-apiserver Validates and configures data for the API objects. kubelet Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running. kubectl Allows you to define how you want to run workloads. Use the kubectl command to interact with the kube-apiserver . Node Node is a physical machine or a VM in a Kubernetes cluster. The control plane manages every node and schedules pods across the nodes in the Kubernetes cluster. container runtime container runtime runs containers on a host operating system. You must install a container runtime on each node so that pods can run on the node. Persistent storage Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data. container-registry Stores and accesses the container images. Pod The pod is the smallest logical unit in Kubernetes. A pod contains one or more containers to run in a worker node. 1.2. Kubernetes resources A custom resource is an extension of the Kubernetes API. You can customize Kubernetes clusters by using custom resources. Operators are software extensions which manage applications and their components with the help of custom resources. Kubernetes uses a declarative model when you want a fixed desired result while dealing with cluster resources. 
By using Operators, Kubernetes defines its states in a declarative way. You can modify the Kubernetes cluster resources by using imperative commands. An Operator acts as a control loop which continuously compares the desired state of resources with the actual state of resources and puts actions in place to bring reality in line with the desired state. Figure 1.2. Kubernetes cluster overview Table 1.2. Kubernetes Resources Resource Purpose Service Kubernetes uses services to expose a running application on a set of pods. ReplicaSets Kubernetes uses the ReplicaSets to maintain the constant pod number. Deployment A resource object that maintains the life cycle of an application. Kubernetes is a core component of an OpenShift Container Platform. You can use OpenShift Container Platform for developing and running containerized applications. With its foundation in Kubernetes, the OpenShift Container Platform incorporates the same technology that serves as the engine for massive telecommunications, streaming video, gaming, banking, and other applications. You can extend your containerized applications beyond a single cloud to on-premise and multi-cloud environments by using the OpenShift Container Platform. Figure 1.3. Architecture of Kubernetes A cluster is a single computational unit consisting of multiple nodes in a cloud environment. A Kubernetes cluster includes a control plane and worker nodes. You can run Kubernetes containers across various machines and environments. The control plane node controls and maintains the state of a cluster. You can run the Kubernetes application by using worker nodes. You can use the Kubernetes namespace to differentiate cluster resources in a cluster. Namespace scoping is applicable for resource objects, such as deployment, service, and pods. You cannot use namespace for cluster-wide resource objects such as storage class, nodes, and persistent volumes. 1.3. Kubernetes conceptual guidelines Before getting started with the OpenShift Container Platform, consider these conceptual guidelines of Kubernetes: Start with one or more worker nodes to run the container workloads. Manage the deployment of those workloads from one or more control plane nodes. Wrap containers in a deployment unit called a pod. By using pods provides extra metadata with the container and offers the ability to group several containers in a single deployment entity. Create special kinds of assets. For example, services are represented by a set of pods and a policy that defines how they are accessed. This policy allows containers to connect to the services that they need even if they do not have the specific IP addresses for the services. Replication controllers are another special asset that indicates how many pod replicas are required to run at a time. You can use this capability to automatically scale your application to adapt to its current demand. The API to OpenShift Container Platform cluster is 100% Kubernetes. Nothing changes between a container running on any other Kubernetes and running on OpenShift Container Platform. No changes to the application. OpenShift Container Platform brings added-value features to provide enterprise-ready enhancements to Kubernetes. OpenShift Container Platform CLI tool ( oc ) is compatible with kubectl . While the Kubernetes API is 100% accessible within OpenShift Container Platform, the kubectl command-line lacks many features that could make it more user-friendly. OpenShift Container Platform offers a set of features and command-line tool like oc . 
Although Kubernetes excels at managing your applications, it does not specify or manage platform-level requirements or deployment processes. Powerful and flexible platform management tools and processes are important benefits that OpenShift Container Platform offers. You must add authentication, networking, security, monitoring, and logs management to your containerization platform.
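As a sketch of the declarative model and the resource types listed in Table 1.2 above, a minimal Deployment manifest might look like the following; the names, image, and replica count are hypothetical examples, not values required by OpenShift Container Platform:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: quay.io/example/hello-app:latest

Applying it with oc apply -f deployment.yaml (or kubectl apply -f deployment.yaml) asks the cluster to keep three replicas of the pod running; the Deployment's ReplicaSet acts as the control loop that reconciles the actual pod count with this desired state.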
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/getting_started/kubernetes-overview
function::json_set_prefix
function::json_set_prefix Name function::json_set_prefix - Set the metric prefix. Synopsis Arguments prefix The prefix name to be used. Description This function sets the " prefix " , which is the name of the base of the metric hierarchy. Calling this function is optional, by default the name of the systemtap module is used.
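A minimal sketch of how this function might be called from a SystemTap script follows; the prefix string is a hypothetical example, and the rest of the JSON metric setup (adding metrics with the other json_* tapset functions) is omitted here:

probe begin {
  json_set_prefix("webapp")
}

With this in place, metrics exposed through the JSON tapset appear under the "webapp" prefix instead of under the name of the systemtap module.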
[ "json_set_prefix:long(prefix:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-json-set-prefix
Chapter 4. Modifying a compute machine set
Chapter 4. Modifying a compute machine set You can modify a compute machine set, such as adding labels, changing the instance type, or changing block storage. On Red Hat Virtualization (RHV), you can also change a compute machine set to provision new nodes on a different storage domain. Note If you need to scale a compute machine set without making other changes, see Manually scaling a compute machine set . 4.1. Modifying a compute machine set by using the CLI You can modify the configuration of a compute machine set, and then propagate the changes to the machines in your cluster by using the CLI. By updating the compute machine set configuration, you can enable features or change the properties of the machines it creates. When you modify a compute machine set, your changes only apply to compute machines that are created after you save the updated MachineSet custom resource (CR). The changes do not affect existing machines. Note Changes made in the underlying cloud provider are not reflected in the Machine or MachineSet CRs. To adjust instance configuration in cluster-managed infrastructure, use the cluster-side resources. You can replace the existing machines with new ones that reflect the updated configuration by scaling the compute machine set to create twice the number of replicas and then scaling it down to the original number of replicas. If you need to scale a compute machine set without making other changes, you do not need to delete the machines. Note By default, the OpenShift Container Platform router pods are deployed on compute machines. Because the router is required to access some cluster resources, including the web console, do not scale the compute machine set to 0 unless you first relocate the router pods. Prerequisites Your OpenShift Container Platform cluster uses the Machine API. You are logged in to the cluster as an administrator by using the OpenShift CLI ( oc ). Procedure List the compute machine sets in your cluster by running the following command: USD oc get machinesets.machine.openshift.io -n openshift-machine-api Example output NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m Edit a compute machine set by running the following command: USD oc edit machinesets.machine.openshift.io <machine_set_name> \ -n openshift-machine-api Note the value of the spec.replicas field, as you need it when scaling the machine set to apply the changes. apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1 # ... 1 The examples in this procedure show a compute machine set that has a replicas value of 2 . Update the compute machine set CR with the configuration options that you want and save your changes. 
List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h For each machine that is managed by the updated compute machine set, set the delete annotation by running the following command: USD oc annotate machine.machine.openshift.io/<machine_name_original_1> \ -n openshift-machine-api \ machine.openshift.io/delete-machine="true" Scale the compute machine set to twice the number of replicas by running the following command: USD oc scale --replicas=4 \ 1 machineset.machine.openshift.io <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 is doubled to 4 . List the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s When the new machines are in the Running phase, you can scale the compute machine set to the original number of replicas. Scale the compute machine set to the original number of replicas by running the following command: USD oc scale --replicas=2 \ 1 machineset.machine.openshift.io <machine_set_name> \ -n openshift-machine-api 1 The original example value of 2 . Verification To verify that a machine created by the updated machine set has the correct configuration, examine the relevant fields in the CR for one of the new machines by running the following command: USD oc describe machine.machine.openshift.io <machine_name_updated_1> \ -n openshift-machine-api To verify that the compute machines without the updated configuration are deleted, list the machines that are managed by the updated compute machine set by running the following command: USD oc get machines.machine.openshift.io \ -n openshift-machine-api \ -l machine.openshift.io/cluster-api-machineset=<machine_set_name> Example output while deletion is in progress for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s Example output when deletion is complete for an AWS cluster NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s Additional resources Lifecycle hooks for the machine deletion phase 4.2. Migrating nodes to a different storage domain on RHV You can migrate the OpenShift Container Platform control plane and compute nodes to a different storage domain in a Red Hat Virtualization (RHV) cluster. 4.2.1. 
Migrating compute nodes to a different storage domain in RHV Prerequisites You are logged in to the Manager. You have the name of the target storage domain. Procedure Identify the virtual machine template by running the following command: USD oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{"\n"}' machineset -A Create a new virtual machine in the Manager, based on the template you identified. Leave all other settings unchanged. For details, see Creating a Virtual Machine Based on a Template in the Red Hat Virtualization Virtual Machine Management Guide . Tip You do not need to start the new virtual machine. Create a new template from the new virtual machine. Specify the target storage domain under Target . For details, see Creating a Template in the Red Hat Virtualization Virtual Machine Management Guide . Add a new compute machine set to the OpenShift Container Platform cluster with the new template. Get the details of the current compute machine set by running the following command: USD oc get machineset -o yaml Use these details to create a compute machine set. For more information see Creating a compute machine set . Enter the new virtual machine template name in the template_name field. Use the same template name you used in the New template dialog in the Manager. Note the names of both the old and new compute machine sets. You need to refer to them in subsequent steps. Migrate the workloads. Scale up the new compute machine set. For details on manually scaling compute machine sets, see Scaling a compute machine set manually . OpenShift Container Platform moves the pods to an available worker when the old machine is removed. Scale down the old compute machine set. Remove the old compute machine set by running the following command: USD oc delete machineset <machineset-name> Additional resources Creating a compute machine set Scaling a compute machine set manually Controlling pod placement using the scheduler 4.2.2. Migrating control plane nodes to a different storage domain on RHV OpenShift Container Platform does not manage control plane nodes, so they are easier to migrate than compute nodes. You can migrate them like any other virtual machine on Red Hat Virtualization (RHV). Perform this procedure for each node separately. Prerequisites You are logged in to the Manager. You have identified the control plane nodes. They are labeled master in the Manager. Procedure Select the virtual machine labeled master . Shut down the virtual machine. Click the Disks tab. Click the virtual machine's disk. Click More Actions and select Move . Select the target storage domain and wait for the migration process to complete. Start the virtual machine. Verify that the OpenShift Container Platform cluster is stable: USD oc get nodes The output should display the node with the status Ready . Repeat this procedure for each control plane node.
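Returning to the CR update step in Section 4.1, one possible edit — shown here only as a hedged sketch for an AWS cluster, with a placeholder machine set name and a hypothetical instance type — is to change the instance type in the provider specification:

oc patch machineset.machine.openshift.io <machine_set_name> \
  -n openshift-machine-api \
  --type=merge \
  -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"instanceType":"m6i.2xlarge"}}}}}}'

Machines created after the patch use the new instance type; existing machines keep the old one until they are replaced by the scale-up and scale-down steps described in that procedure.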
[ "oc get machinesets.machine.openshift.io -n openshift-machine-api", "NAME DESIRED CURRENT READY AVAILABLE AGE <compute_machine_set_name_1> 1 1 1 1 55m <compute_machine_set_name_2> 1 1 1 1 55m", "oc edit machinesets.machine.openshift.io <machine_set_name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machine_set_name> namespace: openshift-machine-api spec: replicas: 2 1", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h", "oc annotate machine.machine.openshift.io/<machine_name_original_1> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=4 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Running m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Provisioned m6i.xlarge us-west-1 us-west-1a 55s <machine_name_updated_2> Provisioning m6i.xlarge us-west-1 us-west-1a 55s", "oc scale --replicas=2 \\ 1 machineset.machine.openshift.io <machine_set_name> -n openshift-machine-api", "oc describe machine.machine.openshift.io <machine_name_updated_1> -n openshift-machine-api", "oc get machines.machine.openshift.io -n openshift-machine-api -l machine.openshift.io/cluster-api-machineset=<machine_set_name>", "NAME PHASE TYPE REGION ZONE AGE <machine_name_original_1> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_original_2> Deleting m6i.xlarge us-west-1 us-west-1a 4h <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 5m41s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 5m41s", "NAME PHASE TYPE REGION ZONE AGE <machine_name_updated_1> Running m6i.xlarge us-west-1 us-west-1a 6m30s <machine_name_updated_2> Running m6i.xlarge us-west-1 us-west-1a 6m30s", "oc get -o jsonpath='{.items[0].spec.template.spec.providerSpec.value.template_name}{\"\\n\"}' machineset -A", "oc get machineset -o yaml", "oc delete machineset <machineset-name>", "oc get nodes" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/machine_management/modifying-machineset
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/preparing_for_disaster_recovery_with_identity_management/proc_providing-feedback-on-red-hat-documentation_preparing-for-disaster-recovery
14.5. Deleting a Snapper Snapshot
14.5. Deleting a Snapper Snapshot To delete a snapshot: You can use the list command to verify that the snapshot was successfully deleted.
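As a hedged example, with a hypothetical configuration named home_config and snapshot number 5:

snapper -c home_config delete 5
snapper -c home_config list

The first command removes the snapshot; the second lists the remaining snapshots so you can confirm that number 5 is gone.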
[ "snapper -c config_name delete snapshot_number" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/snapper-delete
4.8. Managing Nodes with Fence Devices
4.8. Managing Nodes with Fence Devices You can fence a node manually with the following command. If you specify --off this will use the off API call to stonith which will turn the node off instead of rebooting it. In a situation where no stonith device is able to fence a node even if it is no longer active, the cluster may not be able to recover the resources on the node. If this occurs, after manually ensuring that the node is powered down you can run the following command to confirm to the cluster that the node is powered down and free its resources for recovery. Warning If the node you specify is not actually off, but running the cluster software or services normally controlled by the cluster, data corruption/cluster failure will occur.
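For illustration, with a hypothetical cluster node named node02.example.com:

pcs stonith fence node02.example.com --off
pcs stonith confirm node02.example.com

The first command powers the node off through its fence device; the second is only for the case where fencing cannot complete and you have already verified by other means that the node is powered down.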
[ "pcs stonith fence node [--off]", "pcs stonith confirm node" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-fencedevicemanage-haar
Chapter 39. Modifying Link Loss Behavior
Chapter 39. Modifying Link Loss Behavior This section describes how to modify the link loss behavior of devices that use either fibre channel or iSCSI protocols. 39.1. Fibre Channel If a driver implements the Transport dev_loss_tmo callback, access attempts to a device through a link will be blocked when a transport problem is detected. To verify if a device is blocked, run the following command: This command will return blocked if the device is blocked. If the device is operating normally, this command will return running . Procedure 39.1. Determining The State of a Remote Port To determine the state of a remote port, run the following command: This command will return Blocked when the remote port (along with devices accessed through it) are blocked. If the remote port is operating normally, the command will return Online . If the problem is not resolved within dev_loss_tmo seconds, the rport and devices will be unblocked and all I/O running on that device (along with any new I/O sent to that device) will be failed. Procedure 39.2. Changing dev_loss_tmo To change the dev_loss_tmo value, echo in the desired value to the file. For example, to set dev_loss_tmo to 30 seconds, run: For more information about dev_loss_tmo , refer to Section 26.1, "Fibre Channel API" . When a link or target port loss exceeds dev_loss_tmo , the scsi_device and sd N devices are removed. The target port SCSI ID binding is saved. When the target returns, the SCSI address and sd N assignments may be changed. The SCSI address will change if there has been any LUN configuration changes behind the target port. The sd N names may change depending on timing variations during the LUN discovery process or due to LUN configuration change within storage. These assignments are not persistent as described in Chapter 28, Persistent Naming . Refer to section Chapter 28, Persistent Naming for alternative device naming methods that are persistent.
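As a concrete, hedged illustration of the commands above — the device name sdb and the remote port address rport-2:0-1 are hypothetical examples; substitute the values from your own system:

cat /sys/block/sdb/device/state
cat /sys/class/fc_remote_port/rport-2:0-1/port_state
echo 30 > /sys/class/fc_remote_port/rport-2:0-1/dev_loss_tmo

The first two commands show whether the device and its remote port are currently blocked; the echo sets dev_loss_tmo for that remote port to 30 seconds.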
[ "cat /sys/block/ device /device/state", "cat /sys/class/fc_remote_port/rport- H : B : R /port_state", "echo 30 > /sys/class/fc_remote_port/rport- H : B : R /dev_loss_tmo" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/modifying-link-loss-behavior
2.3. Automatic Kerberos Host Keytab Renewal
2.3. Automatic Kerberos Host Keytab Renewal SSSD automatically renews the Kerberos host keytab file in an AD environment if the adcli package is installed. The daemon checks daily if the machine account password is older than the configured value and renews it if necessary. The default renewal interval is 30 days. To change the default: Add the following parameter to the AD provider in your /etc/sssd/sssd.conf file: Restart SSSD: To disable the automatic Kerberos host keytab renewal, set ad_maximum_machine_account_password_age = 0 .
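For illustration, the parameter belongs in the AD domain section of /etc/sssd/sssd.conf ; the domain name and the 60-day interval below are hypothetical examples:

[domain/ad.example.com]
id_provider = ad
ad_maximum_machine_account_password_age = 60

After saving the file, restart SSSD with systemctl restart sssd for the new interval to take effect.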
[ "ad_maximum_machine_account_password_age = value_in_days", "systemctl restart sssd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/sssd-auto-keytab-renewal
6.2. Default Log File Locations
6.2. Default Log File Locations These are the log files that get created for the default logging configurations. The default configuration writes the server log files using periodic log handlers. Table 6.1. Default Log File for a standalone server Log File Description EAP_HOME /standalone/log/server.log Server Log. Contains all server log messages, including server startup messages. EAP_HOME /standalone/log/gc.log Garbage collection log. Contains details of all garbage collection. Table 6.2. Default Log Files for a managed domain Log File Description EAP_HOME /domain/log/host-controller.log Host Controller boot log. Contains log messages related to the startup of the host controller. EAP_HOME /domain/log/process-controller.log Process controller boot log. Contains log messages related to the startup of the process controller. EAP_HOME /domain/servers/ SERVERNAME /log/server.log The server log for the named server. Contains all log messages for that server, including server startup messages.
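For example, to follow the server log of a standalone server — assuming EAP_HOME is the installation directory, as elsewhere in this guide:

tail -f EAP_HOME/standalone/log/server.log

The same approach works for a managed domain by substituting the host controller, process controller, or per-server log paths from the table above.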
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/default_log_file_locations
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the Operator on the source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.18 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster . Install the legacy Migration Toolkit for Containers Operator on the OpenShift Container Platform 3 source cluster from the command line interface. Configure object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 7.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters using the Velero API to drive migrations. You must use the compatible MTC version for migrating your OpenShift Container Platform clusters. For the migration to succeed, both your source cluster and the destination cluster must use the same version of MTC. MTC 1.7 supports migrations from OpenShift Container Platform 3.11 to 4.17. MTC 1.8 only supports migrations from OpenShift Container Platform 4.14 and later. Table 7.1. MTC compatibility: Migrating from OpenShift Container Platform 3 to 4 Details OpenShift Container Platform 3.11 OpenShift Container Platform 4.14 or later Stable MTC version MTC v.1.7. z MTC v.1.8. z Installation As described in this guide Install with OLM, release channel release-v1.8 Edge cases exist where network restrictions prevent OpenShift Container Platform 4 clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a OpenShift Container Platform 4 cluster in the cloud, the OpenShift Container Platform 4 cluster might have trouble connecting to the OpenShift Container Platform 3.11 cluster. In this case, it is possible to designate the OpenShift Container Platform 3.11 cluster as the control cluster and push workloads to the remote OpenShift Container Platform 4 cluster. 7.2. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.18 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.18 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . 
Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 7.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must create an image stream secret and copy it to each node in the cluster. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.18. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD podman login registry.redhat.io Download the operator.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: podman cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your OpenShift Container Platform source cluster. 
Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 7.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.18, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 7.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 7.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 7.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. 
Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 7.4.1.3. Known issue Migration fails with error Upgrade request required The migration controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 7.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 7.4.2.1. NetworkPolicy configuration 7.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 7.4.2.1.2. Ingress traffic to Rsync pods The following policy allows all ingress traffic to Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 7.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be set up between two clusters.
Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 7.4.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 7.4.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 7.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 7.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 7.5. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 7.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. 
If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 7.5.2. Retrieving Multicloud Object Gateway credentials Note Although the MCG Operator is deprecated , the MCG plugin is still available for OpenShift Data Foundation. To download the plugin, browse to Download Red Hat OpenShift Data Foundation and download the appropriate MCG plugin for your operating system. Prerequisites You must deploy OpenShift Data Foundation by using the appropriate Red Hat OpenShift Data Foundation deployment guide . 7.5.3. Additional resources Procedure Disconnected environment in the Red Hat OpenShift Data Foundation documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 7.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero')
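As an optional follow-up that is not part of the documented procedure, you can confirm that the commands above left no migration or Velero cluster-scoped resources behind:

# Search the remaining cluster-scoped resources for MTC and Velero leftovers;
# no output means the cleanup removed everything targeted above.
oc get crds,clusterroles,clusterrolebindings -o name | grep -E 'migration.openshift.io|velero'

# The openshift-migration project itself is not removed by the commands above;
# this shows whether it still exists on the cluster.
oc get namespace openshift-migration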
[ "podman login registry.redhat.io", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "cp USD(podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
rsync_endpoint_type: [NodePort|ClusterIP|Route]", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/migrating_from_version_3_to_4/installing-restricted-3-4
Chapter 5. Running Red Hat JBoss Data Virtualization
Chapter 5. Running Red Hat JBoss Data Virtualization 5.1. Starting JBoss Data Virtualization You can run JBoss Data Virtualization by starting the JBoss EAP server . To start the JBoss EAP server : Red Hat Enterprise Linux Open a terminal and enter the command: USD EAP_HOME/bin/standalone.sh Microsoft Windows Open a terminal and enter the command: USD EAP_HOME\bin\standalone.bat Note To verify that there have been no errors, check the server log: EAP_HOME/MODE/log/server.log . You can also verify this by opening the Management Console and logging in using the username and password of a registered JBoss EAP Management User.
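For example, on Red Hat Enterprise Linux you can follow the server log while the server starts. The commands below are an illustrative sketch that assumes the default standalone mode, with EAP_HOME standing in for your installation directory:

# Follow the server log during startup (press Ctrl+C to stop following).
tail -f EAP_HOME/standalone/log/server.log

# Search the log for reported errors once startup has finished.
grep -i 'error' EAP_HOME/standalone/log/server.log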
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/chap-starting_and_stopping_the_product
Chapter 9. Lease [coordination.k8s.io/v1]
Chapter 9. Lease [coordination.k8s.io/v1] Description Lease defines a lease concept. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object LeaseSpec is a specification of a Lease. 9.1.1. .spec Description LeaseSpec is a specification of a Lease. Type object Property Type Description acquireTime MicroTime acquireTime is a time when the current lease was acquired. holderIdentity string holderIdentity contains the identity of the holder of a current lease. If Coordinated Leader Election is used, the holder identity must be equal to the elected LeaseCandidate.metadata.name field. leaseDurationSeconds integer leaseDurationSeconds is a duration that candidates for a lease need to wait to force acquire it. This is measured against the time of last observed renewTime. leaseTransitions integer leaseTransitions is the number of transitions of a lease between holders. preferredHolder string PreferredHolder signals to a lease holder that the lease has a more optimal holder and should be given up. This field can only be set if Strategy is also set. renewTime MicroTime renewTime is a time when the current holder of a lease has last updated the lease. strategy string Strategy indicates the strategy for picking the leader for coordinated leader election. If the field is not specified, there is no active coordination for this lease. (Alpha) Using this field requires the CoordinatedLeaderElection feature gate to be enabled. 9.2. API endpoints The following API endpoints are available: /apis/coordination.k8s.io/v1/leases GET : list or watch objects of kind Lease /apis/coordination.k8s.io/v1/watch/leases GET : watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases DELETE : delete collection of Lease GET : list or watch objects of kind Lease POST : create a Lease /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases GET : watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases/{name} DELETE : delete a Lease GET : read the specified Lease PATCH : partially update the specified Lease PUT : replace the specified Lease /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases/{name} GET : watch changes to an object of kind Lease. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 9.2.1. /apis/coordination.k8s.io/v1/leases HTTP method GET Description list or watch objects of kind Lease Table 9.1. HTTP responses HTTP code Reponse body 200 - OK LeaseList schema 401 - Unauthorized Empty 9.2.2. 
/apis/coordination.k8s.io/v1/watch/leases HTTP method GET Description watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. Table 9.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.3. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases HTTP method DELETE Description delete collection of Lease Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Lease Table 9.5. HTTP responses HTTP code Reponse body 200 - OK LeaseList schema 401 - Unauthorized Empty HTTP method POST Description create a Lease Table 9.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.7. Body parameters Parameter Type Description body Lease schema Table 9.8. HTTP responses HTTP code Reponse body 200 - OK Lease schema 201 - Created Lease schema 202 - Accepted Lease schema 401 - Unauthorized Empty 9.2.4. /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases HTTP method GET Description watch individual changes to a list of Lease. deprecated: use the 'watch' parameter with a list operation instead. Table 9.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 9.2.5. /apis/coordination.k8s.io/v1/namespaces/{namespace}/leases/{name} Table 9.10. Global path parameters Parameter Type Description name string name of the Lease HTTP method DELETE Description delete a Lease Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.12. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Lease Table 9.13. HTTP responses HTTP code Reponse body 200 - OK Lease schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Lease Table 9.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.15. HTTP responses HTTP code Reponse body 200 - OK Lease schema 201 - Created Lease schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Lease Table 9.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.17. Body parameters Parameter Type Description body Lease schema Table 9.18. HTTP responses HTTP code Reponse body 200 - OK Lease schema 201 - Created Lease schema 401 - Unauthorized Empty 9.2.6. /apis/coordination.k8s.io/v1/watch/namespaces/{namespace}/leases/{name} Table 9.19. Global path parameters Parameter Type Description name string name of the Lease HTTP method GET Description watch changes to an object of kind Lease. 
deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 9.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty
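As a worked illustration of the schema above, and not part of the API reference itself, a minimal Lease manifest might look as follows; the name, namespace, holder identity, and duration are arbitrary placeholder values:

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: example-lease
  namespace: default
spec:
  holderIdentity: example-holder
  leaseDurationSeconds: 15

Applying the manifest with oc apply -f lease.yaml and reading it back with oc get lease example-lease -n default -o yaml shows the spec fields described in section 9.1.1.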
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/metadata_apis/lease-coordination-k8s-io-v1
Chapter 72. Kubernetes Event
Chapter 72. Kubernetes Event Since Camel 3.20 Both producer and consumer are supported The Kubernetes Event component is one of the Kubernetes Components which provides a producer to execute Kubernetes Event operations and a consumer to consume events related to Event objects. 72.1. Dependencies When using kubernetes-events with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 72.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 72.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 72.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 72.3. Component Options The Kubernetes Event component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 72.4. Endpoint Options The Kubernetes Event endpoint is configured using URI syntax: with the following path and query parameters: 72.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 72.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 72.5. Message Headers The Kubernetes Event component supports 14 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesEventsLabels (producer) Constant: KUBERNETES_EVENTS_LABELS The event labels. Map CamelKubernetesEventTime (producer) Constant: KUBERNETES_EVENT_TIME The event time in ISO-8601 extended offset date-time format, such as '2011-12-03T10:15:3001:00'. server time String CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventType (producer) Constant: KUBERNETES_EVENT_TYPE The event type. String CamelKubernetesEventReason (producer) Constant: KUBERNETES_EVENT_REASON The event reason. String CamelKubernetesEventNote (producer) Constant: KUBERNETES_EVENT_NOTE The event note. String CamelKubernetesEventRegarding (producer) Constant: KUBERNETES_EVENT_REGARDING The event regarding. ObjectReference CamelKubernetesEventRelated (producer) Constant: KUBERNETES_EVENT_RELATED The event related. ObjectReference CamelKubernetesEventReportingController (producer) Constant: KUBERNETES_EVENT_REPORTING_CONTROLLER The event reporting controller. String CamelKubernetesEventReportingInstance (producer) Constant: KUBERNETES_EVENT_REPORTING_INSTANCE The event reporting instance. String CamelKubernetesEventName (producer) Constant: KUBERNETES_EVENT_NAME The event name. String CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 72.6. Supported producer operation listEvents listEventsByLabels getEvent createEvent updateEvent deleteEvent 72.7. Kubernetes Events Producer Examples listEvents: this operation lists the events. from("direct:list"). to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=listEvents"). to("mock:result"); This operation returns a list of events from your cluster. The type of the events is io.fabric8.kubernetes.api.model.events.v1.Event . To indicate from which namespace the events are expected, it is possible to set the message header CamelKubernetesNamespaceName . By default, the events of all namespaces are returned. listEventsByLabels: this operation lists the events selected by labels. 
from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENTS_LABELS, labels); } }); to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=listEventsByLabels"). to("mock:result"); This operation returns a list of events from your cluster that occurred in any namespaces, using a label selector (in the example above only expect events which have the label "key1" set to "value1" and the label "key2" set to "value2"). The type of the events is io.fabric8.kubernetes.api.model.events.v1.Event . This operation expects the message header CamelKubernetesEventsLabels to be set to a Map<String, String> where the key-value pairs represent the expected label names and values. getEvent: this operation gives a specific event. from("direct:get").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "test"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, "event1"); } }); to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=getEvent"). to("mock:result"); This operation returns the event matching the criteria from your cluster. The type of the event is io.fabric8.kubernetes.api.model.events.v1.Event . This operation expects two message headers which are CamelKubernetesNamespaceName and CamelKubernetesEventName , the first one needs to be set to the name of the target namespace and second one needs to be set to the target name of event. If no matching event could be found, null is returned. createEvent: this operation creates a new event. from("direct:get").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, "test1"); Map<String, String> labels = new HashMap<>(); labels.put("this", "rocks"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENTS_LABELS, labels); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION_PRODUCER, "Some Action"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_TYPE, "Normal"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REASON, "Some Reason"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REPORTING_CONTROLLER, "Some-Reporting-Controller"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REPORTING_INSTANCE, "Some-Reporting-Instance"); } }); to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=createEvent"). to("mock:result"); This operation publishes a new event in your cluster. An event can be created in two ways either from message headers or directly from an io.fabric8.kubernetes.api.model.events.v1.EventBuilder . Whatever the way used to create the event: The operation expects two message headers which are CamelKubernetesNamespaceName and CamelKubernetesEventName , to set respectively the name of namespace and the name of the produced event. The operation supports the message header CamelKubernetesEventsLabels to set the labels to the produced event. 
The message headers that can be used to create an event are CamelKubernetesEventTime , CamelKubernetesEventAction , CamelKubernetesEventType , CamelKubernetesEventReason , CamelKubernetesEventNote , CamelKubernetesEventRegarding , CamelKubernetesEventRelated , CamelKubernetesEventReportingController and CamelKubernetesEventReportingInstance . In case the supported message headers are not enough for a specific use case, it is still possible to set the message body with an object of type io.fabric8.kubernetes.api.model.events.v1.EventBuilder representing a prefilled builder to use when creating the event. Please note that the labels, name of event and name of namespace are always set from the message headers, even when the builder is provided. updateEvent: this operation updates an existing event. The behavior is exactly the same as createEvent , only the name of the operation is different. deleteEvent: this operation deletes an existing event. from("direct:get").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, "test1"); } }); to("kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=deleteEvent"). to("mock:result"); This operation removes an existing event from your cluster. It returns a boolean to indicate whether the operation was successful or not. This operation expects two message headers which are CamelKubernetesNamespaceName and CamelKubernetesEventName , the first one needs to be set to the name of the target namespace and second one needs to be set to the target name of event. 72.8. Kubernetes Events Consumer Example fromF("kubernetes-events://%s?oauthToken=%s", host, authToken) .setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, constant("default")) .setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, constant("test")) .process(new KubernertesProcessor()).to("mock:result"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Event cm = exchange.getIn().getBody(Event.class); log.info("Got event with event name: " + cm.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a message per event received on the namespace "default" for the event "test". It also set the action ( io.fabric8.kubernetes.client.Watcher.Action ) in the message header CamelKubernetesEventAction and the timestamp ( long ) in the message header CamelKubernetesEventTimestamp . 72.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. 
String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
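To illustrate how these auto-configuration keys are used, the following application.yaml fragment is a minimal sketch, not a recommendation: the selected components, the #kubernetesClient bean name, and the chosen values are assumptions for illustration only.

# application.yaml - hedged sketch; values and the bean name are illustrative assumptions
camel:
  component:
    kubernetes-pods:
      # Reference an existing io.fabric8.kubernetes.client.KubernetesClient bean from the registry
      kubernetes-client: "#kubernetesClient"
      # Start the producer eagerly so connection problems surface at startup rather than on the first message
      lazy-start-producer: false
      # Route consumer exceptions to Camel's routing error handler instead of only logging them
      bridge-error-handler: true
    kubernetes-job:
      # Keep the component enabled explicitly (it is enabled by default)
      enabled: true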
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-events:masterUrl", "from(\"direct:list\"). to(\"kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=listEvents\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENTS_LABELS, labels); } }); to(\"kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=listEventsByLabels\"). to(\"mock:result\");", "from(\"direct:get\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, \"test\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, \"event1\"); } }); to(\"kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=getEvent\"). to(\"mock:result\");", "from(\"direct:get\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, \"default\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, \"test1\"); Map<String, String> labels = new HashMap<>(); labels.put(\"this\", \"rocks\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENTS_LABELS, labels); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION_PRODUCER, \"Some Action\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_TYPE, \"Normal\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REASON, \"Some Reason\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REPORTING_CONTROLLER, \"Some-Reporting-Controller\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_REPORTING_INSTANCE, \"Some-Reporting-Instance\"); } }); to(\"kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=createEvent\"). to(\"mock:result\");", "from(\"direct:get\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, \"default\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, \"test1\"); } }); to(\"kubernetes-events:///?kubernetesClient=#kubernetesClient&operation=deleteEvent\"). to(\"mock:result\");", "fromF(\"kubernetes-events://%s?oauthToken=%s\", host, authToken) .setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, constant(\"default\")) .setHeader(KubernetesConstants.KUBERNETES_EVENT_NAME, constant(\"test\")) .process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Event cm = exchange.getIn().getBody(Event.class); log.info(\"Got event with event name: \" + cm.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-event-component-starter
Chapter 12. Pacemaker Cluster Properties
Chapter 12. Pacemaker Cluster Properties Cluster properties control how the cluster behaves when confronted with situations that may occur during cluster operation. Table 12.1, "Cluster Properties" describes the cluster properties options. Section 12.2, "Setting and Removing Cluster Properties" describes how to set cluster properties. Section 12.3, "Querying Cluster Property Settings" describes how to list the currently set cluster properties. 12.1. Summary of Cluster Properties and Options Table 12.1, "Cluster Properties" summarizes the Pacemaker cluster properties, showing the default values of the properties and the possible values you can set for those properties. Note In addition to the properties described in this table, there are additional cluster properties that are exposed by the cluster software. For these properties, it is recommended that you not change their values from their defaults. Table 12.1. Cluster Properties Option Default Description batch-limit 0 The number of resource actions that the cluster is allowed to execute in parallel. The "correct" value will depend on the speed and load of your network and cluster nodes. migration-limit -1 (unlimited) The number of migration jobs that the cluster is allowed to execute in parallel on a node. no-quorum-policy stop What to do when the cluster does not have quorum. Allowed values: * ignore - continue all resource management * freeze - continue resource management, but do not recover resources from nodes not in the affected partition * stop - stop all resources in the affected cluster partition * suicide - fence all nodes in the affected cluster partition symmetric-cluster true Indicates whether resources can run on any node by default. stonith-enabled true Indicates that failed nodes and nodes with resources that cannot be stopped should be fenced. Protecting your data requires that you set this to true . If true , or unset, the cluster will refuse to start resources unless one or more STONITH resources have been configured also. stonith-action reboot Action to send to STONITH device. Allowed values: reboot , off . The value poweroff is also allowed, but is only used for legacy devices. cluster-delay 60s Round trip delay over the network (excluding action execution). The "correct" value will depend on the speed and load of your network and cluster nodes. stop-orphan-resources true Indicates whether deleted resources should be stopped. stop-orphan-actions true Indicates whether deleted actions should be canceled. start-failure-is-fatal true Indicates whether a failure to start a resource on a particular node prevents further start attempts on that node. When set to false , the cluster will decide whether to try starting on the same node again based on the resource's current failure count and migration threshold. For information on setting the migration-threshold option for a resource, see Section 8.2, "Moving Resources Due to Failure" . Setting start-failure-is-fatal to false incurs the risk that this will allow one faulty node that is unable to start a resource to hold up all dependent actions. This is why start-failure-is-fatal defaults to true . The risk of setting start-failure-is-fatal=false can be mitigated by setting a low migration threshold so that other actions can proceed after that many failures. pe-error-series-max -1 (all) The number of PE inputs resulting in ERRORs to save. Used when reporting problems. pe-warn-series-max -1 (all) The number of PE inputs resulting in WARNINGs to save. Used when reporting problems. 
pe-input-series-max -1 (all) The number of "normal" PE inputs to save. Used when reporting problems. cluster-infrastructure The messaging stack on which Pacemaker is currently running. Used for informational and diagnostic purposes; not user-configurable. dc-version Version of Pacemaker on the cluster's Designated Controller (DC). Used for diagnostic purposes; not user-configurable. last-lrm-refresh Last refresh of the Local Resource Manager, given in units of seconds since the epoch. Used for diagnostic purposes; not user-configurable. cluster-recheck-interval 15 minutes Polling interval for time-based changes to options, resource parameters and constraints. Allowed values: Zero disables polling, positive values are an interval in seconds (unless other SI units are specified, such as 5min). Note that this value is the maximum time between checks; if a cluster event occurs sooner than the time specified by this value, the check will be done sooner. maintenance-mode false Maintenance Mode tells the cluster to go to a "hands off" mode, and not start or stop any services until told otherwise. When maintenance mode is completed, the cluster does a sanity check of the current state of any services, and then stops or starts any that need it. shutdown-escalation 20min The time after which to give up trying to shut down gracefully and just exit. Advanced use only. stonith-timeout 60s How long to wait for a STONITH action to complete. stop-all-resources false Indicates whether the cluster should stop all resources. enable-acl false (Red Hat Enterprise Linux 7.1 and later) Indicates whether the cluster can use access control lists, as set with the pcs acl command. placement-strategy default Indicates whether and how the cluster will take utilization attributes into account when determining resource placement on cluster nodes. For information on utilization attributes and placement strategies, see Section 9.6, "Utilization and Placement Strategy" . fence-reaction stop (Red Hat Enterprise Linux 7.8 and later) Determines how a cluster node should react if notified of its own fencing. A cluster node may receive notification of its own fencing if fencing is misconfigured, or if fabric fencing is in use that does not cut cluster communication. Allowed values are stop to attempt to immediately stop Pacemaker and stay stopped, or panic to attempt to immediately reboot the local node, falling back to stop on failure.
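Sections 12.2 and 12.3 cover the pcs commands for managing these properties. As a brief, hedged sketch, the property and value chosen below are illustrative assumptions rather than recommendations; setting, removing, and inspecting a property from the table above looks like this:

# Set a cluster property from Table 12.1 (example value only)
pcs property set no-quorum-policy=freeze
# Remove the property so it reverts to its default value
pcs property unset no-quorum-policy
# List the currently set cluster properties
pcs property list
# Show the value of a single property
pcs property show no-quorum-policy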
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-clusteropts-HAAR
Chapter 1. Introduction to Red Hat Hybrid Cloud Console notifications
Chapter 1. Introduction to Red Hat Hybrid Cloud Console notifications Through the notifications service, Red Hat Hybrid Cloud Console services have a standardized way of notifying users of events. By setting up behavior groups, a Notifications administrator specifies the notification delivery method and whether event notifications are sent to all users on an account, specific users, or only to Organization Administrators. For example, the Notifications administrator can configure the service to send an email notification for new-recommendation hits on a system. Similarly, the Notifications administrator might decide to trigger a notification that sends a message to a third-party application using the webhook integration type. An Organization Administrator designates Notifications administrators by creating a User Access group with the Notifications administrator role, then adding account members to the group. A Notifications administrator then configures notification behavior groups that define actions taken when service-specific events occur. The notifications service transmits event-triggered notifications to users' email accounts or to third-party applications using webhooks. Users on the Hybrid Cloud Console account set their own preferences for receiving email notifications. In Settings > Notifications > Notification preferences , each user configures their personal settings to receive event notification emails as an instant notification or daily digest. Important Selecting Instant notification for any service can cause the recipient to receive a very large number of emails. 1.1. Hybrid Cloud Console notification service concepts Review key concepts to understand how the notifications service works: Table 1.1. Notifications concepts Concept Description Actions Operations are performed in response to an event,for example sending an email. Actions are defined in behavior groups that are configured by a Notifications administrator. Application bundle Application bundle refers to an application group within the Hybrid Cloud Console, such as Red Hat Enterprise Linux or OpenShift. Behavior groups Behavior groups determine what actions to take when an event occurs, and whether to notify all account users or only designated administrators. After a Notifications administrator creates a behavior group, they associate it with event types which enables Notifications administrators to apply the same actions to all application-specific events. NOTE: Notifications administrators configure notification behavior groups separately for each application bundle. Email preferences Individual users with access to applications on the Hybrid Cloud Console set their personal email preferences. Users can configure personal email notifications to arrive either instantly, as the event occurs, or consolidated into a daily digest that arrives at midnight, 00:00 Coordinated Universal Time (UTC), for all accounts. IMPORTANT: Selecting instant notification for any service can potentially result in the recipient receiving a very large number of emails. Event type Event types are application-specific system changes that trigger the application or service to initiate notification actions. Event types are created by application developers at Red Hat and are unique for each application bundle. Integrations Integrations define the method of delivery of notifications configured by the Notifications administrator. After integrations are configured, the notifications service sends the HTTP POST messages to endpoints. 
User access roles The following User Access roles interact with notifications: * Organization Administrator * Notifications administrator * Notifications viewer 1.2. Hybrid Cloud Console notifications methods You can use the following methods to integrate the Hybrid Cloud Console into your organization's workflows: Hybrid Cloud Console APIs Webhooks or emails, or both, directly to users Integrations with a third-party application, such as Splunk Hybrid Cloud Console APIs Hybrid Cloud Console APIs are publicly available and can be queried from any authenticated client (role-based access controlled). Webhooks Webhooks work in a similar way to APIs, except that they enable one-way data sharing when events trigger them. APIs share data in both directions. Third-party applications can be configured to allow inbound data requests by exposing webhooks and using them to listen for incoming events. The Hybrid Cloud Console integrations service uses this functionality to send events and associated data from each service. You can configure the Hybrid Cloud Console notifications service to send POST messages to those third-party application webhook endpoints. For example, you can configure the Hybrid Cloud Console to automatically forward events triggered when a new Advisor recommendation is found. The event and its data are sent as an HTTP POST message to the third-party application on its incoming webhook endpoint. After you configure the endpoints in the notifications service, you can subscribe to a stream of Hybrid Cloud Console events and automatically forward that stream to the webhooks of your choice. Each event contains additional metadata, which you can use to process the event, for example, to perform specific actions or trigger responses, as part of your operational workflow. You configure the implementation and data handling within your application. Third-party application integrations You can use Hybrid Cloud Console third-party application integrations in two ways, depending on your use case: Use Hybrid Cloud Console APIs to collect data and perform tasks. Subscribe to streams of Hybrid Cloud Console events. You can use Hybrid Cloud Console integrations to forward events to specific third-party applications. The Red Hat Insights application for Splunk forwards selected Hybrid Cloud Console events to Splunk. This allows you to view and use data from Hybrid Cloud Console in your existing workflows from the Red Hat Insights application for Splunk dashboard. Additional resources For more information about the available endpoints for applications and services, refer to the Hybrid Cloud Console API documentation . For an example of CSV-formatted API responses, see the System Comparison API Documentation . For examples to help you to get started quickly with authentication and with querying API endpoints, see Red Hat Insights API cheat sheet . For more information about how to configure and use webhooks, refer to Configure integrations . For information about security, see Red Hat Insights Data and Application Security . For more information about integrating third-party applications, see Integrating the Red Hat Hybrid Cloud Console with third-party applications .
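To make the webhook flow concrete, the following sketch shows a minimal HTTP listener that could receive the POST messages the notifications service sends. It is illustrative only: the port, path handling, and payload structure are assumptions, because the actual event schema depends on the application bundle and event type you configure, and a production integration would sit behind HTTPS with authentication.

# Minimal sketch of a third-party webhook receiver for Hybrid Cloud Console
# notifications. The port and payload handling are assumptions for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotificationHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body of the incoming event POST
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            event = json.loads(body or b"{}")
        except json.JSONDecodeError:
            self.send_response(400)
            self.end_headers()
            return
        # Log the whole event; a real integration would route it into a workflow
        print("Received Hybrid Cloud Console event:", json.dumps(event, indent=2))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Listen on an arbitrary local port; expose it securely in production
    HTTPServer(("0.0.0.0", 8080), NotificationHandler).serve_forever()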
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_notifications_on_the_red_hat_hybrid_cloud_console/assembly-intro_notifications
Chapter 31. Load balancing with MetalLB
Chapter 31. Load balancing with MetalLB 31.1. About MetalLB and the MetalLB Operator As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add an external IP address for the service. The external IP address is added to the host network for your cluster. 31.1.1. When to use MetalLB Using MetalLB is valuable when you have a bare-metal cluster, or an infrastructure that is like bare metal, and you want fault-tolerant access to an application through an external IP address. You must configure your networking infrastructure to ensure that network traffic for the external IP address is routed from clients to the host network for the cluster. After deploying MetalLB with the MetalLB Operator, when you add a service of type LoadBalancer , MetalLB provides a platform-native load balancer. MetalLB operating in layer2 mode provides support for failover by utilizing a mechanism similar to IP failover. However, instead of relying on the virtual router redundancy protocol (VRRP) and keepalived, MetalLB leverages a gossip-based protocol to identify instances of node failure. When a failover is detected, another node assumes the role of the leader node, and a gratuitous ARP message is dispatched to broadcast this change. MetalLB operating in layer3 or border gateway protocol (BGP) mode delegates failure detection to the network. The BGP router or routers that the OpenShift Container Platform nodes have established a connection with will identify any node failure and terminate the routes to that node. Using MetalLB instead of IP failover is preferable for ensuring high availability of pods and services. 31.1.2. MetalLB Operator custom resources The MetalLB Operator monitors its own namespace for the following custom resources: MetalLB When you add a MetalLB custom resource to the cluster, the MetalLB Operator deploys MetalLB on the cluster. The Operator only supports a single instance of the custom resource. If the instance is deleted, the Operator removes MetalLB from the cluster. IPAddressPool MetalLB requires one or more pools of IP addresses that it can assign to a service when you add a service of type LoadBalancer . An IPAddressPool includes a list of IP addresses. The list can be a single IP address that is set using a range, such as 1.1.1.1-1.1.1.1, a range specified in CIDR notation, a range specified as a starting and ending address separated by a hyphen, or a combination of the three. An IPAddressPool requires a name. The documentation uses names like doc-example , doc-example-reserved , and doc-example-ipv6 . An IPAddressPool assigns IP addresses from the pool. L2Advertisement and BGPAdvertisement custom resources enable the advertisement of a given IP from a given pool. Note A single IPAddressPool can be referenced by a L2 advertisement and a BGP advertisement. BGPPeer The BGP peer custom resource identifies the BGP router for MetalLB to communicate with, the AS number of the router, the AS number for MetalLB, and customizations for route advertisement. MetalLB advertises the routes for service load-balancer IP addresses to one or more BGP peers. BFDProfile The BFD profile custom resource configures Bidirectional Forwarding Detection (BFD) for a BGP peer. BFD provides faster path failure detection than BGP alone provides. L2Advertisement The L2Advertisement custom resource advertises an IP coming from an IPAddressPool using the L2 protocol. 
BGPAdvertisement The BGPAdvertisement custom resource advertises an IP coming from an IPAddressPool using the BGP protocol. After you add the MetalLB custom resource to the cluster and the Operator deploys MetalLB, the controller and speaker MetalLB software components begin running. MetalLB validates all relevant custom resources. 31.1.3. MetalLB software components When you install the MetalLB Operator, the metallb-operator-controller-manager deployment starts a pod. The pod is the implementation of the Operator. The pod monitors for changes to all the relevant resources. When the Operator starts an instance of MetalLB, it starts a controller deployment and a speaker daemon set. Note You can configure deployment specifications in the MetalLB custom resource to manage how controller and speaker pods deploy and run in your cluster. For more information about these deployment specifications, see the Additional resources section. controller The Operator starts the deployment and a single pod. When you add a service of type LoadBalancer , Kubernetes uses the controller to allocate an IP address from an address pool. In case of a service failure, verify you have the following entry in your controller pod logs: Example output "event":"ipAllocated","ip":"172.22.0.201","msg":"IP address assigned by controller speaker The Operator starts a daemon set for speaker pods. By default, a pod is started on each node in your cluster. You can limit the pods to specific nodes by specifying a node selector in the MetalLB custom resource when you start MetalLB. If the controller allocated the IP address to the service and service is still unavailable, read the speaker pod logs. If the speaker pod is unavailable, run the oc describe pod -n command. For layer 2 mode, after the controller allocates an IP address for the service, the speaker pods use an algorithm to determine which speaker pod on which node will announce the load balancer IP address. The algorithm involves hashing the node name and the load balancer IP address. For more information, see "MetalLB and external traffic policy". The speaker uses Address Resolution Protocol (ARP) to announce IPv4 addresses and Neighbor Discovery Protocol (NDP) to announce IPv6 addresses. For Border Gateway Protocol (BGP) mode, after the controller allocates an IP address for the service, each speaker pod advertises the load balancer IP address with its BGP peers. You can configure which nodes start BGP sessions with BGP peers. Requests for the load balancer IP address are routed to the node with the speaker that announces the IP address. After the node receives the packets, the service proxy routes the packets to an endpoint for the service. The endpoint can be on the same node in the optimal case, or it can be on another node. The service proxy chooses an endpoint each time a connection is established. 31.1.4. MetalLB and external traffic policy With layer 2 mode, one node in your cluster receives all the traffic for the service IP address. With BGP mode, a router on the host network opens a connection to one of the nodes in the cluster for a new client connection. How your cluster handles the traffic after it enters the node is affected by the external traffic policy. cluster This is the default value for spec.externalTrafficPolicy . With the cluster traffic policy, after the node receives the traffic, the service proxy distributes the traffic to all the pods in your service. 
This policy provides uniform traffic distribution across the pods, but it obscures the client IP address and it can appear to the application in your pods that the traffic originates from the node rather than the client. local With the local traffic policy, after the node receives the traffic, the service proxy only sends traffic to the pods on the same node. For example, if the speaker pod on node A announces the external service IP, then all traffic is sent to node A. After the traffic enters node A, the service proxy only sends traffic to pods for the service that are also on node A. Pods for the service that are on additional nodes do not receive any traffic from node A. Pods for the service on additional nodes act as replicas in case failover is needed. This policy does not affect the client IP address. Application pods can determine the client IP address from the incoming connections. Note The following information is important when configuring the external traffic policy in BGP mode. Although MetalLB advertises the load balancer IP address from all the eligible nodes, the number of nodes loadbalancing the service can be limited by the capacity of the router to establish equal-cost multipath (ECMP) routes. If the number of nodes advertising the IP is greater than the ECMP group limit of the router, the router will use less nodes than the ones advertising the IP. For example, if the external traffic policy is set to local and the router has an ECMP group limit set to 16 and the pods implementing a LoadBalancer service are deployed on 30 nodes, this would result in pods deployed on 14 nodes not receiving any traffic. In this situation, it would be preferable to set the external traffic policy for the service to cluster . 31.1.5. MetalLB concepts for layer 2 mode In layer 2 mode, the speaker pod on one node announces the external IP address for a service to the host network. From a network perspective, the node appears to have multiple IP addresses assigned to a network interface. Note In layer 2 mode, MetalLB relies on ARP and NDP. These protocols implement local address resolution within a specific subnet. In this context, the client must be able to reach the VIP assigned by MetalLB that exists on the same subnet as the nodes announcing the service in order for MetalLB to work. The speaker pod responds to ARP requests for IPv4 services and NDP requests for IPv6. In layer 2 mode, all traffic for a service IP address is routed through one node. After traffic enters the node, the service proxy for the CNI network provider distributes the traffic to all the pods for the service. Because all traffic for a service enters through a single node in layer 2 mode, in a strict sense, MetalLB does not implement a load balancer for layer 2. Rather, MetalLB implements a failover mechanism for layer 2 so that when a speaker pod becomes unavailable, a speaker pod on a different node can announce the service IP address. When a node becomes unavailable, failover is automatic. The speaker pods on the other nodes detect that a node is unavailable and a new speaker pod and node take ownership of the service IP address from the failed node. The preceding graphic shows the following concepts related to MetalLB: An application is available through a service that has a cluster IP on the 172.130.0.0/16 subnet. That IP address is accessible from inside the cluster. The service also has an external IP address that MetalLB assigned to the service, 192.168.100.200 . Nodes 1 and 3 have a pod for the application. 
The speaker daemon set runs a pod on each node. The MetalLB Operator starts these pods. Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network. The speaker pod on node 1 uses ARP to announce the external IP address for the service, 192.168.100.200 . The speaker pod that announces the external IP address must be on the same node as an endpoint for the service and the endpoint must be in the Ready condition. Client traffic is routed to the host network and connects to the 192.168.100.200 IP address. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service. If the external traffic policy for the service is set to cluster , the node that advertises the 192.168.100.200 load balancer IP address is selected from the nodes where a speaker pod is running. Only that node can receive traffic for the service. If the external traffic policy for the service is set to local , the node that advertises the 192.168.100.200 load balancer IP address is selected from the nodes where a speaker pod is running and at least an endpoint of the service. Only that node can receive traffic for the service. In the preceding graphic, either node 1 or 3 would advertise 192.168.100.200 . If node 1 becomes unavailable, the external IP address fails over to another node. On another node that has an instance of the application pod and service endpoint, the speaker pod begins to announce the external IP address, 192.168.100.200 and the new node receives the client traffic. In the diagram, the only candidate is node 3. 31.1.6. MetalLB concepts for BGP mode In BGP mode, by default each speaker pod advertises the load balancer IP address for a service to each BGP peer. It is also possible to advertise the IPs coming from a given pool to a specific set of peers by adding an optional list of BGP peers. BGP peers are commonly network routers that are configured to use the BGP protocol. When a router receives traffic for the load balancer IP address, the router picks one of the nodes with a speaker pod that advertised the IP address. The router sends the traffic to that node. After traffic enters the node, the service proxy for the CNI network plugin distributes the traffic to all the pods for the service. The directly-connected router on the same layer 2 network segment as the cluster nodes can be configured as a BGP peer. If the directly-connected router is not configured as a BGP peer, you need to configure your network so that packets for load balancer IP addresses are routed between the BGP peers and the cluster nodes that run the speaker pods. Each time a router receives new traffic for the load balancer IP address, it creates a new connection to a node. Each router manufacturer has an implementation-specific algorithm for choosing which node to initiate the connection with. However, the algorithms commonly are designed to distribute traffic across the available nodes for the purpose of balancing the network load. If a node becomes unavailable, the router initiates a new connection with another node that has a speaker pod that advertises the load balancer IP address. Figure 31.1. MetalLB topology diagram for BGP mode The preceding graphic shows the following concepts related to MetalLB: An application is available through a service that has an IPv4 cluster IP on the 172.130.0.0/16 subnet. 
That IP address is accessible from inside the cluster. The service also has an external IP address that MetalLB assigned to the service, 203.0.113.200 . Nodes 2 and 3 have a pod for the application. The speaker daemon set runs a pod on each node. The MetalLB Operator starts these pods. You can configure MetalLB to specify which nodes run the speaker pods. Each speaker pod is a host-networked pod. The IP address for the pod is identical to the IP address for the node on the host network. Each speaker pod starts a BGP session with all BGP peers and advertises the load balancer IP addresses or aggregated routes to the BGP peers. The speaker pods advertise that they are part of Autonomous System 65010. The diagram shows a router, R1, as a BGP peer within the same Autonomous System. However, you can configure MetalLB to start BGP sessions with peers that belong to other Autonomous Systems. All the nodes with a speaker pod that advertises the load balancer IP address can receive traffic for the service. If the external traffic policy for the service is set to cluster , all the nodes where a speaker pod is running advertise the 203.0.113.200 load balancer IP address and all the nodes with a speaker pod can receive traffic for the service. The host prefix is advertised to the router peer only if the external traffic policy is set to cluster. If the external traffic policy for the service is set to local , then all the nodes where a speaker pod is running and at least an endpoint of the service is running can advertise the 203.0.113.200 load balancer IP address. Only those nodes can receive traffic for the service. In the preceding graphic, nodes 2 and 3 would advertise 203.0.113.200 . You can configure MetalLB to control which speaker pods start BGP sessions with specific BGP peers by specifying a node selector when you add a BGP peer custom resource. Any routers, such as R1, that are configured to use BGP can be set as BGP peers. Client traffic is routed to one of the nodes on the host network. After traffic enters the node, the service proxy sends the traffic to the application pod on the same node or another node according to the external traffic policy that you set for the service. If a node becomes unavailable, the router detects the failure and initiates a new connection with another node. You can configure MetalLB to use a Bidirectional Forwarding Detection (BFD) profile for BGP peers. BFD provides faster link failure detection so that routers can initiate new connections earlier than without BFD. 31.1.7. Limitations and restrictions 31.1.7.1. Infrastructure considerations for MetalLB MetalLB is primarily useful for on-premise, bare metal installations because these installations do not include a native load-balancer capability. In addition to bare metal installations, installations of OpenShift Container Platform on some infrastructures might not include a native load-balancer capability. For example, the following infrastructures can benefit from adding the MetalLB Operator: Bare metal VMware vSphere MetalLB Operator and MetalLB are supported with the OpenShift SDN and OVN-Kubernetes network providers. 31.1.7.2. Limitations for layer 2 mode 31.1.7.2.1. Single-node bottleneck MetalLB routes all traffic for a service through a single node, the node can become a bottleneck and limit performance. Layer 2 mode limits the ingress bandwidth for your service to the bandwidth of a single node. This is a fundamental limitation of using ARP and NDP to direct traffic. 31.1.7.2.2. 
Slow failover performance Failover between nodes depends on cooperation from the clients. When a failover occurs, MetalLB sends gratuitous ARP packets to notify clients that the MAC address associated with the service IP has changed. Most client operating systems handle gratuitous ARP packets correctly and update their neighbor caches promptly. When clients update their caches quickly, failover completes within a few seconds. Clients typically fail over to a new node within 10 seconds. However, some client operating systems either do not handle gratuitous ARP packets at all or have outdated implementations that delay the cache update. Recent versions of common operating systems such as Windows, macOS, and Linux implement layer 2 failover correctly. Issues with slow failover are not expected except for older and less common client operating systems. To minimize the impact from a planned failover on outdated clients, keep the old node running for a few minutes after flipping leadership. The old node can continue to forward traffic for outdated clients until their caches refresh. During an unplanned failover, the service IPs are unreachable until the outdated clients refresh their cache entries. 31.1.7.2.3. Additional Network and MetalLB cannot use same network Using the same VLAN for both MetalLB and an additional network interface set up on a source pod might result in a connection failure. This occurs when both the MetalLB IP and the source pod reside on the same node. To avoid connection failures, place the MetalLB IP in a different subnet from the one where the source pod resides. This configuration ensures that traffic from the source pod will take the default gateway. Consequently, the traffic can effectively reach its destination by using the OVN overlay network, ensuring that the connection functions as intended. 31.1.7.3. Limitations for BGP mode 31.1.7.3.1. Node failure can break all active connections MetalLB shares a limitation that is common to BGP-based load balancing. When a BGP session terminates, such as when a node fails or when a speaker pod restarts, the session termination might result in resetting all active connections. End users can experience a Connection reset by peer message. The consequence of a terminated BGP session is implementation-specific for each router manufacturer. However, you can anticipate that a change in the number of speaker pods affects the number of BGP sessions and that active connections with BGP peers will break. To avoid or reduce the likelihood of a service interruption, you can specify a node selector when you add a BGP peer. By limiting the number of nodes that start BGP sessions, a fault on a node that does not have a BGP session has no effect on connections to the service. 31.1.7.3.2. Support for a single ASN and a single router ID only When you add a BGP peer custom resource, you specify the spec.myASN field to identify the Autonomous System Number (ASN) that MetalLB belongs to. OpenShift Container Platform uses an implementation of BGP with MetalLB that requires MetalLB to belong to a single ASN. If you attempt to add a BGP peer and specify a different value for spec.myASN than an existing BGP peer custom resource, you receive an error. Similarly, when you add a BGP peer custom resource, the spec.routerID field is optional. If you specify a value for this field, you must specify the same value for all other BGP peer custom resources that you add. 
The limitation to support a single ASN and single router ID is a difference with the community-supported implementation of MetalLB. 31.1.8. Additional resources Comparison: Fault tolerant access to external IP addresses Removing IP failover Deployment specifications for MetalLB 31.2. Installing the MetalLB Operator As a cluster administrator, you can add the MetallB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster. MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to remove IP failover before you install the Operator. 31.2.1. Installing the MetalLB Operator from the OperatorHub using the web console As a cluster administrator, you can install the MetalLB Operator by using the OpenShift Container Platform web console. Prerequisites Log in as a user with cluster-admin privileges. Procedure In the OpenShift Container Platform web console, navigate to Operators OperatorHub . Type a keyword into the Filter by keyword box or scroll to find the Operator you want. For example, type metallb to find the MetalLB Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. On the Install Operator page, accept the defaults and click Install . Verification To confirm that the installation is successful: Navigate to the Operators Installed Operators page. Check that the Operator is installed in the openshift-operators namespace and that its status is Succeeded . If the Operator is not installed successfully, check the status of the Operator and review the logs: Navigate to the Operators Installed Operators page and inspect the Status column for any errors or failures. Navigate to the Workloads Pods page and check the logs in any pods in the openshift-operators project that are reporting issues. 31.2.2. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. You can use the OpenShift CLI ( oc ) to install the MetalLB Operator. It is recommended that when using the CLI you install the Operator in the metallb-system namespace. Prerequisites A cluster installed on bare-metal hardware. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the MetalLB Operator by entering the following command: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF Create an Operator group custom resource (CR) in the namespace: USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system EOF Confirm the Operator group is installed in the namespace: USD oc get operatorgroup -n metallb-system Example output NAME AGE metallb-operator 14m Create a Subscription CR: Define the Subscription CR and save the YAML file, for example, metallb-sub.yaml : apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators 1 sourceNamespace: openshift-marketplace 1 You must specify the redhat-operators value. 
To create the Subscription CR, run the following command: USD oc create -f metallb-sub.yaml Optional: To ensure BGP and BFD metrics appear in Prometheus, you can label the namespace as in the following command: USD oc label ns metallb-system "openshift.io/cluster-monitoring=true" Verification The verification steps assume the MetalLB Operator is installed in the metallb-system namespace. Confirm the install plan is in the namespace: USD oc get installplan -n metallb-system Example output NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.12.0-nnnnnnnnnnnn Automatic true Note Installation of the Operator might take a few seconds. To verify that the Operator is installed, enter the following command: USD oc get clusterserviceversion -n metallb-system \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase metallb-operator.4.12.0-nnnnnnnnnnnn Succeeded 31.2.3. Starting MetalLB on your cluster After you install the Operator, you need to configure a single instance of a MetalLB custom resource. After you configure the custom resource, the Operator starts MetalLB on your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the MetalLB Operator. Procedure This procedure assumes the MetalLB Operator is installed in the metallb-system namespace. If you installed using the web console substitute openshift-operators for the namespace. Create a single instance of a MetalLB custom resource: USD cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF Verification Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running. Verify that the deployment for the controller is running: USD oc get deployment -n metallb-system controller Example output NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m Verify that the daemon set for the speaker is running: USD oc get daemonset -n metallb-system speaker Example output NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster. 31.2.4. Deployment specifications for MetalLB When you start an instance of MetalLB using the MetalLB custom resource, you can configure deployment specifications in the MetalLB custom resource to manage how the controller or speaker pods deploy and run in your cluster. Use these deployment specifications to manage the following tasks: Select nodes for MetalLB pod deployment. Manage scheduling by using pod priority and pod affinity. Assign CPU limits for MetalLB pods. Assign a container RuntimeClass for MetalLB pods. Assign metadata for MetalLB pods. 31.2.4.1. Limit speaker pods to specific nodes By default, when you start MetalLB with the MetalLB Operator, the Operator starts an instance of a speaker pod on each node in the cluster. Only the nodes with a speaker pod can advertise a load balancer IP address. You can configure the MetalLB custom resource with a node selector to specify which nodes run the speaker pods. The most common reason to limit the speaker pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses. 
Only the nodes with a running speaker pod are advertised as destinations of the load balancer IP address. If you limit the speaker pods to specific nodes and specify local for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes. Example configuration to limit speaker pods to worker nodes apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: <.> node-role.kubernetes.io/worker: "" speakerTolerations: <.> - key: "Example" operator: "Exists" effect: "NoExecute" <.> The example configuration specifies to assign the speaker pods to worker nodes, but you can specify labels that you assigned to nodes or any valid node selector. <.> In this example configuration, the pod that this toleration is attached to tolerates any taint that matches the key value and effect value using the operator . After you apply a manifest with the spec.nodeSelector field, you can check the number of pods that the Operator deployed with the oc get daemonset -n metallb-system speaker command. Similarly, you can display the nodes that match your labels with a command like oc get nodes -l node-role.kubernetes.io/worker= . You can optionally allow the node to control which speaker pods should, or should not, be scheduled on them by using affinity rules. You can also limit these pods by applying a list of tolerations. For more information about affinity rules, taints, and tolerations, see the additional resources. 31.2.4.2. Configuring pod priority and pod affinity in a MetalLB deployment You can optionally assign pod priority and pod affinity rules to controller and speaker pods by configuring the MetalLB custom resource. The pod priority indicates the relative importance of a pod on a node and schedules the pod based on this priority. Set a high priority on your controller or speaker pod to ensure scheduling priority over other pods on the node. Pod affinity manages relationships among pods. Assign pod affinity to the controller or speaker pods to control on what node the scheduler places the pod in the context of pod relationships. For example, you can use pod affinity rules to ensure that certain pods are located on the same node or nodes, which can help improve network communication and reduce latency between those components. Prerequisites You are logged in as a user with cluster-admin privileges. You have installed the MetalLB Operator. You have started the MetalLB Operator on your cluster. Procedure Create a PriorityClass custom resource, such as myPriorityClass.yaml , to configure the priority level. This example defines a PriorityClass named high-priority with a value of 1000000 . 
Pods that are assigned this priority class are considered higher priority during scheduling compared to pods with lower priority classes: apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000 Apply the PriorityClass custom resource configuration: USD oc apply -f myPriorityClass.yaml Create a MetalLB custom resource, such as MetalLBPodConfig.yaml , to specify the priorityClassName and podAffinity values: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: priorityClassName: high-priority 1 affinity: podAffinity: 2 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname speakerConfig: priorityClassName: high-priority affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname 1 Specifies the priority class for the MetalLB controller pods. In this case, it is set to high-priority . 2 Specifies that you are configuring pod affinity rules. These rules dictate how pods are scheduled in relation to other pods or nodes. This configuration instructs the scheduler to schedule pods that have the label app: metallb onto nodes that share the same hostname. This helps to co-locate MetalLB-related pods on the same nodes, potentially optimizing network communication, latency, and resource usage between these pods. Apply the MetalLB custom resource configuration: USD oc apply -f MetalLBPodConfig.yaml Verification To view the priority class that you assigned to pods in the metallb-system namespace, run the following command: USD oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName Example output NAME PRIORITY controller-584f5c8cd8-5zbvg high-priority metallb-operator-controller-manager-9c8d9985-szkqg <none> metallb-operator-webhook-server-c895594d4-shjgx <none> speaker-dddf7 high-priority To verify that the scheduler placed pods according to pod affinity rules, view the metadata for the pod's node or nodes by running the following command: USD oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system 31.2.4.3. Configuring pod CPU limits in a MetalLB deployment You can optionally assign pod CPU limits to controller and speaker pods by configuring the MetalLB custom resource. Defining CPU limits for the controller or speaker pods helps you to manage compute resources on the node. This ensures all pods on the node have the necessary compute resources to manage workloads and cluster housekeeping. Prerequisites You are logged in as a user with cluster-admin privileges. You have installed the MetalLB Operator. Procedure Create a MetalLB custom resource file, such as CPULimits.yaml , to specify the cpu value for the controller and speaker pods: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: resources: limits: cpu: "200m" speakerConfig: resources: limits: cpu: "300m" Apply the MetalLB custom resource configuration: USD oc apply -f CPULimits.yaml Verification To view compute resources for a pod, run the following command, replacing <pod_name> with your target pod: USD oc describe pod <pod_name> 31.2.5. Additional resources Placing pods on specific nodes using node selectors Understanding taints and tolerations Understanding pod priority Understanding pod affinity 31.2.6. 
steps Configuring MetalLB address pools 31.3. Upgrading the MetalLB If you are currently running version 4.10 or an earlier version of the MetalLB Operator, please note that automatic updates to any version later than 4.10 do not work. Upgrading to a newer version from any version of the MetalLB Operator that is 4.11 or later is successful. For example, upgrading from version 4.12 to version 4.13 will occur smoothly. A summary of the upgrade procedure for the MetalLB Operator from 4.10 and earlier is as follows: Delete the installed MetalLB Operator version for example 4.10. Ensure that the namespace and the metallb custom resource are not removed. Using the CLI, install the MetalLB Operator 4.12 in the same namespace where the version of the MetalLB Operator was installed. Note This procedure does not apply to automatic z-stream updates of the MetalLB Operator, which follow the standard straightforward method. For detailed steps to upgrade the MetalLB Operator from 4.10 and earlier, see the guidance that follows. As a cluster administrator, start the upgrade process by deleting the MetalLB Operator by using the OpenShift CLI ( oc ) or the web console. 31.3.1. Deleting the MetalLB Operator from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites Access to an OpenShift Container Platform cluster web console using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Search for the MetalLB Operator. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 31.3.2. Deleting MetalLB Operator from a cluster using the CLI Cluster administrators can delete installed Operators from a selected namespace by using the CLI. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. oc command installed on workstation. Procedure Check the current version of the subscribed MetalLB Operator in the currentCSV field: USD oc get subscription metallb-operator -n metallb-system -o yaml | grep currentCSV Example output currentCSV: metallb-operator.4.10.0-202207051316 Delete the subscription: USD oc delete subscription metallb-operator -n metallb-system Example output subscription.operators.coreos.com "metallb-operator" deleted Delete the CSV for the Operator in the target namespace using the currentCSV value from the step: USD oc delete clusterserviceversion metallb-operator.4.10.0-202207051316 -n metallb-system Example output clusterserviceversion.operators.coreos.com "metallb-operator.4.10.0-202207051316" deleted 31.3.3. Editing the MetalLB Operator Operator group When upgrading from any MetalLB Operator version up to and including 4.10 to 4.11 and later, remove spec.targetNamespaces from the Operator group custom resource (CR). 
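For illustration only, one possible non-interactive way to remove the field is a single oc patch command. This sketch assumes the example Operator group name metallb-system-7jc66 that appears in the output later in this section; the generated name in your cluster will differ. The documented procedure that follows uses oc edit instead.

USD oc patch operatorgroup metallb-system-7jc66 -n metallb-system --type=json -p '[{"op": "remove", "path": "/spec/targetNamespaces"}]'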
You must remove the spec regardless of whether you used the web console or the CLI to delete the MetalLB Operator. Note The MetalLB Operator version 4.11 or later only supports the AllNamespaces install mode, whereas 4.10 or earlier versions support OwnNamespace or SingleNamespace modes. Prerequisites You have access to an OpenShift Container Platform cluster with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). Procedure List the Operator groups in the metallb-system namespace by running the following command: USD oc get operatorgroup -n metallb-system Example output NAME AGE metallb-system-7jc66 85m Verify that the spec.targetNamespaces is present in the Operator group CR associated with the metallb-system namespace by running the following command: USD oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: "" creationTimestamp: "2023-10-25T09:42:49Z" generateName: metallb-system- generation: 1 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: "25027" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: targetNamespaces: - metallb-system upgradeStrategy: Default status: lastUpdated: "2023-10-25T09:42:49Z" namespaces: - metallb-system Edit the Operator group and remove the targetNamespaces field and the metallb-system entry under the spec section by running the following command: USD oc edit operatorgroup metallb-system-7jc66 -n metallb-system Example output operatorgroup.operators.coreos.com/metallb-system-7jc66 edited Verify that the spec.targetNamespaces field is removed from the Operator group custom resource associated with the metallb-system namespace by running the following command: USD oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: "" creationTimestamp: "2023-10-25T09:42:49Z" generateName: metallb-system- generation: 2 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: "61658" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: upgradeStrategy: Default status: lastUpdated: "2023-10-25T14:31:30Z" namespaces: - "" 31.3.4. Upgrading the MetalLB Operator Prerequisites Access the cluster as a user with the cluster-admin role. Procedure Verify that the metallb-system namespace still exists: USD oc get namespaces | grep metallb-system Example output metallb-system Active 31m Verify that the metallb custom resource still exists: USD oc get metallb -n metallb-system Example output NAME AGE metallb 33m Follow the guidance in "Installing from OperatorHub using the CLI" to install the latest 4.12 version of the MetalLB Operator. Note When installing the latest 4.12 version of the MetalLB Operator, you must install the Operator to the same namespace it was previously installed to. Verify that the upgraded version of the Operator is now the 4.12 version. USD oc get csv -n metallb-system Example output NAME DISPLAY VERSION REPLACES PHASE metallb-operator.4.12.0-202207051316 MetalLB Operator 4.12.0-202207051316 Succeeded 31.3.5. Additional resources Deleting Operators from a cluster Installing the MetalLB Operator 31.4. Configuring MetalLB address pools As a cluster administrator, you can add, modify, and delete address pools. The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services. The examples assume that the namespace is metallb-system. 31.4.1.
About the IPAddressPool custom resource Note The address pool custom resource definition (CRD) and API documented in "Load balancing with MetalLB" in OpenShift Container Platform 4.10 can still be used in 4.12. However, the enhanced functionality associated with advertising the IPAddressPools with layer 2 or the BGP protocol is not supported when using the address pool CRD. The fields for the IPAddressPool custom resource are described in the following table. Table 31.1. MetalLB IPAddressPool pool custom resource Field Type Description metadata.name string Specifies the name for the address pool. When you add a service, you can specify this pool name in the metallb.universe.tf/address-pool annotation to select an IP address from a specific pool. The names doc-example , silver , and gold are used throughout the documentation. metadata.namespace string Specifies the namespace for the address pool. Specify the same namespace that the MetalLB Operator uses. metadata.label string Optional: Specifies the key value pair assigned to the IPAddressPool . This can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement and L2Advertisement CRD to associate the IPAddressPool with the advertisement spec.addresses string Specifies a list of IP addresses for MetalLB Operator to assign to services. You can specify multiple ranges in a single pool; they will all share the same settings. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen. spec.autoAssign boolean Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. Specify false if you want explicitly request an IP address from this pool with the metallb.universe.tf/address-pool annotation. The default value is true . spec.avoidBuggyIPs boolean Optional: This ensures when enabled that IP addresses ending .0 and .255 are not allocated from the pool. The default value is false . Some older consumer network equipment mistakenly block IP addresses ending in .0 and .255. 31.4.2. Configuring an address pool As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Verification View the address pool: USD oc describe -n metallb-system IPAddressPool doc-example Example output Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: ... Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none> Confirm that the address pool name, such as doc-example , and the IP address ranges appear in the output. 31.4.3. Example address pool configurations 31.4.3.1. Example: IPv4 and CIDR ranges You can specify a range of IP addresses in CIDR notation. 
You can combine CIDR notation with the notation that uses a hyphen to separate lower and upper bounds. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5 31.4.3.2. Example: Reserve IP addresses You can set the autoAssign field to false to prevent MetalLB from automatically assigning the IP addresses from the pool. When you add a service, you can request a specific IP address from the pool or you can specify the pool name in an annotation to request any IP address from the pool. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false 31.4.3.3. Example: IPv4 and IPv6 addresses You can add address pools that use IPv4 and IPv6. You can specify multiple ranges in the addresses list, just like several IPv4 examples. Whether the service is assigned a single IPv4 address, a single IPv6 address, or both is determined by how you add the service. The spec.ipFamilies and spec.ipFamilyPolicy fields control how IP addresses are assigned to the service. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100 31.4.4. steps Configuring MetalLB with an L2 advertisement and label Configuring MetalLB BGP peers Configuring services to use MetalLB 31.5. About advertising for the IP address pools You can configure MetalLB so that the IP address is advertised with layer 2 protocols, the BGP protocol, or both. With layer 2, MetalLB provides a fault-tolerant external IP address. With BGP, MetalLB provides fault-tolerance for the external IP address and load balancing. MetalLB supports advertising using L2 and BGP for the same set of IP addresses. MetalLB provides the flexibility to assign address pools to specific BGP peers effectively to a subset of nodes on the network. This allows for more complex configurations, for example facilitating the isolation of nodes or the segmentation of the network. 31.5.1. About the BGPAdvertisement custom resource The fields for the BGPAdvertisements object are defined in the following table: Table 31.2. BGPAdvertisements configuration Field Type Description metadata.name string Specifies the name for the BGP advertisement. metadata.namespace string Specifies the namespace for the BGP advertisement. Specify the same namespace that the MetalLB Operator uses. spec.aggregationLength integer Optional: Specifies the number of bits to include in a 32-bit CIDR mask. To aggregate the routes that the speaker advertises to BGP peers, the mask is applied to the routes for several service IP addresses and the speaker advertises the aggregated route. For example, with an aggregation length of 24 , the speaker can aggregate several 10.0.1.x/32 service IP addresses and advertise a single 10.0.1.0/24 route. spec.aggregationLengthV6 integer Optional: Specifies the number of bits to include in a 128-bit CIDR mask. For example, with an aggregation length of 124 , the speaker can aggregate several fc00:f853:0ccd:e799::x/128 service IP addresses and advertise a single fc00:f853:0ccd:e799::0/124 route. spec.communities string Optional: Specifies one or more BGP communities. Each community is specified as two 16-bit values separated by the colon character. 
Well-known communities must be specified as 16-bit values: NO_EXPORT : 65535:65281 NO_ADVERTISE : 65535:65282 NO_EXPORT_SUBCONFED : 65535:65283 Note You can also use community objects that are created along with the strings. spec.localPref integer Optional: Specifies the local preference for this advertisement. This BGP attribute applies to BGP sessions within the Autonomous System. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors allows to limit the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. spec.peers string Optional: Use a list to specify the metadata.name values for each BGPPeer resource that receives advertisements for the MetalLB service IP address. The MetalLB service IP address is assigned from the IP address pool. By default, the MetalLB service IP address is advertised to all configured BGPPeer resources. Use this field to limit the advertisement to specific BGPpeer resources. 31.5.2. Configuring MetalLB with a BGP advertisement and a basic use case Configure MetalLB as follows so that the peer BGP routers receive one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. Because the localPref and communities fields are not specified, the routes are advertised with localPref set to zero and no BGP communities. 31.5.2.1. Example: Advertise a basic address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic Apply the configuration: USD oc apply -f bgpadvertisement.yaml 31.5.3. Configuring MetalLB with a BGP advertisement and an advanced use case Configure MetalLB as follows so that MetalLB assigns IP addresses to load-balancer services in the ranges between 203.0.113.200 and 203.0.113.203 and between fc00:f853:ccd:e799::0 and fc00:f853:ccd:e799::f . To explain the two BGP advertisements, consider an instance when MetalLB assigns the IP address of 203.0.113.200 to a service. With that IP address as an example, the speaker advertises two routes to BGP peers: 203.0.113.200/32 , with localPref set to 100 and the community set to the numeric value of the NO_ADVERTISE community. 
This specification indicates to the peer routers that they can use this route but they should not propagate information about this route to BGP peers. 203.0.113.200/30 , aggregates the load-balancer IP addresses assigned by MetalLB into a single route. MetalLB advertises the aggregated route to BGP peers with the community attribute set to 8000:800 . BGP peers propagate the 203.0.113.200/30 route to other BGP peers. When traffic is routed to a node with a speaker, the 203.0.113.200/32 route is used to forward the traffic into the cluster and to a pod that is associated with the service. As you add more services and MetalLB assigns more load-balancer IP addresses from the pool, peer routers receive one local route, 203.0.113.20x/32 , for each service, as well as the 203.0.113.200/30 aggregate route. Each service that you add generates the /30 route, but MetalLB deduplicates the routes to one BGP advertisement before communicating with peer routers. 31.5.3.1. Example: Advertise an advanced address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 31.5.4. Advertising an IP address pool from a subset of nodes To advertise an IP address from an IP addresses pool, from a specific set of nodes only, use the .spec.nodeSelector specification in the BGPAdvertisement custom resource. This specification associates a pool of IP addresses with a set of nodes in the cluster. This is useful when you have nodes on different subnets in a cluster and you want to advertise an IP addresses from an address pool from a specific subnet, for example a public-facing subnet only. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create an IP address pool by using a custom resource: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Control which nodes in the cluster the IP address from pool1 advertises from by defining the .spec.nodeSelector value in the BGPAdvertisement custom resource: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB In this example, the IP address from pool1 advertises from NodeA and NodeB only. 31.5.5. About the L2Advertisement custom resource The fields for the l2Advertisements object are defined in the following table: Table 31.3. L2 advertisements configuration Field Type Description metadata.name string Specifies the name for the L2 advertisement. metadata.namespace string Specifies the namespace for the L2 advertisement. Specify the same namespace that the MetalLB Operator uses. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors limits the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. Important Limiting the nodes to announce as hops is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . spec.interfaces string Optional: The list of interfaces that are used to announce the load balancer IP. 31.5.6. Configuring MetalLB with an L2 advertisement Configure MetalLB as follows so that the IPAddressPool is advertised with the L2 protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement. Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 Apply the configuration: USD oc apply -f l2advertisement.yaml 31.5.7. 
Configuring MetalLB with a L2 advertisement and label The ipAddressPoolSelectors field in the BGPAdvertisement and L2Advertisement custom resource definitions is used to associate the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. This example shows how to configure MetalLB so that the IPAddressPool is advertised with the L2 protocol by configuring the ipAddressPoolSelectors field. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement advertising the IP using ipAddressPoolSelectors . Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east Apply the configuration: USD oc apply -f l2advertisement.yaml 31.5.8. Configuring MetalLB with an L2 advertisement for selected interfaces By default, the IP address pool IP addresses that are assigned to the service are advertised from all the network interfaces. You can use the interfaces field in the L2Advertisement custom resource definition to restrict the network interfaces that advertise the addresses from a given IP address pool. This example shows how to configure MetalLB so that the IP address pool is advertised only from the network interfaces listed in the interfaces field of all nodes. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Create an IP address pool: Create a file, such as ipaddresspool.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool like the following example: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement advertising the IP with interfaces selector. Create a YAML file, such as l2advertisement.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB Apply the configuration for the advertisement like the following example: USD oc apply -f l2advertisement.yaml Important The interface selector does not affect how MetalLB chooses the node to announce a given IP by using L2. The chosen node does not announce the service if the node does not have the selected interface. 31.5.9. Additional resources Configuring a community alias . 31.6. Configuring MetalLB BGP peers As a cluster administrator, you can add, modify, and delete Border Gateway Protocol (BGP) peers. The MetalLB Operator uses the BGP peer custom resources to identify which peers that MetalLB speaker pods contact to start BGP sessions. 
The peers receive the route advertisements for the load-balancer IP addresses that MetalLB assigns to services. 31.6.1. About the BGP peer custom resource The fields for the BGP peer custom resource are described in the following table. Table 31.4. MetalLB BGP peer custom resource Field Type Description metadata.name string Specifies the name for the BGP peer custom resource. metadata.namespace string Specifies the namespace for the BGP peer custom resource. spec.myASN integer Specifies the Autonomous System number for the local end of the BGP session. Specify the same value in all BGP peer custom resources that you add. The range is 0 to 4294967295 . spec.peerASN integer Specifies the Autonomous System number for the remote end of the BGP session. The range is 0 to 4294967295 . spec.peerAddress string Specifies the IP address of the peer to contact for establishing the BGP session. spec.sourceAddress string Optional: Specifies the IP address to use when establishing the BGP session. The value must be an IPv4 address. spec.peerPort integer Optional: Specifies the network port of the peer to contact for establishing the BGP session. The range is 0 to 16384 . spec.holdTime string Optional: Specifies the duration for the hold time to propose to the BGP peer. The minimum value is 3 seconds ( 3s ). The common units are seconds and minutes, such as 3s , 1m , and 5m30s . To detect path failures more quickly, also configure BFD. spec.keepaliveTime string Optional: Specifies the maximum interval between sending keep-alive messages to the BGP peer. If you specify this field, you must also specify a value for the holdTime field. The specified value must be less than the value for the holdTime field. spec.routerID string Optional: Specifies the router ID to advertise to the BGP peer. If you specify this field, you must specify the same value in every BGP peer custom resource that you add. spec.password string Optional: Specifies the MD5 password to send to the peer for routers that enforce TCP MD5 authenticated BGP sessions. spec.passwordSecret string Optional: Specifies name of the authentication secret for the BGP Peer. The secret must live in the metallb namespace and be of type basic-auth. spec.bfdProfile string Optional: Specifies the name of a BFD profile. spec.nodeSelectors object[] Optional: Specifies a selector, using match expressions and match labels, to control which nodes can connect to the BGP peer. spec.ebgpMultiHop boolean Optional: Specifies that the BGP peer is multiple network hops away. If the BGP peer is not directly connected to the same network, the speaker cannot establish a BGP session unless this field is set to true . This field applies to external BGP . External BGP is the term that is used to describe when a BGP peer belongs to a different Autonomous System. Note The passwordSecret field is mutually exclusive with the password field, and contains a reference to a secret containing the password to use. Setting both fields results in a failure of the parsing. 31.6.2. Configuring a BGP peer As a cluster administrator, you can add a BGP peer custom resource to exchange routing information with network routers and advertise the IP addresses for services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Configure MetalLB with a BGP advertisement. 
Procedure Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml 31.6.3. Configure a specific set of BGP peers for a given address pool This procedure illustrates how to: Configure a set of address pools ( pool1 and pool2 ). Configure a set of BGP peers ( peer1 and peer2 ). Configure BGP advertisement to assign pool1 to peer1 and pool2 to peer2 . Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create address pool pool1 . Create a file, such as ipaddresspool1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Apply the configuration for the IP address pool pool1 : USD oc apply -f ipaddresspool1.yaml Create address pool pool2 . Create a file, such as ipaddresspool2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400 Apply the configuration for the IP address pool pool2 : USD oc apply -f ipaddresspool2.yaml Create BGP peer1 . Create a file, such as bgppeer1.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer1.yaml Create BGP peer2 . Create a file, such as bgppeer2.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer2: USD oc apply -f bgppeer2.yaml Create BGP advertisement 1. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create BGP advertisement 2. Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 31.6.4. Example BGP peer configurations 31.6.4.1. Example: Limit which nodes connect to a BGP peer You can specify the node selectors field to control which nodes can connect to a BGP peer. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com] 31.6.4.2. 
Example: Specify a BFD profile for a BGP peer You can specify a BFD profile to associate with BGP peers. BFD complements BGP by providing more rapid detection of communication failures between peers than BGP alone. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: "10s" bfdProfile: doc-example-bfd-profile-full Note Deleting the bidirectional forwarding detection (BFD) profile and removing the bfdProfile added to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. For more information, see BZ#2050824 . 31.6.4.3. Example: Specify BGP peers for dual-stack networking To support dual-stack networking, add one BGP peer custom resource for IPv4 and one BGP peer custom resource for IPv6. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500 31.6.5. Next steps Configuring services to use MetalLB 31.7. Configuring community alias As a cluster administrator, you can configure a community alias and use it across different advertisements. 31.7.1. About the community custom resource The community custom resource is a collection of aliases for communities. Users can define named aliases to be used when advertising ipAddressPools using the BGPAdvertisement . The fields for the community custom resource are described in the following table. Note The community CRD applies only to BGPAdvertisement. Table 31.5. MetalLB community custom resource Field Type Description metadata.name string Specifies the name for the community . metadata.namespace string Specifies the namespace for the community . Specify the same namespace that the MetalLB Operator uses. spec.communities string Specifies a list of BGP community aliases that can be used in BGPAdvertisements. A community alias consists of a pair of name (alias) and value (number:number). Link the BGPAdvertisement to a community alias by referring to the alias name in its spec.communities field. Table 31.6. CommunityAlias Field Type Description name string The name of the alias for the community . value string The BGP community value corresponding to the given name. 31.7.2. Configuring MetalLB with a BGP advertisement and community alias Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol and the community alias set to the numeric value of the NO_ADVERTISE community. In the following example, the peer BGP router doc-example-peer-community receives one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. A community alias is configured with the NO_ADVERTISE community. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool.
Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a community alias named community1 . apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282' Create a BGP peer named doc-example-bgp-peer . Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml Create a BGP advertisement with the community alias. Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer 1 Specify the CommunityAlias.name here and not the community custom resource (CR) name. Apply the configuration: USD oc apply -f bgpadvertisement.yaml 31.8. Configuring MetalLB BFD profiles As a cluster administrator, you can add, modify, and delete Bidirectional Forwarding Detection (BFD) profiles. The MetalLB Operator uses the BFD profile custom resources to identify which BGP sessions use BFD to provide faster path failure detection than BGP alone provides. 31.8.1. About the BFD profile custom resource The fields for the BFD profile custom resource are described in the following table. Table 31.7. BFD profile custom resource Field Type Description metadata.name string Specifies the name for the BFD profile custom resource. metadata.namespace string Specifies the namespace for the BFD profile custom resource. spec.detectMultiplier integer Specifies the detection multiplier to determine packet loss. The remote transmission interval is multiplied by this value to determine the connection loss detection timer. For example, when the local system has the detect multiplier set to 3 and the remote system has the transmission interval set to 300 , the local system detects failures only after 900 ms without receiving packets. The range is 2 to 255 . The default value is 3 . spec.echoMode boolean Specifies the echo transmission mode. If you are not using distributed BFD, echo transmission mode works only when the peer is also FRR. The default value is false and echo transmission mode is disabled. When echo transmission mode is enabled, consider increasing the transmission interval of control packets to reduce bandwidth usage. For example, consider increasing the transmit interval to 2000 ms. spec.echoInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send and receive echo packets. The range is 10 to 60000 . The default value is 50 ms. spec.minimumTtl integer Specifies the minimum expected TTL for an incoming control packet. This field applies to multi-hop sessions only. The purpose of setting a minimum TTL is to make the packet validation requirements more stringent and avoid receiving control packets from other sessions. 
The default value is 254 and indicates that the system expects only one hop between this system and the peer. spec.passiveMode boolean Specifies whether a session is marked as active or passive. A passive session does not attempt to start the connection. Instead, a passive session waits for control packets from a peer before it begins to reply. Marking a session as passive is useful when you have a router that acts as the central node of a star network and you want to avoid sending control packets that you do not need the system to send. The default value is false and marks the session as active. spec.receiveInterval integer Specifies the minimum interval that this system is capable of receiving control packets. The range is 10 to 60000 . The default value is 300 ms. spec.transmitInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send control packets. The range is 10 to 60000 . The default value is 300 ms. 31.8.2. Configuring a BFD profile As a cluster administrator, you can add a BFD profile and configure a BGP peer to use the profile. BFD provides faster path failure detection than BGP alone. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file, such as bfdprofile.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254 Apply the configuration for the BFD profile: USD oc apply -f bfdprofile.yaml 31.8.3. steps Configure a BGP peer to use the BFD profile. 31.9. Configuring services to use MetalLB As a cluster administrator, when you add a service of type LoadBalancer , you can control how MetalLB assigns an IP address. 31.9.1. Request a specific IP address Like some other load-balancer implementations, MetalLB accepts the spec.loadBalancerIP field in the service specification. If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address. If the requested IP address is not within any range, MetalLB reports a warning. Example service YAML for a specific IP address apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address> If MetalLB cannot assign the requested IP address, the EXTERNAL-IP for the service reports <pending> and running oc describe service <service_name> includes an event like the following example. Example event when MetalLB cannot assign a requested IP address ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config 31.9.2. Request an IP address from a specific pool To assign an IP address from a specific range, but you are not concerned with the specific IP address, then you can use the metallb.universe.tf/address-pool annotation to request an IP address from the specified address pool. 
Example service YAML for an IP address from a specific pool apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer If the address pool that you specify for <address_pool_name> does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment. 31.9.3. Accept any IP address By default, address pools are configured to permit automatic assignment. MetalLB assigns an IP address from these address pools. To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required. Example service YAML for accepting any IP address apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer 31.9.4. Share a specific IP address By default, services do not share IP addresses. However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the metallb.universe.tf/allow-shared-ip annotation to the services. apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" 5 spec: ports: - name: https port: 443 6 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 7 type: LoadBalancer loadBalancerIP: 172.31.249.7 8 1 5 Specify the same value for the metallb.universe.tf/allow-shared-ip annotation. This value is referred to as the sharing key . 2 6 Specify different port numbers for the services. 3 7 Specify identical pod selectors if you must specify externalTrafficPolicy: local so the services send traffic to the same set of pods. If you use the cluster external traffic policy, then the pod selectors do not need to be identical. 4 8 Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. To ensure that services share an IP address, specify the IP address to share. By default, Kubernetes does not allow multiprotocol load balancer services. This limitation would normally make it impossible to run a service like DNS that needs to listen on both TCP and UDP. To work around this limitation of Kubernetes with MetalLB, create two services: For one service, specify TCP and for the second service, specify UDP. In both services, specify the same pod selector. Specify the same sharing key and spec.loadBalancerIP value to colocate the TCP and UDP services on the same IP address. 31.9.5. Configuring a service with MetalLB You can configure a load-balancing service to use an external IP address from an address pool. Prerequisites Install the OpenShift CLI ( oc ). Install the MetalLB Operator and start MetalLB. Configure at least one address pool. Configure your network to route traffic from the clients to the host network for the cluster. Procedure Create a <service_name>.yaml file. In the file, ensure that the spec.type field is set to LoadBalancer . 
Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service. Create the service: USD oc apply -f <service_name>.yaml Example output service/<service_name> created Verification Describe the service: USD oc describe service <service_name> Example output <.> The annotation is present if you request an IP address from a specific pool. <.> The service type must indicate LoadBalancer . <.> The load-balancer ingress field indicates the external IP address if the service is assigned correctly. <.> The events field indicates the node name that is assigned to announce the external IP address. If you experience an error, the events field indicates the reason for the error. 31.10. MetalLB logging, troubleshooting, and support If you need to troubleshoot MetalLB configuration, see the following sections for commonly used commands. 31.10.1. Setting the MetalLB logging levels MetalLB uses FRRouting (FRR) in a container, and the default logging setting of info generates a large amount of log output. You can control the verbosity of the generated logs by setting the logLevel field, as illustrated in this example. To gain deeper insight into MetalLB, set the logLevel to debug as follows: Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a file, such as setdebugloglevel.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: "" Apply the configuration: USD oc replace -f setdebugloglevel.yaml Note Use oc replace because the metallb CR already exists and you are only changing the log level. Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s Note Speaker and controller pods are recreated to ensure the updated logging level is applied. The logging level is modified for all the components of MetalLB. View the speaker logs: USD oc logs -n metallb-system speaker-7m4qw -c speaker Example output View the FRR logs: USD oc logs -n metallb-system speaker-7m4qw -c frr Example output 31.10.1.1. FRRouting (FRR) log levels The following table describes the FRR logging levels. Table 31.8. Log levels Log level Description all Supplies all logging information for all logging levels. debug Information that is diagnostically helpful to people. Set to debug to give detailed troubleshooting information. info Provides information that should always be logged but under normal circumstances does not require user intervention. This is the default logging level. warn Anything that can potentially cause inconsistent MetalLB behavior. Usually MetalLB automatically recovers from this type of error. error Any error that is fatal to the functioning of MetalLB . These errors usually require administrator intervention to fix. none Turn off all logging. 31.10.2. Troubleshooting BGP issues The BGP implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. As a cluster administrator, if you need to troubleshoot BGP configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ).
Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m ... Display the running configuration for FRR: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show running-config" Example output <.> The router bgp section indicates the ASN for MetalLB. <.> Confirm that a neighbor <ip-address> remote-as <peer-ASN> line exists for each BGP peer custom resource that you added. <.> If you configured BFD, confirm that the BFD profile is associated with the correct BGP peer and that the BFD profile appears in the command output. <.> Confirm that the network <ip-address-range> lines match the IP address ranges that you specified in address pool custom resources that you added. Display the BGP summary: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp summary" Example output 1 1 3 Confirm that the output includes a line for each BGP peer custom resource that you added. 2 4 2 4 Output that shows 0 messages received and messages sent indicates a BGP peer that does not have a BGP session. Check network connectivity and the BGP configuration of the BGP peer. Display the BGP peers that received an address pool: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp ipv4 unicast 203.0.113.200/30" Replace ipv4 with ipv6 to display the BGP peers that received an IPv6 address pool. Replace 203.0.113.200/30 with an IPv4 or IPv6 IP address range from an address pool. Example output <.> Confirm that the output includes an IP address for a BGP peer. 31.10.3. Troubleshooting BFD issues The Bidirectional Forwarding Detection (BFD) implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. The BFD implementation relies on BFD peers also being configured as BGP peers with an established BGP session. As a cluster administrator, if you need to troubleshoot BFD configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m ... Display the BFD peers: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bfd peers brief" Example output <.> Confirm that the PeerAddress column includes each BFD peer. If the output does not list a BFD peer IP address that you expected the output to include, troubleshoot BGP connectivity with the peer. If the status field indicates down , check for connectivity on the links and equipment between the node and the peer. You can determine the node name for the speaker pod with a command like oc get pods -n metallb-system speaker-66bth -o jsonpath='{.spec.nodeName}' . 31.10.4. MetalLB metrics for BGP and BFD OpenShift Container Platform captures the following Prometheus metrics for MetalLB that relate to BGP peers and BFD profiles. metallb_bfd_control_packet_input counts the number of BFD control packets received from each BFD peer. metallb_bfd_control_packet_output counts the number of BFD control packets sent to each BFD peer. metallb_bfd_echo_packet_input counts the number of BFD echo packets received from each BFD peer. 
metallb_bfd_echo_packet_output counts the number of BFD echo packets sent to each BFD peer. metallb_bfd_session_down_events counts the number of times the BFD session with a peer entered the down state. metallb_bfd_session_up indicates the connection state with a BFD peer. 1 indicates the session is up and 0 indicates the session is down . metallb_bfd_session_up_events counts the number of times the BFD session with a peer entered the up state. metallb_bfd_zebra_notifications counts the number of BFD Zebra notifications for each BFD peer. metallb_bgp_announced_prefixes_total counts the number of load balancer IP address prefixes that are advertised to BGP peers. The terms prefix and aggregated route have the same meaning. metallb_bgp_session_up indicates the connection state with a BGP peer. 1 indicates the session is up and 0 indicates the session is down . metallb_bgp_updates_total counts the number of BGP update messages that were sent to a BGP peer. Additional resources See Querying metrics for information about using the monitoring dashboard. 31.10.5. About collecting MetalLB data You can use the oc adm must-gather CLI command to collect information about your cluster, your MetalLB configuration, and the MetalLB Operator. The following features and objects are associated with MetalLB and the MetalLB Operator: The namespace and child objects that the MetalLB Operator is deployed in All MetalLB Operator custom resource definitions (CRDs) The oc adm must-gather CLI command collects the following information from FRRouting (FRR) that Red Hat uses to implement BGP and BFD: /etc/frr/frr.conf /etc/frr/frr.log /etc/frr/daemons configuration file /etc/frr/vtysh.conf The log and configuration files in the preceding list are collected from the frr container in each speaker pod. In addition to the log and configuration files, the oc adm must-gather CLI command collects the output from the following vtysh commands: show running-config show bgp ipv4 show bgp ipv6 show bgp neighbor show bfd peer No additional configuration is required when you run the oc adm must-gather CLI command. Additional resources Gathering data about your cluster
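As an illustration of how the BGP and BFD metrics listed in this section can be consumed, the following PrometheusRule is a minimal sketch that raises an alert when a BGP session is reported as down. The resource name, alert name, severity label, and five-minute duration are assumptions for the example, and whether the rule is evaluated depends on how monitoring is configured for the metallb-system namespace in your cluster:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: metallb-bgp-alerts           # example name, not part of the MetalLB configuration
  namespace: metallb-system          # assumes metrics from this namespace are scraped
spec:
  groups:
  - name: metallb-bgp
    rules:
    - alert: MetalLBBGPSessionDown   # example alert name
      expr: metallb_bgp_session_up == 0   # 0 indicates that the session is down
      for: 5m                        # example duration before the alert fires
      labels:
        severity: warning
      annotations:
        summary: A MetalLB BGP session is down.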
[ "\"event\":\"ipAllocated\",\"ip\":\"172.22.0.201\",\"msg\":\"IP address assigned by controller", "cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: metallb-system EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system EOF", "oc get operatorgroup -n metallb-system", "NAME AGE metallb-operator 14m", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: stable name: metallb-operator source: redhat-operators 1 sourceNamespace: openshift-marketplace", "oc create -f metallb-sub.yaml", "oc label ns metallb-system \"openshift.io/cluster-monitoring=true\"", "oc get installplan -n metallb-system", "NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.12.0-nnnnnnnnnnnn Automatic true", "oc get clusterserviceversion -n metallb-system -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase metallb-operator.4.12.0-nnnnnnnnnnnn Succeeded", "cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF", "oc get deployment -n metallb-system controller", "NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m", "oc get daemonset -n metallb-system speaker", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: nodeSelector: <.> node-role.kubernetes.io/worker: \"\" speakerTolerations: <.> - key: \"Example\" operator: \"Exists\" effect: \"NoExecute\"", "apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority value: 1000000", "oc apply -f myPriorityClass.yaml", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: priorityClassName: high-priority 1 affinity: podAffinity: 2 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname speakerConfig: priorityClassName: high-priority affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: app: metallb topologyKey: kubernetes.io/hostname", "oc apply -f MetalLBPodConfig.yaml", "oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName", "NAME PRIORITY controller-584f5c8cd8-5zbvg high-priority metallb-operator-controller-manager-9c8d9985-szkqg <none> metallb-operator-webhook-server-c895594d4-shjgx <none> speaker-dddf7 high-priority", "oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug controllerConfig: resources: limits: cpu: \"200m\" speakerConfig: resources: limits: cpu: \"300m\"", "oc apply -f CPULimits.yaml", "oc describe pod <pod_name>", "oc get subscription metallb-operator -n metallb-system -o yaml | grep currentCSV", "currentCSV: metallb-operator.4.10.0-202207051316", "oc delete subscription metallb-operator -n metallb-system", "subscription.operators.coreos.com \"metallb-operator\" deleted", "oc delete clusterserviceversion metallb-operator.4.10.0-202207051316 -n metallb-system", "clusterserviceversion.operators.coreos.com \"metallb-operator.4.10.0-202207051316\" deleted", "oc get operatorgroup -n 
metallb-system", "NAME AGE metallb-system-7jc66 85m", "oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 1 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"25027\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: targetNamespaces: - metallb-system upgradeStrategy: Default status: lastUpdated: \"2023-10-25T09:42:49Z\" namespaces: - metallb-system", "oc edit n metallb-system", "operatorgroup.operators.coreos.com/metallb-system-7jc66 edited", "oc get operatorgroup metallb-system-7jc66 -n metallb-system -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: annotations: olm.providedAPIs: \"\" creationTimestamp: \"2023-10-25T09:42:49Z\" generateName: metallb-system- generation: 2 name: metallb-system-7jc66 namespace: metallb-system resourceVersion: \"61658\" uid: f5f644a0-eef8-4e31-a306-e2bbcfaffab3 spec: upgradeStrategy: Default status: lastUpdated: \"2023-10-25T14:31:30Z\" namespaces: - \"\"", "oc get namespaces | grep metallb-system", "metallb-system Active 31m", "oc get metallb -n metallb-system", "NAME AGE metallb 33m", "oc get csv -n metallb-system", "NAME DISPLAY VERSION REPLACES PHASE metallb-operator.4.12.0-202207051316 MetalLB Operator 4.12.0-202207051316 Succeeded", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75", "oc apply -f ipaddresspool.yaml", "oc describe -n metallb-system IPAddressPool doc-example", "Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none>", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic", "oc apply -f bgpadvertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100", "oc apply -f bgpadvertisement1.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: 
bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124", "oc apply -f bgpadvertisement2.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2", "oc apply -f l2advertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east", "oc apply -f l2advertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB", "oc apply -f l2advertisement.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400", "oc apply -f ipaddresspool1.yaml", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400", "oc apply -f ipaddresspool2.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer1.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer2.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100", "oc apply -f bgpadvertisement1.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100", "oc apply -f bgpadvertisement2.yaml", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel 
namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com]", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: \"10s\" bfdProfile: doc-example-bfd-profile-full", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124", "oc apply -f ipaddresspool.yaml", "apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282'", "apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10", "oc apply -f bgppeer.yaml", "apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer", "oc apply -f bgpadvertisement.yaml", "apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254", "oc apply -f bfdprofile.yaml", "apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for \"default/invalid-request\": \"4.3.2.1\" is not allowed in config", "apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer", "apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer", "apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 5 spec: ports: - name: https port: 443 6 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 7 type: LoadBalancer loadBalancerIP: 
172.31.249.7 8", "oc apply -f <service_name>.yaml", "service/<service_name> created", "oc describe service <service_name>", "Name: <service_name> Namespace: default Labels: <none> Annotations: metallb.universe.tf/address-pool: doc-example <.> Selector: app=service_name Type: LoadBalancer <.> IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.237.254 IPs: 10.105.237.254 LoadBalancer Ingress: 192.168.100.5 <.> Port: <unset> 80/TCP TargetPort: 8080/TCP NodePort: <unset> 30550/TCP Endpoints: 10.244.0.50:8080 Session Affinity: None External Traffic Policy: Cluster Events: <.> Type Reason Age From Message ---- ------ ---- ---- ------- Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node \"<node_name>\"", "apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc replace -f setdebugloglevel.yaml", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s", "oc logs -n metallb-system speaker-7m4qw -c speaker", "{\"branch\":\"main\",\"caller\":\"main.go:92\",\"commit\":\"3d052535\",\"goversion\":\"gc / go1.17.1 / amd64\",\"level\":\"info\",\"msg\":\"MetalLB speaker starting (commit 3d052535, branch main)\",\"ts\":\"2022-05-17T09:55:05Z\",\"version\":\"\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} I0517 09:55:06.515686 95 request.go:665] Waited for 1.026500832s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/operators.coreos.com/v1alpha1?timeout=32s {\"Starting Manager\":\"(MISSING)\",\"caller\":\"k8s.go:389\",\"level\":\"info\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"speakerlist.go:310\",\"level\":\"info\",\"msg\":\"node event - forcing sync\",\"node addr\":\"10.0.128.4\",\"node event\":\"NodeJoin\",\"node name\":\"ci-ln-qb8t3mb-72292-7s7rh-worker-a-vvznj\",\"ts\":\"2022-05-17T09:55:08Z\"} 
{\"caller\":\"service_controller.go:113\",\"controller\":\"ServiceReconciler\",\"enqueueing\":\"openshift-kube-controller-manager-operator/metrics\",\"epslice\":\"{\\\"metadata\\\":{\\\"name\\\":\\\"metrics-xtsxr\\\",\\\"generateName\\\":\\\"metrics-\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"uid\\\":\\\"ac6766d7-8504-492c-9d1e-4ae8897990ad\\\",\\\"resourceVersion\\\":\\\"9041\\\",\\\"generation\\\":4,\\\"creationTimestamp\\\":\\\"2022-05-17T07:16:53Z\\\",\\\"labels\\\":{\\\"app\\\":\\\"kube-controller-manager-operator\\\",\\\"endpointslice.kubernetes.io/managed-by\\\":\\\"endpointslice-controller.k8s.io\\\",\\\"kubernetes.io/service-name\\\":\\\"metrics\\\"},\\\"annotations\\\":{\\\"endpoints.kubernetes.io/last-change-trigger-time\\\":\\\"2022-05-17T07:21:34Z\\\"},\\\"ownerReferences\\\":[{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"name\\\":\\\"metrics\\\",\\\"uid\\\":\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\",\\\"controller\\\":true,\\\"blockOwnerDeletion\\\":true}],\\\"managedFields\\\":[{\\\"manager\\\":\\\"kube-controller-manager\\\",\\\"operation\\\":\\\"Update\\\",\\\"apiVersion\\\":\\\"discovery.k8s.io/v1\\\",\\\"time\\\":\\\"2022-05-17T07:20:02Z\\\",\\\"fieldsType\\\":\\\"FieldsV1\\\",\\\"fieldsV1\\\":{\\\"f:addressType\\\":{},\\\"f:endpoints\\\":{},\\\"f:metadata\\\":{\\\"f:annotations\\\":{\\\".\\\":{},\\\"f:endpoints.kubernetes.io/last-change-trigger-time\\\":{}},\\\"f:generateName\\\":{},\\\"f:labels\\\":{\\\".\\\":{},\\\"f:app\\\":{},\\\"f:endpointslice.kubernetes.io/managed-by\\\":{},\\\"f:kubernetes.io/service-name\\\":{}},\\\"f:ownerReferences\\\":{\\\".\\\":{},\\\"k:{\\\\\\\"uid\\\\\\\":\\\\\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\\\\\"}\\\":{}}},\\\"f:ports\\\":{}}}]},\\\"addressType\\\":\\\"IPv4\\\",\\\"endpoints\\\":[{\\\"addresses\\\":[\\\"10.129.0.7\\\"],\\\"conditions\\\":{\\\"ready\\\":true,\\\"serving\\\":true,\\\"terminating\\\":false},\\\"targetRef\\\":{\\\"kind\\\":\\\"Pod\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"name\\\":\\\"kube-controller-manager-operator-6b98b89ddd-8d4nf\\\",\\\"uid\\\":\\\"dd5139b8-e41c-4946-a31b-1a629314e844\\\",\\\"resourceVersion\\\":\\\"9038\\\"},\\\"nodeName\\\":\\\"ci-ln-qb8t3mb-72292-7s7rh-master-0\\\",\\\"zone\\\":\\\"us-central1-a\\\"}],\\\"ports\\\":[{\\\"name\\\":\\\"https\\\",\\\"protocol\\\":\\\"TCP\\\",\\\"port\\\":8443}]}\",\"level\":\"debug\",\"ts\":\"2022-05-17T09:55:08Z\"}", "oc logs -n metallb-system speaker-7m4qw -c frr", "Started watchfrr 2022/05/17 09:55:05 ZEBRA: client 16 says hello and bids fair to announce only bgp routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 31 says hello and bids fair to announce only vnc routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 38 says hello and bids fair to announce only static routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 43 says hello and bids fair to announce only bfd routes vrf=0 2022/05/17 09:57:25.089 BGP: Creating Default VRF, AS 64500 2022/05/17 09:57:25.090 BGP: dup addr detect enable max_moves 5 time 180 freeze disable freeze_time 0 2022/05/17 09:57:25.090 BGP: bgp_get: Registering BGP instance (null) to zebra 2022/05/17 09:57:25.090 BGP: Registering VRF 0 2022/05/17 09:57:25.091 BGP: Rx Router Id update VRF 0 Id 10.131.0.1/32 2022/05/17 09:57:25.091 BGP: RID change : vrf VRF default(0), RTR ID 10.131.0.1 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF br0 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ens4 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr 
10.0.128.4/32 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr fe80::c9d:84da:4d86:5618/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF lo 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ovs-system 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF tun0 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr 10.131.0.1/23 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr fe80::40f1:d1ff:feb6:5322/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2da49fed 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2da49fed addr fe80::24bd:d1ff:fec1:d88/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2fa08c8c 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2fa08c8c addr fe80::6870:ff:fe96:efc8/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth41e356b7 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth41e356b7 addr fe80::48ff:37ff:fede:eb4b/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth1295c6e2 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth1295c6e2 addr fe80::b827:a2ff:feed:637/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth9733c6dc 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth9733c6dc addr fe80::3cf4:15ff:fe11:e541/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth336680ea 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth336680ea addr fe80::94b1:8bff:fe7e:488c/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vetha0a907b7 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vetha0a907b7 addr fe80::3855:a6ff:fe73:46c3/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf35a4398 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf35a4398 addr fe80::40ef:2fff:fe57:4c4d/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf831b7f4 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf831b7f4 addr fe80::f0d9:89ff:fe7c:1d32/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vxlan_sys_4789 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vxlan_sys_4789 addr fe80::80c1:82ff:fe4b:f078/64 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Timer (start timer expire). 
2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] BGP_Start (Idle->Connect), fd -1 2022/05/17 09:57:26.094 BGP: Allocated bnc 10.0.0.1/32(0)(VRF default) peer 0x7f807f7631a0 2022/05/17 09:57:26.094 BGP: sendmsg_zebra_rnh: sending cmd ZEBRA_NEXTHOP_REGISTER for 10.0.0.1/32 (vrf VRF default) 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Waiting for NHT 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Connect established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Idle to Connect 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] TCP_connection_open_failed (Connect->Active), fd -1 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Active established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Connect to Active 2022/05/17 09:57:26.094 ZEBRA: rnh_register msg from client bgp: hdr->length=8, type=nexthop vrf=0 2022/05/17 09:57:26.094 ZEBRA: 0: Add RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: Evaluate RNH, type Nexthop (force) 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: NH has become unresolved 2022/05/17 09:57:26.094 ZEBRA: 0: Client bgp registers for RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 BGP: VRF default(0): Rcvd NH update 10.0.0.1/32(0) - metric 0/0 #nhops 0/0 flags 0x6 2022/05/17 09:57:26.094 BGP: NH update for 10.0.0.1/32(0)(VRF default) - flags 0x6 chgflags 0x0 - evaluate paths 2022/05/17 09:57:26.094 BGP: evaluate_paths: Updating peer (10.0.0.1(VRF default)) status with NHT 2022/05/17 09:57:30.081 ZEBRA: Event driven route-map update triggered 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-out 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-in 2022/05/17 09:57:31.104 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.104 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring 2022/05/17 09:57:31.105 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.105 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show running-config\"", "Building configuration Current configuration: ! frr version 7.5.1_git frr defaults traditional hostname some-hostname log file /etc/frr/frr.log informational log timestamp precision 3 service integrated-vtysh-config ! router bgp 64500 1 bgp router-id 10.0.1.2 no bgp ebgp-requires-policy no bgp default ipv4-unicast no bgp network import-check neighbor 10.0.2.3 remote-as 64500 2 neighbor 10.0.2.3 bfd profile doc-example-bfd-profile-full 3 neighbor 10.0.2.3 timers 5 15 neighbor 10.0.2.4 remote-as 64500 4 neighbor 10.0.2.4 bfd profile doc-example-bfd-profile-full 5 neighbor 10.0.2.4 timers 5 15 ! address-family ipv4 unicast network 203.0.113.200/30 6 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! address-family ipv6 unicast network fc00:f853:ccd:e799::/124 7 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! route-map 10.0.2.3-in deny 20 ! route-map 10.0.2.4-in deny 20 ! ip nht resolve-via-default ! ipv6 nht resolve-via-default ! 
line vty ! bfd profile doc-example-bfd-profile-full 8 transmit-interval 35 receive-interval 35 passive-mode echo-mode echo-interval 35 minimum-ttl 10 ! ! end", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp summary\"", "IPv4 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 0 1 1 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 2 Total number of neighbors 2 IPv6 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 NoNeg 3 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 4 Total number of neighbors 2", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp ipv4 unicast 203.0.113.200/30\"", "BGP routing table entry for 203.0.113.200/30 Paths: (1 available, best #1, table default) Advertised to non peer-group peers: 10.0.2.3 <.> Local 0.0.0.0 from 0.0.0.0 (10.0.1.2) Origin IGP, metric 0, weight 32768, valid, sourced, local, best (First path received) Last update: Mon Jan 10 19:49:07 2022", "oc get -n metallb-system pods -l component=speaker", "NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m", "oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bfd peers brief\"", "Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/load-balancing-with-metallb
Chapter 18. Completing initial setup
Chapter 18. Completing initial setup The Initial Setup window opens the first time you reboot your system after the installation process is complete, if you have selected the Server with GUI base environment during installation. If you have registered and installed RHEL from the CDN, the Subscription Manager option displays a note that all installed products are covered by valid entitlements. The information displayed in the Initial Setup window might vary depending on what was configured during installation. At a minimum, the Licensing and Subscription Manager options are displayed. Prerequisites You have completed the graphical installation. You have an active, non-evaluation Red Hat Enterprise Linux subscription. Procedure From the Initial Setup window, select Licensing Information . The License Agreement window opens and displays the licensing terms for Red Hat Enterprise Linux. Review the license agreement and select the I accept the license agreement checkbox. You must accept the license agreement to proceed. Exiting Initial Setup without accepting license agreement causes a system restart. When the restart process is complete, you are prompted to accept the license agreement again. Click Done to apply the settings and return to the Initial Setup window. Optional: Click Finish Configuration , if you did not configure network settings earlier as you cannot register your system immediately. Red Hat Enterprise Linux 8 starts and you can login, activate access to the network, and register your system. See Subscription manager post installation for more information. If you have configured network settings, as described in Network hostname , you can register your system immediately, as shown in the following steps. From the Initial Setup window, select Subscription Manager . The Subscription Manager graphical interface opens and displays the option you are going to register, which is: subscription.rhsm.redhat.com . To register with Activation Key, select I will use an activation key . For more information about how to view activation keys, see Creating and managing activation keys . Click . Do one of the following: If you selected to register by using the activation key, enter the Organization (your Organization ID) and Activation key . To manually attach the subscription, select the Manually attach subscriptions after registration option. If you are not using activation keys and manual registration, enter your Login and Password details. Enter System Name . Click Register . Confirm the Subscription details and click Attach . You must receive the following confirmation message: Registration with Red Hat Subscription Management is Done! Click Done . The Initial Setup window opens. Click Finish Configuration . The login window opens. Configure your system. See the Configuring basic system settings document for more information. Methods to register RHEL Depending on your requirements, there are five methods to register your system: Using the Red Hat Content Delivery Network (CDN) to register your system, attach RHEL subscriptions, and install Red Hat Enterprise Linux. During installation by using Initial Setup . After installation by using the command line. After installation by using the Subscription Manager user interface. After installation by using Registration Assistant. Registration Assistant is designed to help you choose the most suitable registration option for your Red Hat Enterprise Linux environment. See https://access.redhat.com/labs/registrationassistant/ for more information.
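For reference, registering from the command line with an activation key typically takes a form like the following. The organization ID and activation key values are placeholders, and the exact options that you need can vary with your subscription setup:
# Register the system by using an activation key (values are placeholders)
subscription-manager register --org=<organization_id> --activationkey=<activation_key>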
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/completing-initial-setup_rhel-installer
Chapter 1. New features and enhancements
Chapter 1. New features and enhancements 1.1. Jakarta EE Jakarta EE Core Profile Jakarta EE 10 Core Profile is now available in JBoss EAP XP 5.0.0. The Core Profile is a small, lightweight profile that provides Jakarta EE specifications suitable for smaller runtimes, such as microservices and cloud services. The Jakarta EE 10 Core Profile is available as a Galleon provisioning layer, ee-core-profile-server . An example configuration file standalone-core-microprofile.xml is provided with JBoss EAP XP 5.0.0 in the EAP_HOME/standalone/configuration directory. 1.2. MicroProfile MicroProfile Telemetry The JBoss EAP XP 5.0.0 provides support for MicroProfile Telemetry through the microprofile-telemetry subsystem. This subsystem builds on top of the existing OpenTelemetry subsystem and replaces MicroProfile OpenTracing to provide tracing functionality. For more information, see MicroProfile Telemetry in JBoss EAP and MicroProfile Telemetry administration in Using JBoss EAP XP 5.0. Configure root directories as ConfigSources You can now specify a root directory for multiple MicroProfile ConfigSource directories. This means that you do not need to define multiple ConfigSource directories if they share the same parent root directory. For more information, see Configuring root directories as ConfigSources in Using JBoss EAP XP 5.0. Support for MicroProfile Long Running Action (LRA) JBoss EAP XP provides MicroProfile Long Running Action (LRA), a standalone MicroProfile specification providing an API for distributed transactions handling based on the saga pattern. This provides a way for transaction handling without the need of taking locks on the data handled in the transaction. For more information, see EAP XP 5 - MicroProfile LRA with Narayana . Important MicroProfile Long Running Action is provided as Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Support for MicroProfile Reactive Messaging 3.0 JBoss EAP XP 5.0 includes an Advanced Message Queuing Protocol (AMQP) connector with MicroProfile Reactive Messaging. You can use the AMQP connector to connect with AMQ-compliant brokers for asynchronous messaging. For more information, see MicroProfile Reactive Messaging connectors . Red Hat AMQ Streams Using Red Hat AMQ Streams with MicroProfile Reactive Messaging on Red Hat OpenShift Container Platform is now fully supported. In the release, the support for Red Hat AMQ Streams with MicroProfile Reactive Messaging was technology preview. 1.3. Micrometer Support for Micrometer subsystem The JBoss EAP XP 5.0 provides support for Micrometer subsystem and replaces MicroProfile Metrics subsystem. The MeterRegistry of Micrometer is now accessible through CDI, allowing for the inclusion of application metrics with server and JVM metrics. This integration improves monitoring capabilities, providing a solution for tracking system and application performance metrics. For more information, see Micrometer in JBoss EAP and Micrometer administration in Using JBoss EAP XP 5.0. 1.4. 
Quickstarts Supported JBoss EAP XP 5.0 quickstarts All supported JBoss EAP XP 5.0 quickstarts are located at jboss-eap-quickstarts . The following quickstarts are supported and included with JBoss EAP XP 5.0: Quickstart Name Demonstrated Technologies Description Experience Level Required micrometer Micrometer The micrometer quickstart demonstrates the use of the Micrometer library in 5.0. Beginner microprofile-config MicroProfile Config The microprofile-config quickstart demonstrates the use of the MicroProfile Config specification in 5.0. Beginner microprofile-fault-tolerance MicroProfile, Fault Tolerance The microprofile-fault-tolerance quickstart demonstrates how to use Eclipse MicroProfile Fault Tolerance in 5.0. Intermediate microprofile-health MicroProfile Health The microprofile-health quickstart demonstrates the use of the MicroProfile Health specification in 5.0. Beginner microprofile-jwt JWT, Security, MicroProfile The microprofile-jwt quickstart demonstrates the use of the MicroProfile JWT specification in 5.0. Intermediate microprofile-lra MicroProfile LRA The microprofile-lra quickstart demonstrates the use of the MicroProfile LRA specification in 5.0. Beginner microprofile-openapi MicroProfile OpenAPI This guide demonstrate how to use the MicroProfile OpenAPI functionality in 5.0 to expose an OpenAPI document for a simple REST application. Beginner microprofile-reactive-messaging-kafka MicroProfile Reactive Messaging The microprofile-reactive-messaging-kafka quickstart demonstrates the use of the MicroProfile Reactive Messaging specification backed by Apache Kafka in 5.0. Beginner microprofile-rest-client MicroProfile REST Client The microprofile-rest-client quickstart demonstrates the use of the MicroProfile REST Client specification in 5.0. Beginner opentelemetry-tracing OpenTelemetry Tracing The opentelemetry-tracing quickstart demonstrates the use of the OpenTelemetry tracing specification in 5.0. Beginner The following quickstarts are not supported and not included with JBoss EAP XP 5.0: microprofile-metrics microprofile-opentracing todo-backend Important The JBoss EAP XP Quickstarts for Openshift are provided as Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_eap_xp_5.0_release_notes/new_features_and_enhancements
Chapter 7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure
Chapter 7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure You can remove a cluster that you deployed in your VMware vSphere instance by using installer-provisioned infrastructure. Note When you run the openshift-install destroy cluster command to uninstall OpenShift Container Platform, vSphere volumes are not automatically deleted. The cluster administrator must manually find the vSphere volumes and delete them. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program.
[ "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_vsphere/uninstalling-cluster-vsphere-installer-provisioned
Red Hat Single Sign-On for OpenShift on OpenJDK
Red Hat Single Sign-On for OpenShift on OpenJDK. Red Hat Single Sign-On 7.4. For use with Red Hat Single Sign-On 7.4. Red Hat Customer Content Services.
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/red_hat_single_sign-on_for_openshift_on_openjdk/index
Chapter 8. Preparing networks for RHOSO with NFV
Chapter 8. Preparing networks for RHOSO with NFV To prepare for configuring and deploying your Red Hat OpenStack Services on OpenShift (RHOSO) on a network functions virtualization (NFV) environment, you must configure the Red Hat OpenShift Container Platform (RHOCP) networks on your RHOCP cluster. 8.1. Default Red Hat OpenStack Services on OpenShift networks The following physical data center networks are typically implemented for a Red Hat OpenStack Services on OpenShift (RHOSO) deployment: Control plane network: This network is used by the OpenStack Operator for Ansible SSH access to deploy and connect to the data plane nodes from the Red Hat OpenShift Container Platform (RHOCP) environment. This network is also used by data plane nodes for live migration of instances. External network: (Optional) You can configure an external network if one is required for your environment. For example, you might create an external network for any of the following purposes: To provide virtual machine instances with Internet access. To create flat provider networks that are separate from the control plane. To configure VLAN provider networks on a separate bridge from the control plane. To provide access to virtual machine instances with floating IPs on a network other than the control plane network. Internal API network: This network is used for internal communication between RHOSO components. Storage network: This network is used for block storage, RBD, NFS, FC, and iSCSI. Tenant (project) network: This network is used for data communication between virtual machine instances within the cloud deployment. Storage Management network: (Optional) This network is used by storage components. For example, Red Hat Ceph Storage uses the Storage Management network in a hyperconverged infrastructure (HCI) environment as the cluster_network to replicate data. Note For more information on Red Hat Ceph Storage network configuration, see Ceph network configuration in the Red Hat Ceph Storage Configuration Guide . The following table details the default networks used in a RHOSO deployment. If required, you can update the networks for your environment. Note By default, the control plane and external networks do not use VLANs. Networks that do not use VLANs must be placed on separate NICs. You can use a VLAN for the control plane network on new RHOSO deployments. You can also use the Native VLAN on a trunked interface as the non-VLAN network. For example, you can have the control plane and the internal API on one NIC, and the external network with no VLAN on a separate NIC. Table 8.1. Default RHOSO networks Network name VLAN CIDR NetConfig allocationRange MetalLB IPAddressPool range net-attach-def ipam range OCP worker nncp range ctlplane n/a 192.168.122.0/24 192.168.122.100 - 192.168.122.250 192.168.122.80 - 192.168.122.90 192.168.122.30 - 192.168.122.70 192.168.122.10 - 192.168.122.20 external n/a 10.0.0.0/24 10.0.0.100 - 10.0.0.250 n/a n/a internalapi 20 172.17.0.0/24 172.17.0.100 - 172.17.0.250 172.17.0.80 - 172.17.0.90 172.17.0.30 - 172.17.0.70 172.17.0.10 - 172.17.0.20 storage 21 172.18.0.0/24 172.18.0.100 - 172.18.0.250 n/a 172.18.0.30 - 172.18.0.70 172.18.0.10 - 172.18.0.20 tenant 22 172.19.0.0/24 172.19.0.100 - 172.19.0.250 n/a 172.19.0.30 - 172.19.0.70 172.19.0.10 - 172.19.0.20 storageMgmt 23 172.20.0.0/24 172.20.0.100 - 172.20.0.250 n/a 172.20.0.30 - 172.20.0.70 172.20.0.10 - 172.20.0.20 8.2. 
NIC configurations for NFV The Red Hat OpenStack Services on OpenShift (RHOSO) nodes that host the data plane require one of the following NIC configurations: Single NIC configuration - One NIC for the provisioning network on the native VLAN and tagged VLANs that use subnets for the different data plane network types. Dual NIC configuration - One NIC for the provisioning network and the other NIC for the external network. Dual NIC configuration - One NIC for the provisioning network on the native VLAN, and the other NIC for tagged VLANs that use subnets for different data plane network types. Multiple NIC configuration - Each NIC uses a subnet for a different data plane network type. 8.3. Preparing RHOCP for RHOSO networks The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You use the NMState Operator to connect the worker nodes to the required isolated networks. You use the MetalLB Operator to expose internal service endpoints on the isolated networks. By default, the public service endpoints are exposed as RHOCP routes. Note The examples in the following procedures use IPv4 addresses. You can use IPv6 addresses instead of IPv4 addresses. Dual stack IPv4/6 is not available. For information about how to configure IPv6 addresses, see the following resources in the RHOCP Networking guide: Installing the Kubernetes NMState Operator Configuring MetalLB address pools 8.3.1. Preparing RHOCP with isolated network interfaces Create a NodeNetworkConfigurationPolicy ( nncp ) CR to configure the interfaces for each isolated network on each worker node in RHOCP cluster. Procedure Create a NodeNetworkConfigurationPolicy ( nncp ) CR file on your workstation, for example, openstack-nncp.yaml . Retrieve the names of the worker nodes in the RHOCP cluster: Discover the network configuration: Replace <worker_node> with the name of a worker node retrieved in step 2, for example, worker-1 . Repeat this step for each worker node. In the nncp CR file, configure the interfaces for each isolated network on each worker node in the RHOCP cluster. For information about the default physical data center networks that must be configured with network isolation, see Default Red Hat OpenStack Services on OpenShift networks . In the following example, the nncp CR configures the enp6s0 interface for worker node 1, osp-enp6s0-worker-1 , to use VLAN interfaces with IPv4 addresses for network isolation: Create the nncp CR in the cluster: Verify that the nncp CR is created: 8.3.2. Attaching service pods to the isolated networks Create a NetworkAttachmentDefinition ( net-attach-def ) custom resource (CR) for each isolated network to attach the service pods to the networks. Procedure Create a NetworkAttachmentDefinition ( net-attach-def ) CR file on your workstation, for example, openstack-net-attach-def.yaml . In the NetworkAttachmentDefinition CR file, configure a NetworkAttachmentDefinition resource for each isolated network to attach a service deployment pod to the network. The following examples create a NetworkAttachmentDefinition resource for the internalapi , storage , ctlplane , and tenant networks of type macvlan : 1 The namespace where the services are deployed. 2 The node interface name associated with the network, as defined in the nncp CR. 3 The whereabouts CNI IPAM plugin to assign IPs to the created pods from the range .30 - .70 . 
4 The IP address pool range must not overlap with the MetalLB IPAddressPool range and the NetConfig allocationRange . Create the NetworkAttachmentDefinition CR in the cluster: Verify that the NetworkAttachmentDefinition CR is created: 8.3.3. Preparing RHOCP for RHOSO network VIPS The Red Hat OpenStack Services on OpenShift (RHOSO) services run as a Red Hat OpenShift Container Platform (RHOCP) workload. You must create an L2Advertisement resource to define how the Virtual IPs (VIPs) are announced, and an IPAddressPool resource to configure which IPs can be used as VIPs. In layer 2 mode, one node assumes the responsibility of advertising a service to the local network. Procedure Create an IPAddressPool CR file on your workstation, for example, openstack-ipaddresspools.yaml . In the IPAddressPool CR file, configure an IPAddressPool resource on the isolated network to specify the IP address ranges over which MetalLB has authority: 1 The IPAddressPool range must not overlap with the whereabouts IPAM range and the NetConfig allocationRange . For information about how to configure the other IPAddressPool resource parameters, see Configuring MetalLB address pools in the RHOCP Networking guide. Create the IPAddressPool CR in the cluster: Verify that the IPAddressPool CR is created: Create a L2Advertisement CR file on your workstation, for example, openstack-l2advertisement.yaml . In the L2Advertisement CR file, configure L2Advertisement CRs to define which node advertises a service to the local network. Create one L2Advertisement resource for each network. In the following example, each L2Advertisement CR specifies that the VIPs requested from the network address pools are announced on the interface that is attached to the VLAN: 1 The interface where the VIPs requested from the VLAN address pool are announced. For information about how to configure the other L2Advertisement resource parameters, see Configuring MetalLB with a L2 advertisement and label in the RHOCP Networking guide. Create the L2Advertisement CRs in the cluster: Verify that the L2Advertisement CRs are created: If your cluster has OVNKubernetes as the network back end, then you must enable global forwarding so that MetalLB can work on a secondary network interface. Check the network back end used by your cluster: If the back end is OVNKubernetes, then run the following command to enable global IP forwarding: 8.4. Creating the data plane network To create the data plane network, you define a NetConfig custom resource (CR) and specify all the subnets for the data plane networks. You must define at least one control plane network for your data plane. You can also define VLAN networks to create network isolation for composable networks, such as InternalAPI , Storage , and External . Each network definition must include the IP address assignment. Tip Use the following commands to view the NetConfig CRD definition and specification schema: Procedure Create a file named openstack_netconfig.yaml on your workstation. Add the following configuration to openstack_netconfig.yaml to create the NetConfig CR: In the openstack_netconfig.yaml file, define the topology for each data plane network. To use the default Red Hat OpenStack Services on OpenShift (RHOSO) networks, you must define a specification for each network. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks . The following example creates isolated networks for the data plane: 1 The name of the network, for example, CtlPlane . 
2 The IPv4 subnet specification. 3 The name of the subnet, for example, subnet1 . 4 The NetConfig allocationRange . The allocationRange must not overlap with the MetalLB IPAddressPool range and the IP address pool range. 5 Optional: List of IP addresses from the allocation range that must not be used by data plane nodes. 6 The network VLAN. For information about the default RHOSO networks, see Default Red Hat OpenStack Services on OpenShift networks . Save the openstack_netconfig.yaml definition file. Create the data plane network: To verify that the data plane network is created, view the openstacknetconfig resource: If you see errors, check the underlying network-attach-definition and node network configuration policies:
[ "oc get nodes -l node-role.kubernetes.io/worker -o jsonpath=\"{.items[*].metadata.name}\"", "oc get nns/<worker_node> -o yaml | more", "apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: osp-enp6s0-worker-1 spec: desiredState: interfaces: - description: internalapi vlan interface ipv4: address: - ip: 172.17.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: internalapi state: up type: vlan vlan: base-iface: enp6s0 id: 20 reorder-headers: true - description: storage vlan interface ipv4: address: - ip: 172.18.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: storage state: up type: vlan vlan: base-iface: enp6s0 id: 21 reorder-headers: true - description: tenant vlan interface ipv4: address: - ip: 172.19.0.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false name: tenant state: up type: vlan vlan: base-iface: enp6s0 id: 22 reorder-headers: true - description: Configuring enp6s0 ipv4: address: - ip: 192.168.122.10 prefix-length: 24 enabled: true dhcp: false ipv6: enabled: false mtu: 1500 name: enp6s0 state: up type: ethernet nodeSelector: kubernetes.io/hostname: worker-1 node-role.kubernetes.io/worker: \"\"", "oc apply -f openstack-nncp.yaml", "oc get nncp -w NAME STATUS REASON osp-enp6s0-worker-1 Progressing ConfigurationProgressing osp-enp6s0-worker-1 Progressing ConfigurationProgressing osp-enp6s0-worker-1 Available SuccessfullyConfigured", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: internalapi namespace: openstack 1 spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"internalapi\", \"type\": \"macvlan\", \"master\": \"internalapi\", 2 \"ipam\": { 3 \"type\": \"whereabouts\", \"range\": \"172.17.0.0/24\", \"range_start\": \"172.17.0.30\", 4 \"range_end\": \"172.17.0.70\" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ctlplane namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"ctlplane\", \"type\": \"macvlan\", \"master\": \"enp6s0\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.168.122.0/24\", \"range_start\": \"192.168.122.30\", \"range_end\": \"192.168.122.70\" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: storage namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"storage\", \"type\": \"macvlan\", \"master\": \"storage\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.18.0.0/24\", \"range_start\": \"172.18.0.30\", \"range_end\": \"172.18.0.70\" } } --- apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: tenant namespace: openstack spec: config: | { \"cniVersion\": \"0.3.1\", \"name\": \"tenant\", \"type\": \"macvlan\", \"master\": \"tenant\", \"ipam\": { \"type\": \"whereabouts\", \"range\": \"172.19.0.0/24\", \"range_start\": \"172.19.0.30\", \"range_end\": \"172.19.0.70\" } }", "oc apply -f openstack-net-attach-def.yaml", "oc get net-attach-def -n openstack", "apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: internalapi namespace: metallb-system spec: addresses: - 172.17.0.80-172.17.0.90 1 autoAssign: true avoidBuggyIPs: false --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: ctlplane spec: addresses: - 192.168.122.80-192.168.122.90 autoAssign: true avoidBuggyIPs: false --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: storage spec: addresses: - 
172.18.0.80-172.18.0.90 autoAssign: true avoidBuggyIPs: false --- apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: tenant spec: addresses: - 172.19.0.80-172.19.0.90 autoAssign: true avoidBuggyIPs: false", "oc apply -f openstack-ipaddresspools.yaml", "oc describe -n metallb-system IPAddressPool", "apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: internalapi namespace: metallb-system spec: ipAddressPools: - internalapi interfaces: - internalapi 1 --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: ctlplane namespace: metallb-system spec: ipAddressPools: - ctlplane interfaces: - enp6s0 --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: storage namespace: metallb-system spec: ipAddressPools: - storage interfaces: - storage --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: tenant namespace: metallb-system spec: ipAddressPools: - tenant interfaces: - tenant", "oc apply -f openstack-l2advertisement.yaml", "oc get -n metallb-system L2Advertisement NAME IPADDRESSPOOLS IPADDRESSPOOL SELECTORS INTERFACES ctlplane [\"ctlplane\"] [\"enp6s0\"] internalapi [\"internalapi\"] [\"internalapi\"] storage [\"storage\"] [\"storage\"] tenant [\"tenant\"] [\"tenant\"]", "oc get network.operator cluster --output=jsonpath='{.spec.defaultNetwork.type}'", "oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipForwarding\": \"Global\"}}}}}' --type=merge", "oc describe crd netconfig oc explain netconfig.spec", "apiVersion: network.openstack.org/v1beta1 kind: NetConfig metadata: name: openstacknetconfig namespace: openstack", "spec: networks: - name: CtlPlane 1 dnsDomain: ctlplane.example.com subnets: 2 - name: subnet1 3 allocationRanges: 4 - end: 192.168.122.120 start: 192.168.122.100 - end: 192.168.122.200 start: 192.168.122.150 cidr: 192.168.122.0/24 gateway: 192.168.122.1 - name: InternalApi dnsDomain: internalapi.example.com subnets: - name: subnet1 allocationRanges: - end: 172.17.0.250 start: 172.17.0.100 excludeAddresses: 5 - 172.17.0.10 - 172.17.0.12 cidr: 172.17.0.0/24 vlan: 20 6 - name: External dnsDomain: external.example.com subnets: - name: subnet1 allocationRanges: - end: 10.0.0.250 start: 10.0.0.100 cidr: 10.0.0.0/24 gateway: 10.0.0.1 - name: Storage dnsDomain: storage.example.com subnets: - name: subnet1 allocationRanges: - end: 172.18.0.250 start: 172.18.0.100 cidr: 172.18.0.0/24 vlan: 21 - name: Tenant dnsDomain: tenant.example.com subnets: - name: subnet1 allocationRanges: - end: 172.19.0.250 start: 172.19.0.100 cidr: 172.19.0.0/24 vlan: 22", "oc create -f openstack_netconfig.yaml -n openstack", "oc get netconfig/openstacknetconfig -n openstack", "oc get network-attachment-definitions -n openstack oc get nncp" ]
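If the verification commands above succeed but pods still fail to attach to the isolated networks, it can help to confirm that the VLAN interfaces defined in the nncp CR actually exist on the worker node. This spot check is not part of the official procedure; the node name worker-1 and the interface names match the earlier nncp example and should be adjusted for your environment.

```bash
# List the VLAN interfaces on the worker node and confirm that
# internalapi (VLAN 20), storage (21), and tenant (22) sit on top of enp6s0.
oc debug node/worker-1 -- chroot /host ip -d link show type vlan

# Confirm the address assigned to one of the isolated interfaces.
oc debug node/worker-1 -- chroot /host ip addr show internalapi
```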
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/deploying_a_network_functions_virtualization_environment/assembly_preparing-RHOSO-networks
Chapter 19. Working with Operators
Chapter 19. Working with Operators Note Validate or Certify your operator image or necessary container image as a component before proceeding with partner validation or Red Hat Operator certification. All containers referenced in an Operator Bundle must already be validated or certified and published in the Red Hat Ecosystem Catalog prior to beginning to certify an Operator Bundle. 19.1. Introduction to Operators A Kubernetes operator is a method of packaging, deploying, and managing a Kubernetes application. Our Operator certification program ensures that the partner's operator is deployable by Operator Lifecycle Manager (OLM) on the OpenShift platform and is formatted properly, using Red Hat certified container images. Partner Validation - Select this type of certification, if you want to validate your product using your own criteria and test suite on Red Hat platforms. This partner validation allows you to publish your software offerings on the Red Hat Ecosystem Catalog more quickly. However, validated workloads may not incorporate all of Red Hat integration requirements and best practices. We encourage you to continue your efforts toward Red Hat certification. Certified - Select this type of certification, if you want your product to undergo thorough testing by using Red Hat's test suite, and benefit from collaborative support. Your products will meet your standards and Red Hat's criteria, including interoperability, lifecycle management, security, and support requirements. Products that meet the requirements and complete the certification workflow get listed on the Red Hat Ecosystem Catalog. Partners will receive a logo to promote their product certification. 19.2. Certification workflow for Operators Note Red Hat recommends that you are a Red Hat Certified Engineer or hold equivalent experience before starting the certification process. Task Summary The certification workflow includes three primary steps- Section 19.2.1, "Certification on-boarding for Operators" Section 19.2.2, "Certification testing for Operators" Section 19.2.3, "Publishing the certified Operator on the Red Hat Ecosystem Catalog" 19.2.1. Certification on-boarding for Operators Perform the steps outlined for certification onboarding: Join the Red Hat Connect for Technology Partner Program. Agree to the program terms and conditions. Create your product listing by selecting your desired product category. You can select from the available product categories: Containerized Application Standalone Application OpenStack Infrastructure Complete your company profile. Add components to the product listing. Certify components for your product listing. Additional resources For detailed instructions about creating your first product listing, see Creating a product . 19.2.2. Certification testing for Operators To run the certification test: Fork the Red Hat upstream repository. Install and run the Red Hat certification pipeline on your test environment. Review the test results and troubleshoot, if any issues. Submit the certification results to Red Hat through a pull request. If you want Red Hat to run all the tests then create a pull request. This triggers the certification hosted pipeline to run all the certification checks on Red Hat infrastructure. Note It is possible that some operator releases seemingly disappear from the catalog, which happens when the graph gets automatically pruned, resulting in some operator versions being excluded from the update graph. 
Because of that, you are blocked from releasing an operator bundle if the release would result in a channel with the same number of release versions as, or fewer than, the channel had before. If you want to prune the graph intentionally, you can do so by skipping a test and restarting the pipeline, using the following commands, available as comments in your pull request: /test skip <test_case_name> - the test_case_name test will be skipped. Note that only a subset of tests can be skipped. /pipeline restart certified-hosted-pipeline - the hosted pipeline will re-trigger. Additional resources For detailed instructions about certification testing, see Running the certification test suite. 19.2.3. Publishing the certified Operator on the Red Hat Ecosystem Catalog The Partner Validated or Certified Operator must be added to your product's Product Listing page on the Red Hat Partner Connect portal. Once published, your product listing is displayed on the Red Hat Ecosystem Catalog, using the product information that you provide. You can publish both the Partner Validated and the Certified Operator on the Red Hat Ecosystem Catalog with the respective labels. Additional resources For more details about operators, see: Operators Operator Framework Operator Capability Levels Packaging Applications and Services with Kubernetes Operators
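As a quick reference for the certification testing step described in Section 19.2.2, these are the two pull request comments used to control the hosted pipeline, exactly as they would be entered. The test case name is a placeholder; the available names depend on the certification test suite.

```
/test skip <test_case_name>
/pipeline restart certified-hosted-pipeline
```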
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/assembly_working-with-operators_openshift-sw-cert-workflow-publishing-the-certified-container
23.7. Integrating Identity Management Smart-card Authentication with Web Applications
23.7. Integrating Identity Management Smart-card Authentication with Web Applications As a developer whose applications use the Identity Management server as an authentication back end through the Identity Management web infrastructure Apache modules, you can configure the applications to enable authentication of users with multiple role accounts linked to their smart card. This enables these users to use the application under allowed role accounts. 23.7.1. Prerequisites for Web Application Authentication with Smart Cards On the server where the Apache web application is running: Enroll the server as a client in the Identity Management domain. Install the sssd-dbus and mod_lookup_identity packages. Make sure Apache has a working HTTPS connection configured using the mod_nss module. 23.7.2. Configuring Identity Management Smart-card Authentication for a Web Application Enable TLS renegotiation in the mod_nss configuration in the /etc/httpd/conf.d/nss.conf file: Make sure that the CA issuing the user certificates is trusted for the client certificates in the mod_nss certificate database. The default location for the database is /etc/httpd/alias . Add the web application. In this procedure, we are using an almost minimal example consisting of a login page and a protected area. The /login end point only lets the user provide a user name and sends the user to a protected part of the application. The /app end point checks the REMOTE_USER environment variable. If the login was successful, the variable contains the ID of the logged-in user. Otherwise, the variable is unset. Create a directory, and set its group to apache and the mode to at least 750 . In this procedure, we are using a directory named /var/www/app/ . Create a file, and set its group to apache and the mode to at least 750 . In this procedure, we are using a file named /var/www/app/login.py . Save the following contents to the file: Create a file, and set its group to apache and the mode to at least 750 . In this procedure, we are using a file named /var/www/app/protected.py . Save the following contents to the file: Create a configuration file for your application. In this procedure, we are using a file named /etc/httpd/conf.d/app.conf with the following contents: In this file: The first part loads mod_lookup_identity if it is not already loaded. The WSGIScriptAlias lines map the /login and /app end points to the respective Web Server Gateway Interface (WSGI) scripts. The last part configures mod_nss for the /app end point so that it requires a client certificate during the TLS handshake and uses it. In addition, it configures an optional request parameter username to look up the identity of the user. A hypothetical client-side check of the /app end point follows the file listings below.
[ "NSSRenegotiation NSSRequireSafeNegotiation on", "#! /usr/bin/env python def application(environ, start_response): status = '200 OK' response_body = \"\"\" <!DOCTYPE html> <html> <head> <title>Login</title> </head> <body> <form action='/app' method='get'> Username: <input type='text' name='username'> <input type='submit' value='Login with certificate'> </form> </body> </html> \"\"\" response_headers = [ ('Content-Type', 'text/html'), ('Content-Length', str(len(response_body))) ] start_response(status, response_headers) return [response_body]", "#! /usr/bin/env python def application(environ, start_response): try: user = environ['REMOTE_USER'] except KeyError: status = '400 Bad Request' response_body = 'Login failed.\\n' else: status = '200 OK' response_body = 'Login succeeded. Username: {}\\n'.format(user) response_headers = [ ('Content-Type', 'text/plain'), ('Content-Length', str(len(response_body))) ] start_response(status, response_headers) return [response_body]", "<IfModule !lookup_identity_module> LoadModule lookup_identity_module modules/mod_lookup_identity.so </IfModule> WSGIScriptAlias /login /var/www/app/login.py WSGIScriptAlias /app /var/www/app/protected.py <Location \"/app\"> NSSVerifyClient require NSSUserName SSL_CLIENT_CERT LookupUserByCertificate On LookupUserByCertificateParamName \"username\" </Location>" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/sc-integration-web-apps
Chapter 8. Summary
Chapter 8. Summary This document has provided only a general introduction to security for Red Hat Ceph Storage. Contact the Red Hat Ceph Storage consulting team for additional help.
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/data_security_and_hardening_guide/con-sec-summay-sec
Chapter 4. Packaging software
Chapter 4. Packaging software In the following sections, learn the basics of the packaging process with the RPM package manager. 4.1. Setting up RPM packaging workspace To build RPM packages, you must first create a special workspace that consists of directories used for different packaging purposes. 4.1.1. Configuring RPM packaging workspace To configure the RPM packaging workspace, you can set up a directory layout by using the rpmdev-setuptree utility. Prerequisites You installed the rpmdevtools package, which provides utilities for packaging RPMs: Procedure Run the rpmdev-setuptree utility: Additional resources RPM packaging workspace directories 4.1.2. RPM packaging workspace directories The following are the RPM packaging workspace directories created by using the rpmdev-setuptree utility: Table 4.1. RPM packaging workspace directories Directory Purpose BUILD Contains build artifacts compiled from the source files from the SOURCES directory. RPMS Binary RPMs are created under the RPMS directory in subdirectories for different architectures. For example, in the x86_64 or noarch subdirectory. SOURCES Contains compressed source code archives and patches. The rpmbuild command then searches for these archives and patches in this directory. SPECS Contains spec files created by the packager. These files are then used for building packages. SRPMS When you use the rpmbuild command to build an SRPM instead of a binary RPM, the resulting SRPM is created under this directory. 4.2. About spec files A spec file is a file with instructions that the rpmbuild utility uses to build an RPM package. This file provides necessary information to the build system by defining instructions in a series of sections. These sections are defined in the Preamble and the Body part of the spec file: The Preamble section contains a series of metadata items that are used in the Body section. The Body section represents the main part of the instructions. 4.2.1. Preamble items The following are some of the directives that you can use in the Preamble section of the RPM spec file. Table 4.2. The Preamble section directives Directive Definition Name A base name of the package that must match the spec file name. Version An upstream version number of the software. Release The number of times the version of the package was released. Set the initial value to 1%{?dist} and increase it with each new release of the package. Reset to 1 when a new Version of the software is built. Summary A brief one-line summary of the package. License A license of the software being packaged. The exact format for how to label the License in your spec file varies depending on which RPM-based Linux distribution guidelines you are following, for example, GPLv3+ . URL A full URL for more information about the software, for example, an upstream project website for the software being packaged. Source A path or URL to the compressed archive of the unpatched upstream source code. This link must point to an accessible and reliable storage of the archive, for example, the upstream page, not the packager's local storage. You can apply the Source directive either with or without numbers at the end of the directive name. If there is no number given, the number is assigned to the entry internally. You can also give the numbers explicitly, for example, Source0 , Source1 , Source2 , Source3 , and so on. Patch A name of the first patch to apply to the source code, if necessary. 
You can apply the Patch directive either with or without numbers at the end of the directive name. If there is no number given, the number is assigned to the entry internally. You can also give the numbers explicitly, for example, Patch0 , Patch1 , Patch2 , Patch3 , and so on. You can apply the patches individually by using the %patch0 , %patch1 , %patch2 macro, and so on. Macros are applied within the %prep directive in the Body section of the RPM spec file. Alternatively, you can use the %autopatch macro that automatically applies all patches in the order they are given in the spec file. BuildArch An architecture that the software will be built for. If the software is not architecture-dependent, for example, if you wrote the software entirely in an interpreted programming language, set the value to BuildArch: noarch . If you do not set this value, the software automatically inherits the architecture of the machine on which it is built, for example, x86_64 . BuildRequires A comma- or whitespace-separated list of packages required to build the program written in a compiled language. There can be multiple entries of BuildRequires , each on its own line in the SPEC file. Requires A comma- or whitespace-separated list of packages required by the software to run once installed. There can be multiple entries of Requires , each on its own line in the spec file. ExcludeArch If a piece of software cannot operate on a specific processor architecture, you can exclude this architecture in the ExcludeArch directive. Conflicts A comma- or whitespace-separated list of packages that must not be installed on the system in order for your software to function properly when installed. There can be multiple entries of Conflicts , each on its own line in the spec file. Obsoletes The Obsoletes directive changes the way updates work depending on the following factors: If you use the rpm command directly on a command line, it removes all packages that match obsoletes of packages being installed, or the update is performed by an updates or dependency solver. If you use the updates or dependency resolver ( YUM ), packages containing matching Obsoletes: are added as updates and replace the matching packages. Provides If you add the Provides directive to the package, this package can be referred to by dependencies other than its name. The Name , Version , and Release ( NVR ) directives comprise the file name of the RPM package in the name-version-release format. You can display the NVR information for a specific package by querying RPM database by using the rpm command, for example: Here, bash is the package name, 4.4.19 is the version, and 7.el8 is the release. The x86_64 marker is the package architecture. Unlike NVR , the architecture marker is not under direct control of the RPM packager, but is defined by the rpmbuild build environment. The exception to this is the architecture-independent noarch package. 4.2.2. Body items The following are the items used in the Body section of the RPM spec file. Table 4.3. The Body section items Directive Definition %description A full description of the software packaged in the RPM. This description can span multiple lines and can be broken into paragraphs. %prep A command or series of commands to prepare the software for building, for example, for unpacking the archive in the Source directive. The %prep directive can contain a shell script. 
%build A command or series of commands for building the software into machine code (for compiled languages) or bytecode (for some interpreted languages). %install A command or series of commands that the rpmbuild utility will use to install the software into the BUILDROOT directory once the software has been built. These commands copy the desired build artifacts from the %_builddir directory, where the build happens, to the %buildroot directory that contains the directory structure with the files to be packaged. This includes copying files from ~/rpmbuild/BUILD to ~/rpmbuild/BUILDROOT and creating the necessary directories in ~/rpmbuild/BUILDROOT . The %install directory is an empty chroot base directory, which resembles the end user's root directory. Here you can create any directories that will contain the installed files. To create such directories, you can use RPM macros without having to hardcode the paths. Note that %install is only run when you create a package, not when you install it. For more information, see Working with spec files . %check A command or series of commands for testing the software, for example, unit tests. %files A list of files, provided by the RPM package, to be installed in the user's system and their full path location on the system. During the build, if there are files in the %buildroot directory that are not listed in %files , you will receive a warning about possible unpackaged files. Within the %files section, you can indicate the role of various files by using built-in macros. This is useful for querying the package file manifest metadata by using the rpm command. For example, to indicate that the LICENSE file is a software license file, use the %license macro. %changelog A record of changes that happened to the package between different Version or Release builds. These changes include a list of date-stamped entries for each Version-Release of the package. These entries log packaging changes, not software changes, for example, adding a patch or changing the build procedure in the %build section. 4.2.3. Advanced items A spec file can contain advanced items, such as Scriptlets or Triggers . Scriptlets and Triggers take effect at different points during the installation process on the end user's system, not the build process. 4.3. BuildRoots In the context of RPM packaging, buildroot is a chroot environment. The build artifacts are placed here by using the same file system hierarchy as the future hierarchy in the end user's system, with buildroot acting as the root directory. The placement of build artifacts must comply with the file system hierarchy standard of the end user's system. The files in buildroot are later put into a cpio archive, which becomes the main part of the RPM. When RPM is installed on the end user's system, these files are extracted in the root directory, preserving the correct hierarchy. Note The rpmbuild program has its own defaults. Overriding these defaults can cause certain issues. Therefore, avoid defining your own value of the buildroot macro. Use the default %{buildroot} macro instead. 4.4. RPM macros An rpm macro is a straight text substitution that can be conditionally assigned based on the optional evaluation of a statement when certain built-in functionality is used. Therefore, RPM can perform text substitutions for you. For example, you can define Version of the packaged software only once in the %{version} macro, and use this macro throughout the spec file. 
Every occurrence is automatically substituted by Version that you defined in the macro. Note If you see an unfamiliar macro, you can evaluate it with the following command: For example, to evaluate the %{_bindir} and %{_libexecdir} macros, enter: Additional resources More on macros 4.5. Working with spec files To package new software, you must create a spec file. You can create the spec file either of the following ways: Write the new spec file manually from scratch. Use the rpmdev-newspec utility. This utility creates an unpopulated spec file, where you fill the necessary directives and fields. Note Some programmer-focused text editors pre-populate a new spec file with their own spec template. The rpmdev-newspec utility provides an editor-agnostic method. 4.5.1. Creating a new spec file for sample Bash, Python, and C programs You can create a spec file for each of the three implementations of the Hello World! program by using the rpmdev-newspec utility. Prerequisites The following Hello World! program implementations were placed into the ~/rpmbuild/SOURCES directory: bello-0.1.tar.gz pello-0.1.2.tar.gz cello-1.0.tar.gz ( cello-output-first-patch.patch ) Procedure Navigate to the ~/rpmbuild/SPECS directory: Create a spec file for each of the three implementations of the Hello World! program: The ~/rpmbuild/SPECS/ directory now contains three spec files named bello.spec , cello.spec , and pello.spec . Examine the created files. The directives in the files represent those described in About spec files . In the following sections, you will populate particular section in the output files of rpmdev-newspec . 4.5.2. Modifying an original spec file The original output spec file generated by the rpmdev-newspec utility represents a template that you must modify to provide necessary instructions for the rpmbuild utility. rpmbuild then uses these instructions to build an RPM package. Prerequisites The unpopulated ~/rpmbuild/SPECS/<name>.spec spec file was created by using the rpmdev-newspec utility. For more information, see Creating a new spec file for sample Bash, Python, and C programs . Procedure Open the ~/rpmbuild/SPECS/<name>.spec file provided by the rpmdev-newspec utility. Populate the following directives of the spec file Preamble section: Name Name was already specified as an argument to rpmdev-newspec . Version Set Version to match the upstream release version of the source code. Release Release is automatically set to 1%{?dist} , which is initially 1 . Summary Enter a one-line explanation of the package. License Enter the software license associated with the source code. URL Enter the URL to the upstream software website. For consistency, utilize the %{name} RPM macro variable and use the https://example.com/%{name} format. Source Enter the URL to the upstream software source code. Link directly to the software version being packaged. Note The example URLs in this documentation include hard-coded values that could possibly change in the future. Similarly, the release version can change as well. To simplify these potential future changes, use the %{name} and %{version} macros. By using these macros, you need to update only one field in the spec file. BuildRequires Specify build-time dependencies for the package. Requires Specify run-time dependencies for the package. BuildArch Specify the software architecture. Populate the following directives of the spec file Body section. 
You can think of these directives as section headings, because these directives can define multi-line, multi-instruction, or scripted tasks to occur. %description Enter the full description of the software. %prep Enter a command or series of commands to prepare software for building. %build Enter a command or series of commands for building software. %install Enter a command or series of commands that instruct the rpmbuild command on how to install the software into the BUILDROOT directory. %files Specify the list of files, provided by the RPM package, to be installed on your system. %changelog Enter the list of datestamped entries for each Version-Release of the package. Start the first line of the %changelog section with an asterisk ( * ) character followed by Day-of-Week Month Day Year Name Surname <email> - Version-Release . For the actual change entry, follow these rules: Each change entry can contain multiple items, one for each change. Each item starts on a new line. Each item begins with a hyphen ( - ) character. You have now written an entire spec file for the required program. Additional resources Preamble items Body items An example spec file for a sample Bash program An example spec file for a sample Python program An example spec file for a sample C program Building RPMs 4.5.3. An example spec file for a sample Bash program You can use the following example spec file for the bello program written in bash for your reference. An example spec file for the bello program written in bash The BuildRequires directive, which specifies build-time dependencies for the package, was deleted because there is no building step for bello . Bash is a raw interpreted programming language, and the files are just installed to their location on the system. The Requires directive, which specifies run-time dependencies for the package, includes only bash , because the bello script requires only the bash shell environment to execute. The %build section, which specifies how to build the software, is blank, because the bash script does not need to be built. Note To install bello , you must create the destination directory and install the executable bash script file there. Therefore, you can use the install command in the %install section. You can use RPM macros to do this without hardcoding paths. Additional resources What is source code 4.5.4. An example spec file for a sample Python program You can use the following example spec file for the pello program written in the Python programming language for your reference. An example spec file for the pello program written in Python The Requires directive, which specifies run-time dependencies for the package, includes two packages: The python package required to execute the byte-compiled code at runtime. The bash package required to execute the small entry-point script. The BuildRequires directive, which specifies build-time dependencies for the package, includes only the python package. The pello program requires python to perform the byte-compile build process. The %build section, which specifies how to build the software, creates a byte-compiled version of the script. Note that in real-world packaging, it is usually done automatically, depending on the distribution used. The %install section corresponds to the fact that you must install the byte-compiled file into a library directory on the system so that it can be accessed. This example of creating a wrapper script in-line in the spec file shows that the spec file itself is scriptable. 
This wrapper script executes the Python byte-compiled code by using the here document . Additional resources What is source code 4.5.5. An example spec file for a sample C program You can use the following example spec file for the cello program that was written in the C programming language for your reference. An example spec file for the cello program written in C The BuildRequires directive, which specifies build-time dependencies for the package, includes the following packages required to perform the compilation build process: gcc make The Requires directive, which specifies run-time dependencies for the package, is omitted in this example. All runtime requirements are handled by rpmbuild , and the cello program does not require anything outside of the core C standard libraries. The %build section reflects the fact that in this example the Makefile file for the cello program was written. Therefore, you can use the GNU make command. However, you must remove the call to %configure because you did not provide a configure script. You can install the cello program by using the %make_install macro. This is possible because the Makefile file for the cello program is available. Additional resources What is source code 4.6. Building RPMs You can build RPM packages by using the rpmbuild command. When using this command, a certain directory and file structure is expected, which is the same as the structure that was set up by the rpmdev-setuptree utility. Different use cases and desired outcomes require different combinations of arguments to the rpmbuild command. The following are the main use cases: Building source RPMs. Building binary RPMs: Rebuilding a binary RPM from a source RPM. Building a binary RPM from the spec file. 4.6.1. Building source RPMs Building a Source RPM (SRPM) has the following advantages: You can preserve the exact source of a certain Name-Version-Release of an RPM file that was deployed to an environment. This includes the exact spec file, the source code, and all relevant patches. This is useful for tracking and debugging purposes. You can build a binary RPM on a different hardware platform or architecture. Prerequisites You have installed the rpmbuild utility on your system: The following Hello World! implementations were placed into the ~/rpmbuild/SOURCES/ directory: bello-0.1.tar.gz pello-0.1.2.tar.gz cello-1.0.tar.gz ( cello-output-first-patch.patch ) A spec file for the program that you want to package exists. Procedure Navigate to the ~/rpmbuild/SPECS/ directive, which contains the created spec file: Build the source RPM by entering the rpmbuild command with the specified spec file: The -bs option stands for the build source . For example, to build source RPMs for the bello , pello , and cello programs, enter: Verification Verify that the rpmbuild/SRPMS directory includes the resulting source RPMs. The directory is a part of the structure expected by rpmbuild . Additional resources Working with spec files Creating a new spec file for sample Bash, C, and Python programs Modifying an original spec file 4.6.2. Rebuilding a binary RPM from a source RPM To rebuild a binary RPM from a source RPM (SRPM), use the rpmbuild command with the --rebuild option. The output generated when creating the binary RPM is verbose, which is helpful for debugging. The output varies for different examples and corresponds to their spec files. 
The resulting binary RPMs are located in the ~/rpmbuild/RPMS/YOURARCH directory, where YOURARCH is your architecture, or in the ~/rpmbuild/RPMS/noarch/ directory, if the package is not architecture-specific. Prerequisites You have installed the rpmbuild utility on your system: Procedure Navigate to the ~/rpmbuild/SRPMS/ directive, which contains the source RPM: Rebuild the binary RPM from the source RPM: Replace srpm with the name of the source RPM file. For example, to rebuild bello , pello , and cello from their SRPMs, enter: Note Invoking rpmbuild --rebuild involves the following processes: Installing the contents of the SRPM (the spec file and the source code) into the ~/rpmbuild/ directory. Building an RPM by using the installed contents. Removing the spec file and the source code. You can retain the spec file and the source code after building either of the following ways: When building the RPM, use the rpmbuild command with the --recompile option instead of the --rebuild option. Install SRPMs for bello , pello , and cello : 4.6.3. Building a binary RPM from the spec file To build a binary RPM from its spec file, use the rpmbuild command with the -bb option. Prerequisites You have installed the rpmbuild utility on your system: Procedure Navigate to the ~/rpmbuild/SPECS/ directive, which contains spec files: Build the binary RPM from its spec : For example, to build bello , pello , and cello binary RPMs from their spec files, enter: 4.7. Checking RPMs for common errors After creating a package, you might want to check the quality of the package. The main tool for checking package quality is rpmlint . With the rpmlint tool, you can perform the following actions: Improve RPM maintainability. Enable content validation by performing static analysis of the RPM. Enable error checking by performing static analysis of the RPM. You can use rpmlint to check binary RPMs, source RPMs (SRPMs), and spec files. Therefore, this tool is useful for all stages of packaging. Note that rpmlint has strict guidelines. Therefore, it is sometimes acceptable to skip some of its errors and warnings as shown in the following sections. Note In the examples described in the following sections, rpmlint is run without any options, which produces a non-verbose output. For detailed explanations of each error or warning, run rpmlint -i instead. 4.7.1. Checking a sample Bash program for common errors In the following sections, investigate possible warnings and errors that can occur when checking an RPM for common errors on the example of the bello spec file and bello binary RPM. 4.7.1.1. Checking the bello spec file for common errors Inspect the outputs of the following examples to learn how to check a bello spec file for common errors. Output of running the rpmlint command on the bello spec file For bello.spec , there is only one invalid-url Source0 warning. This warning means that the URL listed in the Source0 directive is unreachable. This is expected, because the specified example.com URL does not exist. Assuming that this URL will be valid in the future, you can ignore this warning. Output of running the rpmlint command on the bello SRPM For the bello SRPM, there is a new invalid-url URL warning that means that the URL specified in the URL directive is unreachable. Assuming that this URL will be valid in the future, you can ignore this warning. 4.7.1.2. 
Checking the bello binary RPM for common errors When checking binary RPMs, the rpmlint command checks the following items: Documentation Manual pages Consistent use of the filesystem hierarchy standard Inspect the outputs of the following example to learn how to check a bello binary RPM for common errors. Output of running the rpmlint command on the bello binary RPM The no-documentation and no-manual-page-for-binary warnings mean that the RPM has no documentation or manual pages, because you did not provide any. Apart from the output warnings, the RPM passed rpmlint checks. 4.7.2. Checking a sample Python program for common errors In the following sections, investigate possible warnings and errors that can occur when validating RPM content on the example of the pello spec file and pello binary RPM. 4.7.2.1. Checking the pello spec file for common errors Inspect the outputs of the following examples to learn how to check a pello spec file for common errors. Output of running the rpmlint command on the pello spec file The invalid-url Source0 warning means that the URL listed in the Source0 directive is unreachable. This is expected, because the specified example.com URL does not exist. Assuming that this URL will be valid in the future, you can ignore this warning. The hardcoded-library-path errors suggest using the %{_libdir} macro instead of hard-coding the library path. For the sake of this example, you can safely ignore these errors. However, for packages going into production, check all errors carefully. Output of running the rpmlint command on the SRPM for pello The invalid-url URL error means that the URL mentioned in the URL directive is unreachable. Assuming that this URL will be valid in the future, you can ignore this warning. 4.7.2.2. Checking the pello binary RPM for common errors When checking binary RPMs, the rpmlint command checks the following items: Documentation Manual pages Consistent use of the Filesystem Hierarchy Standard Inspect the outputs of the following example to learn how to check a pello binary RPM for common errors. Output of running the rpmlint command on the pello binary RPM The no-documentation and no-manual-page-for-binary warnings mean that the RPM has no documentation or manual pages because you did not provide any. The only-non-binary-in-usr-lib warning means that you provided only non-binary artifacts in the /usr/lib/ directory. This directory is typically used for shared object files, which are binary files. Therefore, rpmlint expects at least one or more files in /usr/lib/ to be binary. This is an example of an rpmlint check for compliance with Filesystem Hierarchy Standard. To ensure the correct placement of files, use RPM macros. For the sake of this example, you can safely ignore this warning. The non-executable-script error means that the /usr/lib/pello/pello.py file has no execute permissions. The rpmlint tool expects the file to be executable because the file contains the shebang ( #! ). For the purpose of this example, you can leave this file without execute permissions and ignore this error. Apart from the output warnings and errors, the RPM passed rpmlint checks. 4.7.3. Checking a sample C program for common errors In the following sections, investigate possible warnings and errors that can occur when validating RPM content on the example of the cello spec file and cello binary RPM. 4.7.3.1. Checking the cello spec file for common errors Inspect the outputs of the following examples to learn how to check a cello spec file for common errors. 
Output of running the rpmlint command on the cello spec file For cello.spec , there is only one invalid-url Source0 warning. This warning means that the URL listed in the Source0 directive is unreachable. This is expected because the specified example.com URL does not exist. Assuming that this URL will be valid in the future, you can ignore this warning. Output of running the rpmlint command on the cello SRPM For the cello SRPM, there is a new invalid-url URL warning. This warning means that the URL specified in the URL directive is unreachable. Assuming that this URL will be valid in the future, you can ignore this warning. 4.7.3.2. Checking the cello binary RPM for common errors When checking binary RPMs, the rpmlint command checks the following items: Documentation Manual pages Consistent use of the filesystem hierarchy standard Inspect the outputs of the following example to learn how to check a cello binary RPM for common errors. Output of running the rpmlint command on the cello binary RPM The no-documentation and no-manual-page-for-binary warnings mean that the RPM has no documentation or manual pages because you did not provide any. Apart from the output warnings, the RPM passed rpmlint checks. 4.8. Logging RPM activity to syslog You can log any RPM activity or transaction by using the System Logging protocol ( syslog ). Prerequisites The syslog plug-in is installed on the system: Note The default location for the syslog messages is the /var/log/messages file. However, you can configure syslog to use another location to store the messages. Procedure Open the file that you configured to store the syslog messages. Alternatively, if you use the default syslog configuration, open the /var/log/messages file. Search for new lines including the [RPM] string. 4.9. Extracting RPM content In some cases, for example, if a package required by RPM is damaged, you might need to extract the content of the package. In such cases, if an RPM installation is still working despite the damage, you can use the rpm2archive utility to convert an .rpm file to a tar archive to use the content of the package. Note If the RPM installation is severely damaged, you can use the rpm2cpio utility to convert the RPM package file to a cpio archive. Procedure Convert the RPM file to the tar archive: The resulting file has the .tgz suffix. For example, to create an archive from the bash package, enter:
[ "yum install rpmdevtools", "rpmdev-setuptree tree ~/rpmbuild/ /home/user/rpmbuild/ |-- BUILD |-- RPMS |-- SOURCES |-- SPECS `-- SRPMS 5 directories, 0 files", "rpm -q bash bash-4.4.19-7.el8.x86_64", "rpm --eval %{MACRO}", "rpm --eval %{_bindir} /usr/bin rpm --eval %{_libexecdir} /usr/libexec", "cd ~/rpmbuild/SPECS", "rpmdev-newspec bello bello.spec created; type minimal, rpm version >= 4.11. rpmdev-newspec cello cello.spec created; type minimal, rpm version >= 4.11. rpmdev-newspec pello pello.spec created; type minimal, rpm version >= 4.11.", "Name: bello Version: 0.1 Release: 1%{?dist} Summary: Hello World example implemented in bash script License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz Requires: bash BuildArch: noarch %description The long-tail description for our Hello World Example implemented in bash script. %prep %setup -q %build %install mkdir -p %{buildroot}/%{_bindir} install -m 0755 %{name} %{buildroot}/%{_bindir}/%{name} %files %license LICENSE %{_bindir}/%{name} %changelog * Tue May 31 2016 Adam Miller <[email protected]> - 0.1-1 - First bello package - Example second item in the changelog for version-release 0.1-1", "Name: pello Version: 0.1.1 Release: 1%{?dist} Summary: Hello World example implemented in Python License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz BuildRequires: python Requires: python Requires: bash BuildArch: noarch %description The long-tail description for our Hello World Example implemented in Python. %prep %setup -q %build python -m compileall %{name}.py %install mkdir -p %{buildroot}/%{_bindir} mkdir -p %{buildroot}/usr/lib/%{name} cat > %{buildroot}/%{_bindir}/%{name} <<EOF #!/bin/bash /usr/bin/python /usr/lib/%{name}/%{name}.pyc EOF chmod 0755 %{buildroot}/%{_bindir}/%{name} install -m 0644 %{name}.py* %{buildroot}/usr/lib/%{name}/ %files %license LICENSE %dir /usr/lib/%{name}/ %{_bindir}/%{name} /usr/lib/%{name}/%{name}.py* %changelog * Tue May 31 2016 Adam Miller <[email protected]> - 0.1.1-1 - First pello package", "Name: cello Version: 1.0 Release: 1%{?dist} Summary: Hello World example implemented in C License: GPLv3+ URL: https://www.example.com/%{name} Source0: https://www.example.com/%{name}/releases/%{name}-%{version}.tar.gz Patch0: cello-output-first-patch.patch BuildRequires: gcc BuildRequires: make %description The long-tail description for our Hello World Example implemented in C. %prep %setup -q %patch0 %build make %{?_smp_mflags} %install %make_install %files %license LICENSE %{_bindir}/%{name} %changelog * Tue May 31 2016 Adam Miller <[email protected]> - 1.0-1 - First cello package", "yum install rpm-build", "cd ~/rpmbuild/SPECS/", "rpmbuild -bs <specfile>", "rpmbuild -bs bello.spec Wrote: /home/admiller/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm rpmbuild -bs pello.spec Wrote: /home/admiller/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm rpmbuild -bs cello.spec Wrote: /home/admiller/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm", "yum install rpm-build", "cd ~/rpmbuild/SRPMS/", "rpmbuild --rebuild <srpm>", "rpmbuild --rebuild bello-0.1-1.el8.src.rpm [output truncated] rpmbuild --rebuild pello-0.1.2-1.el8.src.rpm [output truncated] rpmbuild --rebuild cello-1.0-1.el8.src.rpm [output truncated]", "rpm -Uvh ~/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm Updating / installing... 1:bello-0.1-1.el8 [100%] rpm -Uvh ~/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm Updating / installing... ... 
1:pello-0.1.2-1.el8 [100%] rpm -Uvh ~/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm Updating / installing... ... 1:cello-1.0-1.el8 [100%]", "yum install rpm-build", "cd ~/rpmbuild/SPECS/", "rpmbuild -bb <spec_file>", "rpmbuild -bb bello.spec rpmbuild -bb pello.spec rpmbuild -bb cello.spec", "rpmlint bello.spec bello.spec: W: invalid-url Source0 : https://www.example.com/bello/releases/bello-0.1.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 0 errors, 1 warnings.", "rpmlint ~/rpmbuild/SRPMS/bello-0.1-1.el8.src.rpm bello.src: W: invalid-url URL : https://www.example.com/bello HTTP Error 404: Not Found bello.src: W: invalid-url Source0: https://www.example.com/bello/releases/bello-0.1.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 0 errors, 2 warnings.", "rpmlint ~/rpmbuild/RPMS/noarch/bello-0.1-1.el8.noarch.rpm bello.noarch: W: invalid-url URL: https://www.example.com/bello HTTP Error 404: Not Found bello.noarch: W: no-documentation bello.noarch: W: no-manual-page-for-binary bello 1 packages and 0 specfiles checked; 0 errors, 3 warnings.", "rpmlint pello.spec pello.spec:30: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name} pello.spec:34: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.pyc pello.spec:39: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name}/ pello.spec:43: E: hardcoded-library-path in /usr/lib/%{name}/ pello.spec:45: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.py* pello.spec: W: invalid-url Source0 : https://www.example.com/pello/releases/pello-0.1.2.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 5 errors, 1 warnings.", "rpmlint ~/rpmbuild/SRPMS/pello-0.1.2-1.el8.src.rpm pello.src: W: invalid-url URL : https://www.example.com/pello HTTP Error 404: Not Found pello.src:30: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name} pello.src:34: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.pyc pello.src:39: E: hardcoded-library-path in %{buildroot}/usr/lib/%{name}/ pello.src:43: E: hardcoded-library-path in /usr/lib/%{name}/ pello.src:45: E: hardcoded-library-path in /usr/lib/%{name}/%{name}.py* pello.src: W: invalid-url Source0: https://www.example.com/pello/releases/pello-0.1.2.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 5 errors, 2 warnings.", "rpmlint ~/rpmbuild/RPMS/noarch/pello-0.1.2-1.el8.noarch.rpm pello.noarch: W: invalid-url URL: https://www.example.com/pello HTTP Error 404: Not Found pello.noarch: W: only-non-binary-in-usr-lib pello.noarch: W: no-documentation pello.noarch: E: non-executable-script /usr/lib/pello/pello.py 0644L /usr/bin/env pello.noarch: W: no-manual-page-for-binary pello 1 packages and 0 specfiles checked; 1 errors, 4 warnings.", "rpmlint ~/rpmbuild/SPECS/cello.spec /home/admiller/rpmbuild/SPECS/cello.spec: W: invalid-url Source0 : https://www.example.com/cello/releases/cello-1.0.tar.gz HTTP Error 404: Not Found 0 packages and 1 specfiles checked; 0 errors, 1 warnings.", "rpmlint ~/rpmbuild/SRPMS/cello-1.0-1.el8.src.rpm cello.src: W: invalid-url URL : https://www.example.com/cello HTTP Error 404: Not Found cello.src: W: invalid-url Source0: https://www.example.com/cello/releases/cello-1.0.tar.gz HTTP Error 404: Not Found 1 packages and 0 specfiles checked; 0 errors, 2 warnings.", "rpmlint ~/rpmbuild/RPMS/x86_64/cello-1.0-1.el8.x86_64.rpm cello.x86_64: W: invalid-url URL: https://www.example.com/cello HTTP Error 404: Not Found cello.x86_64: W: no-documentation cello.x86_64: W: no-manual-page-for-binary cello 1 packages and 0 
specfiles checked; 0 errors, 3 warnings.", "yum install rpm-plugin-syslog", "rpm2archive <filename>.rpm", "rpm2archive bash-4.4.19-6.el8.x86_64.rpm ls bash-4.4.19-6.el8.x86_64.rpm.tgz bash-4.4.19-6.el8.x86_64.rpm.tgz" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/packaging_and_distributing_software/packaging-software_packaging-and-distributing-software
Chapter 8. October 2023
Chapter 8. October 2023 8.1. Changes to settings We made several changes to the UI that affect how you update your cost management settings. Previously, you updated your settings in the Red Hat Hybrid Cloud Console settings page. With this update, cost management has its own settings page. 8.1.1. Configuring cost management You can configure cost management from a few different locations on Red Hat Hybrid Cloud Console:
From the Red Hat Hybrid Cloud Console Settings, you can configure the following:
- Adding and editing cloud integrations
- Notifications
From the Identity & Access Management settings, you can configure the following:
- User access
- Authentication policy
From the cost management settings, you can configure the following:
- Enablement, grouping, and filtering of Tags and Labels
- Enablement of AWS cost categories
- Your preferred currency calculation
- Your preferred savings plan or subscription fee calculation for AWS
null
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/whats_new_in_cost_management/october_2023
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q2/html/deploying_and_upgrading_amq_streams_on_openshift/making-open-source-more-inclusive
D.21. Cheat Sheets View
D.21. Cheat Sheets View To open the Cheat Sheets View, click the main menu's Window > Show View > Other... and then click the Help > Cheat Sheets view in the dialog. The Cheat Sheets view is a standard Eclipse Help concept. Cheat Sheets provide step-by-step assistance for common process workflows. Teiid Designer has contributed to the Eclipse help framework to provide assistance for many common modeling tasks. The Guides View (see Guides View) provides links to these Cheat Sheets, as previously described. A sample Cheat Sheet is shown below: Figure D.33. Cheat Sheet Sample
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/cheat_sheets_view
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/making-open-source-more-inclusive
12.2. Red Hat Gluster Storage Component Logs and Location
12.2. Red Hat Gluster Storage Component Logs and Location The following table lists the component, service, and functionality based logs in the Red Hat Gluster Storage Server. As per the File System Hierarchy Standard (FHS), all the log files are placed in the /var/log directory.
Table 12.1. Component/Service Name: Location of the Log File - Remarks
glusterd: /var/log/glusterfs/glusterd.log - One glusterd log file per server. This log file also contains the snapshot and user logs.
gluster commands: /var/log/glusterfs/cmd_history.log - Gluster commands executed on a node in a Red Hat Gluster Storage Trusted Storage Pool are logged in this file.
bricks: /var/log/glusterfs/bricks/<path extraction of brick path>.log - One log file per brick on the server.
rebalance: /var/log/glusterfs/VOLNAME-rebalance.log - One log file per volume on the server.
self heal daemon: /var/log/glusterfs/glustershd.log - One log file per server.
quota (Deprecated; see Chapter 9, Managing Directory Quotas for more details):
/var/log/glusterfs/quotad.log - Log of the quota daemons running on each node.
/var/log/glusterfs/quota-crawl.log - Whenever quota is enabled, a file system crawl is performed and the corresponding log is stored in this file.
/var/log/glusterfs/quota-mount-VOLNAME.log - An auxiliary FUSE client is mounted in <gluster-run-dir>/VOLNAME of the glusterFS and the corresponding client logs are found in this file. One log file per server (and per volume from quota-mount).
Gluster NFS (Deprecated): /var/log/glusterfs/nfs.log - One log file per server.
SAMBA Gluster: /var/log/samba/glusterfs-VOLNAME-<ClientIP>.log - If the client mounts this on a glusterFS server node, the actual log file or the mount point may not be found. In such a case, the mount outputs of all the glusterFS type mount operations need to be considered.
NFS-Ganesha: /var/log/ganesha/ganesha.log, /var/log/ganesha/ganesha-gfapi.log - One log file per server.
FUSE Mount: /var/log/glusterfs/<mountpoint path extraction>.log
Geo-replication: /var/log/glusterfs/geo-replication/<master>, /var/log/glusterfs/geo-replication-slaves
gluster volume heal VOLNAME info command: /var/log/glusterfs/glfsheal-VOLNAME.log - One log file per server on which the command is executed.
SwiftKrbAuth (Deprecated): /var/log/httpd/error_log
Command Line Interface logs: /var/log/glusterfs/cli.log - This file captures log entries for every command that is executed on the Command Line Interface (CLI).
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/red_hat_storage_component_logs_and_location
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/service_telemetry_framework_release_notes_1.5/proc_providing-feedback-on-red-hat-documentation
Chapter 1. Understanding image builds
Chapter 1. Understanding image builds 1.1. Builds A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process. OpenShift Container Platform uses Kubernetes by creating containers from build images and pushing them to a container image registry. Build objects share common characteristics including inputs for a build, the requirement to complete a build process, logging the build process, publishing resources from successful builds, and publishing the final status of the build. Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time. The OpenShift Container Platform build system provides extensible support for build strategies that are based on selectable types specified in the build API. There are three primary build strategies available: Docker build Source-to-image (S2I) build Custom build By default, docker builds and S2I builds are supported. The resulting object of a build depends on the builder used to create it. For docker and S2I builds, the resulting objects are runnable images. For custom builds, the resulting objects are whatever the builder image author has specified. Additionally, the pipeline build strategy can be used to implement sophisticated workflows: Continuous integration Continuous deployment 1.1.1. Docker build OpenShift Container Platform uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 1.1.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 1.1.3. Custom build The custom build strategy allows developers to define a specific builder image responsible for the entire build process. Using your own builder image allows you to customize your build process. A custom builder image is a plain container image embedded with build process logic, for example for building RPMs or base images. Custom builds run with a high level of privilege and are not available to users by default. Only users who can be trusted with cluster administration permissions should be granted access to run custom builds. 1.1.4. Pipeline build Important The Pipeline build strategy is deprecated in OpenShift Container Platform 4. Equivalent and improved functionality is present in the OpenShift Container Platform Pipelines based on Tekton. Jenkins images on OpenShift Container Platform are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. 
The build can be started, monitored, and managed by OpenShift Container Platform in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration.
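To make the build strategies above concrete, the following is a minimal sketch of a BuildConfig object that uses the docker build strategy; the application name, Git repository, and build argument are hypothetical placeholders rather than values defined elsewhere in this guide:
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: example-app                # hypothetical name
spec:
  source:
    git:
      uri: https://github.com/example/example-app.git   # hypothetical repository
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: Dockerfile
      buildArgs:
      - name: VERSION              # hypothetical argument consumed by an ARG instruction in the Dockerfile
        value: "1.0"
  output:
    to:
      kind: ImageStreamTag
      name: example-app:latest
Swapping dockerStrategy for sourceStrategy (with a from reference to a builder image) would turn the same BuildConfig into an S2I build as described above.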
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/builds_using_buildconfig/understanding-image-builds
Chapter 11. Using the JCache API
Chapter 11. Using the JCache API Data Grid provides an implementation of the JCache (JSR-107) API that specifies a standard Java API for caching temporary Java objects in memory. Caching Java objects can help get around bottlenecks arising from using data that is expensive to retrieve or data that is hard to calculate. Caching these types of objects in memory can help speed up application performance by retrieving the data directly from memory instead of doing an expensive roundtrip or recalculation.
11.1. Creating embedded caches Prerequisites Ensure that cache-api is on your classpath. Add the following dependency to your pom.xml:
<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-jcache</artifactId>
</dependency>
Procedure Create embedded caches that use the default JCache API configuration as follows:
import javax.cache.*;
import javax.cache.configuration.*;

// Retrieve the system wide Cache Manager
CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
// Define a named cache with default JCache configuration
Cache<String, String> cache = cacheManager.createCache("namedCache",
      new MutableConfiguration<String, String>());
11.1.1. Configuring embedded caches Pass the URI for custom Data Grid configuration to the CachingProvider.getCacheManager(URI) call as follows:
import java.net.URI;
import javax.cache.*;
import javax.cache.configuration.*;

// Load configuration from an absolute filesystem path
URI uri = URI.create("file:///path/to/infinispan.xml");
// Load configuration from a classpath resource
// URI uri = this.getClass().getClassLoader().getResource("infinispan.xml").toURI();

// Create a Cache Manager using the above configuration
CacheManager cacheManager = Caching.getCachingProvider().getCacheManager(uri, this.getClass().getClassLoader(), null);
Warning By default, the JCache API specifies that data should be stored as storeByValue, so that object state mutations outside of operations to the cache won't have an impact on the objects stored in the cache. Data Grid has so far implemented this using serialization/marshalling to make copies to store in the cache, and that way adhere to the spec. Hence, if using the default JCache configuration with Data Grid, stored data must be marshallable. Alternatively, JCache can be configured to store data by reference (just like Data Grid or JDK Collections work). To do that, simply call:
Cache<String, String> cache = cacheManager.createCache("namedCache",
      new MutableConfiguration<String, String>().setStoreByValue(false));
11.2. Store and retrieve data Even though the JCache API extends neither java.util.Map nor java.util.concurrent.ConcurrentMap, it provides a key/value API to store and retrieve data:
import javax.cache.*;
import javax.cache.configuration.*;

CacheManager cacheManager = Caching.getCachingProvider().getCacheManager();
Cache<String, String> cache = cacheManager.createCache("namedCache",
      new MutableConfiguration<String, String>());
cache.put("hello", "world"); // Notice that javax.cache.Cache.put(K) returns void!
String value = cache.get("hello"); // Returns "world"
Contrary to standard java.util.Map, javax.cache.Cache comes with two basic put methods called put and getAndPut. The former returns void whereas the latter returns the value associated with the key. So, the equivalent of java.util.Map.put(K) in JCache is javax.cache.Cache.getAndPut(K).
Tip Even though JCache API only covers standalone caching, it can be plugged with a persistence store, and has been designed with clustering or distribution in mind. The reason why javax.cache.Cache offers two put methods is because standard java.util.Map put call forces implementors to calculate the value. When a persistent store is in use, or the cache is distributed, returning the value could be an expensive operation, and often users call standard java.util.Map.put(K) without using the return value. Hence, JCache users need to think about whether the return value is relevant to them, in which case they need to call javax.cache.Cache.getAndPut(K) , otherwise they can call java.util.Map.put(K, V) which avoids returning the potentially expensive operation of returning the value. 11.3. Comparing java.util.concurrent.ConcurrentMap and javax.cache.Cache APIs Here's a brief comparison of the data manipulation APIs provided by java.util.concurrent.ConcurrentMap and javax.cache.Cache APIs. Operation java.util.concurrent.ConcurrentMap<K, V> javax.cache.Cache<K, V> store and no return N/A void put(K key) store and return value V put(K key) V getAndPut(K key) store if not present V putIfAbsent(K key, V value) boolean putIfAbsent(K key, V value) retrieve V get(Object key) V get(K key) delete if present V remove(Object key) boolean remove(K key) delete and return value V remove(Object key) V getAndRemove(K key) delete conditional boolean remove(Object key, Object value) boolean remove(K key, V oldValue) replace if present V replace(K key, V value) boolean replace(K key, V value) replace and return value V replace(K key, V value) V getAndReplace(K key, V value) replace conditional boolean replace(K key, V oldValue, V newValue) boolean replace(K key, V oldValue, V newValue) Comparing the two APIs, it's obvious to see that, where possible, JCache avoids returning the value to avoid operations doing expensive network or IO operations. This is an overriding principle in the design of JCache API. In fact, there's a set of operations that are present in java.util.concurrent.ConcurrentMap , but are not present in the javax.cache.Cache because they could be expensive to compute in a distributed cache. The only exception is iterating over the contents of the cache: Operation java.util.concurrent.ConcurrentMap<K, V> javax.cache.Cache<K, V> calculate size of cache int size() N/A return all keys in the cache Set<K> keySet() N/A return all values in the cache Collection<V> values() N/A return all entries in the cache Set<Map.Entry<K, V>> entrySet() N/A iterate over the cache use iterator() method on keySet, values or entrySet Iterator<Cache.Entry<K, V>> iterator() 11.4. Clustering JCache instances Data Grid JCache implementation goes beyond the specification in order to provide the possibility to cluster caches using the standard API. Given a Data Grid configuration file configured to replicate caches like this: infinispan.xml <infinispan> <cache-container default-cache="namedCache"> <transport cluster="jcache-cluster" /> <replicated-cache name="namedCache" /> </cache-container> </infinispan> You can create a cluster of caches using this code: import javax.cache.*; import java.net.URI; // For multiple Cache Managers to be constructed with the standard JCache API // and live in the same JVM, either their names, or their classloaders, must // be different. // This example shows how to force their classloaders to be different. 
// An alternative method would have been to duplicate the XML file and give // it a different name, but this results in unnecessary file duplication. ClassLoader tccl = Thread.currentThread().getContextClassLoader(); CacheManager cacheManager1 = Caching.getCachingProvider().getCacheManager( URI.create("infinispan-jcache-cluster.xml"), new TestClassLoader(tccl)); CacheManager cacheManager2 = Caching.getCachingProvider().getCacheManager( URI.create("infinispan-jcache-cluster.xml"), new TestClassLoader(tccl)); Cache<String, String> cache1 = cacheManager1.getCache("namedCache"); Cache<String, String> cache2 = cacheManager2.getCache("namedCache"); cache1.put("hello", "world"); String value = cache2.get("hello"); // Returns "world" if clustering is working // -- public static class TestClassLoader extends ClassLoader { public TestClassLoader(ClassLoader parent) { super(parent); } }
[ "<dependency> <groupId>org.infinispan</groupId> <artifactId>infinispan-jcache</artifactId> </dependency>", "import javax.cache.*; import javax.cache.configuration.*; // Retrieve the system wide Cache Manager CacheManager cacheManager = Caching.getCachingProvider().getCacheManager(); // Define a named cache with default JCache configuration Cache<String, String> cache = cacheManager.createCache(\"namedCache\", new MutableConfiguration<String, String>());", "import java.net.URI; import javax.cache.*; import javax.cache.configuration.*; // Load configuration from an absolute filesystem path URI uri = URI.create(\"file:///path/to/infinispan.xml\"); // Load configuration from a classpath resource // URI uri = this.getClass().getClassLoader().getResource(\"infinispan.xml\").toURI(); // Create a Cache Manager using the above configuration CacheManager cacheManager = Caching.getCachingProvider().getCacheManager(uri, this.getClass().getClassLoader(), null);", "Cache<String, String> cache = cacheManager.createCache(\"namedCache\", new MutableConfiguration<String, String>().setStoreByValue(false));", "import javax.cache.*; import javax.cache.configuration.*; CacheManager cacheManager = Caching.getCachingProvider().getCacheManager(); Cache<String, String> cache = cacheManager.createCache(\"namedCache\", new MutableConfiguration<String, String>()); cache.put(\"hello\", \"world\"); // Notice that javax.cache.Cache.put(K) returns void! String value = cache.get(\"hello\"); // Returns \"world\"", "<infinispan> <cache-container default-cache=\"namedCache\"> <transport cluster=\"jcache-cluster\" /> <replicated-cache name=\"namedCache\" /> </cache-container> </infinispan>", "import javax.cache.*; import java.net.URI; // For multiple Cache Managers to be constructed with the standard JCache API // and live in the same JVM, either their names, or their classloaders, must // be different. // This example shows how to force their classloaders to be different. // An alternative method would have been to duplicate the XML file and give // it a different name, but this results in unnecessary file duplication. ClassLoader tccl = Thread.currentThread().getContextClassLoader(); CacheManager cacheManager1 = Caching.getCachingProvider().getCacheManager( URI.create(\"infinispan-jcache-cluster.xml\"), new TestClassLoader(tccl)); CacheManager cacheManager2 = Caching.getCachingProvider().getCacheManager( URI.create(\"infinispan-jcache-cluster.xml\"), new TestClassLoader(tccl)); Cache<String, String> cache1 = cacheManager1.getCache(\"namedCache\"); Cache<String, String> cache2 = cacheManager2.getCache(\"namedCache\"); cache1.put(\"hello\", \"world\"); String value = cache2.get(\"hello\"); // Returns \"world\" if clustering is working // -- public static class TestClassLoader extends ClassLoader { public TestClassLoader(ClassLoader parent) { super(parent); } }" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/embedding_data_grid_in_java_applications/jcache
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_in_external_mode/making-open-source-more-inclusive
probe::signal.send.return
probe::signal.send.return Name probe::signal.send.return - Signal being sent to a process completed (deprecated in SystemTap 2.1) Synopsis signal.send.return Values shared Indicates whether the sent signal is shared by the thread group. name The name of the function used to send out the signal retstr The return value to either __group_send_sig_info, specific_send_sig_info, or send_sigqueue send2queue Indicates whether the sent signal was sent to an existing sigqueue Context The signal's sender. (correct?) Description Possible __group_send_sig_info and specific_send_sig_info return values are as follows; 0 -- The signal is successfully sent to a process, which means that, (1) the signal was ignored by the receiving process, (2) this is a non-RT signal and the system already has one queued, and (3) the signal was successfully added to the sigqueue of the receiving process. -EAGAIN -- The sigqueue of the receiving process is overflowing, the signal was RT, and the signal was sent by a user using something other than kill . Possible send_group_sigqueue and send_sigqueue return values are as follows; 0 -- The signal was either successfully added into the sigqueue of the receiving process, or a SI_TIMER entry is already queued (in which case, the overrun count will be simply incremented). 1 -- The signal was ignored by the receiving process. -1 -- (send_sigqueue only) The task was marked exiting, allowing * posix_timer_event to redirect it to the group leader.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-signal-send-return
Chapter 5. Connecting to AMQ Management Console for an Operator-based broker deployment
Chapter 5. Connecting to AMQ Management Console for an Operator-based broker deployment Each broker Pod in an Operator-based deployment hosts its own instance of AMQ Management Console at port 8161. To provide access to the console for each broker, you can configure the Custom Resource (CR) instance for the broker deployment to instruct the Operator to automatically create a dedicated Service and Route for each broker Pod. The following procedures describe how to connect to AMQ Management Console for a deployed broker. Prerequisites You must have created a broker deployment using the AMQ Broker Operator. For example, to learn how to use a sample CR to create a basic broker deployment, see Section 3.4.1, "Deploying a basic broker instance" . To instruct the Operator to automatically create a Service and Route for each broker Pod in a deployment for console access, you must set the value of the console.expose property to true in the Custom Resource (CR) instance used to create the deployment. The default value of this property is false . For a complete Custom Resource configuration reference, including configuration of the console section of the CR, see Section 11.1, "Custom Resource configuration reference" . 5.1. Connecting to AMQ Management Console When you set the value of the console.expose property to true in the Custom Resource (CR) instance used to create a broker deployment, the Operator automatically creates a dedicated Service and Route for each broker Pod, to provide access to AMQ Management Console. The default name of the automatically-created Service is in the form <custom-resource-name> -wconsj- <broker-pod-ordinal> -svc . For example, my-broker-deployment-wconsj-0-svc . The default name of the automatically-created Route is in the form <custom-resource-name> -wconsj- <broker-pod-ordinal> -svc-rte . For example, my-broker-deployment-wconsj-0-svc-rte . This procedure shows you how to access the console for a running broker Pod. Procedure In the OpenShift Container Platform web console, click Networking Routes (OpenShift Container Platform 4.5 or later) or Applications Routes (OpenShift Container Platform 3.11). On the Routes page, identify the wconsj Route for the given broker Pod. For example, my-broker-deployment-wconsj-0-svc-rte . Under Location (OpenShift Container Platform 4.5 or later) or Hostname (OpenShift Container Platform 3.11), click the link that corresponds to the Route. A new tab opens in your web browser. Click the Management Console link. The AMQ Management Console login page opens. To log in to the console, enter the values specified for the adminUser and adminPassword properties in the Custom Resource (CR) instance used to create your broker deployment. If there are no values explicitly specified for adminUser and adminPassword in the CR, follow the instructions in Section 5.2, "Accessing AMQ Management Console login credentials" to retrieve the credentials required to log in to the console. Note Values for adminUser and adminPassword are required to log in to the console only if the requireLogin property of the CR is set to true . This property specifies whether login credentials are required to log in to the broker and the console. If requireLogin is set to false , any user with administrator privileges for the OpenShift project can log in to the console. 5.2. 
Accessing AMQ Management Console login credentials If you do not specify a value for adminUser and adminPassword in the Custom Resource (CR) instance used for your broker deployment, the Operator automatically generates these credentials and stores them in a secret. The default secret name is in the form <custom-resource-name> -credentials-secret , for example, my-broker-deployment-credentials-secret . Note Values for adminUser and adminPassword are required to log in to the management console only if the requireLogin parameter of the CR is set to true . If requireLogin is set to false , any user with administrator privileges for the OpenShift project can log in to the console. This procedure shows how to access the login credentials. Procedure See the complete list of secrets in your OpenShift project. From the OpenShift Container Platform web console, click Workload Secrets (OpenShift Container Platform 4.5 or later) or Resources Secrets (OpenShift Container Platform 3.11). From the command line: Open the appropriate secret to reveal the Base64-encoded console login credentials. From the OpenShift Container Platform web console, click the secret that includes your broker Custom Resource instance in its name. Click the YAML tab (OpenShift Container Platform 4.5 or later) or Actions Edit YAML (OpenShift Container Platform 3.11). From the command line: To decode a value in the secret, use a command such as the following: Additional resources To learn more about using AMQ Management Console to view and manage brokers, see Managing brokers using AMQ Management Console in Managing AMQ Broker
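For reference, the following is a minimal sketch of a broker Custom Resource that sets the console.expose property described at the start of this chapter; the resource name and credentials are hypothetical, and the apiVersion may differ depending on the version of the AMQ Broker Operator installed:
apiVersion: broker.amq.io/v2alpha4
kind: ActiveMQArtemis
metadata:
  name: my-broker-deployment
spec:
  deploymentPlan:
    size: 1
    requireLogin: true
  adminUser: admin                 # hypothetical credentials; omit both to let the Operator generate them
  adminPassword: adminpassword
  console:
    expose: true                   # instructs the Operator to create a wconsj Service and Route for each broker Pod
With expose set to true, the my-broker-deployment-wconsj-0-svc Service and corresponding Route described above are created automatically for the first broker Pod.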
[ "oc get secrets", "oc edit secret <my-broker-deployment-credentials-secret>", "echo 'dXNlcl9uYW1l' | base64 --decode console_admin" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_amq_broker_on_openshift/assembly-br-connecting-to-console-operator_broker-ocp
Chapter 6. Optional: Installing and modifying Operators
Chapter 6. Optional: Installing and modifying Operators The Assisted Installer can install select Operators for you with default configurations in either the UI or API. If you require advanced options, install the desired Operators after installing the cluster. The Assisted Installer monitors the installation of the selected operators as part of the cluster installation and reports their status. If one or more Operators encounter errors during installation, the Assisted Installer reports that the cluster installation has completed with a warning that one or more operators failed to install. See the sections below for the Operators you can set when installing or modifying a cluster definition using the Assisted Installer UI or API. For full instructions on installing an OpenShift Container Platform cluster, see Installing with the Assisted Installer UI or Installing with the Assisted Installer API respectively. 6.1. Installing Operators When installng Operators using the Assisted Installer UI, select the Operators on the Operators page of the wizard. When installing Operators using the Assisted Installer API, use the POST method in the /v2/clusters endpoint. 6.1.1. Installing OpenShift Virtualization When you configure the cluster, you can enable OpenShift Virtualization . Note Currently, OpenShift Virtualization is not supported on IBM zSystems and IBM Power. If enabled, the Assisted Installer: Validates that your environment meets the prerequisites outlined below. Configures virtual machine storage as follows: For single-node OpenShift clusters version 4.10 and newer, the Assisted Installer configures the hostpath provisioner . For single-node OpenShift clusters on earlier versions, the Assisted Installer configures the Local Storage Operator . For multi-node clusters, the Assisted Installer configures OpenShift Data Foundation. Prerequisites Supported by Red Hat Enterprise Linux (RHEL) 8 Support for Intel 64 or AMD64 CPU extensions Intel Virtualization Technology or AMD-V hardware virtualization extensions enabled NX (no execute) flag enabled Procedure If you are using the Assisted Installer UI: In the Operators step of the wizard, enable the Install OpenShift Virtualization checkbox. If you are using the Assisted Installer API: When registering a new cluster, add the "olm_operators: [{"name": "cnv"}]" statement. Note CNV stands for container-native virtualization. For example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.11", "cpu_architecture" : "x86_64", "base_dns_domain": "example.com", "olm_operators: [{"name": "cnv"}]" "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Additional resources For more details about preparing your cluster for OpenShift Virtualization, see the OpenShift Documentation . 6.1.2. Installing Multicluster Engine (MCE) When you configure the cluster, you can enable the Multicluster Engine (MCE) Operator. The Multicluster Engine (MCE) Operator allows you to install additional clusters from the cluster that you are currently installing. Prerequisites OpenShift version 4.10 and above An additional 4 CPU cores and 16GB of RAM for multi-node OpenShift clusters. An additional 8 CPU cores and 32GB RAM for single-node OpenShift clusters. 
Storage considerations Prior to installation, you must consider the storage required for managing the clusters to be deployed from the Multicluster Engine. You can choose one of the following scenarios for automating storage: Install OpenShift Data Foundation (ODF) on a multi-node cluster. ODF is the recommended storage for clusters, but requires an additional subscription. For details, see Installing OpenShift Data Foundation in this chapter. Install Logical Volume Management Storage (LVMS) on a single-node OpenShift (SNO) cluster. Install Multicluster Engine on a multi-node cluster without configuring storage. Then configure a storage of your choice and enable the Central Infrastructure Management (CIM) service following the installation. For details, see Additional Resources in this chapter. Procedure If you are using the Assisted Installer UI: In the Operators step of the wizard, enable the Install multicluster engine checkbox. If you are using the Assisted Installer API: When registering a new cluster, use the "olm_operators: [{"name": "mce"}]" statement, for example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.11", "cpu_architecture" : "x86_64" "base_dns_domain": "example.com", "olm_operators: [{"name": "mce"}]", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Post-installation steps To use the Assisted Installer technology with the Multicluster Engine, enable the Central Infrastructure Management service. For details, see Enabling the Central Infrastructure Management service . To deploy OpenShift Container Platform clusters using hosted control planes, configure the hosted control planes. For details, see Hosted Control Planes . Additional resources For Advanced Cluster Management documentation related to the Multicluster Engine (MCE) Operator, see Red Hat Advanced Cluster Mangement for Kubernetes For OpenShift Container Platform documentation related to the Multicluster Engine (MCE) Operator, see Multicluster Engine for Kubernetes Operator . 6.1.3. Installing OpenShift Data Foundation When you configure the cluster, you can enable OpenShift Data Foundation . If enabled, the Assisted Installer: Validates that your environment meets the prerequisites outlined below. It does not validate that the disk devices have been reformatted, which you must verify before starting. Configures the storage to use all available disks. When you enable OpenShift Data Foundation, the Assisted Installer creates a StorageCluster resource that specifies all available disks for use with OpenShift Data Foundation. If a different configuration is desired, modify the configuration after installing the cluster or install the Operator after the cluster is installed. Prerequisites The cluster is a three-node OpenShift cluster or has at least 3 worker nodes. Each host has at least one non-installation disk of at least 25GB. The disk devices you use must be empty. There should be no Physical Volumes (PVs), Volume Groups (VGs), or Logical Volumes (LVs) remaining on the disks. Each host has 6 CPU cores for three-node OpenShift or 8 CPU cores for standard clusters, in addition to other CPU requirements. Each host has 19 GiB RAM, in addition to other RAM requirements. 
Each host has 2 CPU cores and 5GiB RAM per storage disk in addition to other CPU and RAM requirements. You have assigned control plane or worker roles for each host (and not auto-assign). Procedure If you are using the Assisted Installer UI: In the Operators step of the wizard, enable the Install OpenShift Data Foundation checkbox. If you are using the Assisted Installer API: When registering a new cluster, add the "olm_operators: [{"name": "odf"}]" statement. For example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.11", "cpu_architecture" : "x86_64", "base_dns_domain": "example.com", "olm_operators: [{"name": "odf"}]", "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Additional resources For more details about OpenShift Data Foundation, see the OpenShift Documentation . 6.1.4. Installing Logical Volume Manager Storage When you configure the cluster, you can enable the Logical Volume Manager Storage (LVMS) Operator on single-node OpenShift clusters. Installing the LVMS Operator allows you to dynamically provision local storage. Prerequisites A single-node OpenShift cluster installed with version 4.11 or later At least one non-installation disk One additional CPU core and 400 MB of RAM (1200 MB of RAM for versions earlier than 4.13) Procedure If you are using the Assisted Installer UI: In the Operators step of the wizard, enable the Install Logical Volume Manager Storage checkbox. If you are using the Assisted Installer API: When registering a new cluster, use the olm_operators: [{"name": "lvm"}] statement. For example: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.14", "cpu_architecture" : "x86_64", "base_dns_domain": "example.com", "olm_operators: [{"name": "lvm"}]" "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Additional resources For OpenShift Container Platform documentation related to LVMS, see Persistent storage using LVM Storage . 6.2. Modifying Operators In the Assisted Installer, you can add or remove Operators for a cluster resource that has already been registered as part of a installation step. This is only possible before you start the OpenShift Container Platform installation. To modify the defined Operators: If you are using the Assisted Installer UI, navigate to the Operators page of the wizard and modify your selection. For details, see Installing Operators in this section. If you are using the Assisted Installer API, set the required Operator definition using the PATCH method for the /v2/clusters/{cluster_id} endpoint. Prerequisites You have created a new cluster resource. Procedure Refresh the API token: USD source refresh-token Identify the CLUSTER_ID variable by listing the existing clusters, as follows: USD curl -s https://api.openshift.com/api/assisted-install/v2/clusters -H "Authorization: Bearer USD{API_TOKEN}" | jq '[ .[] | { "name": .name, "id": .id } ]' Sample output [ { "name": "lvmtest", "id": "475358f9-ed3a-442f-ab9e-48fd68bc8188" 1 }, { "name": "mcetest", "id": "b5259f97-be09-430e-b5eb-d78420ee509a" } ] Note 1 The id value is the <cluster_id> . 
Assign the returned <cluster_id> to the CLUSTER_ID variable and export it: USD export CLUSTER_ID=<cluster_id> Update the cluster with the new Operators: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "olm_operators": [{"name": "mce"}, {"name": "cnv"}], 1 } ' | jq '.id' Note 1 Indicates the Operators to be installed. Valid values include mce , cnv , lvm , and odf . To remove a previously installed Operator, exclude it from the list of values. To remove all previously installed Operators, type "olm_operators": [] . Sample output { <various cluster properties>, "monitored_operators": [ { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "console", "operator_type": "builtin", "status_updated_at": "0001-01-01T00:00:00.000Z", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "cvo", "operator_type": "builtin", "status_updated_at": "0001-01-01T00:00:00.000Z", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "mce", "namespace": "multicluster-engine", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "multicluster-engine", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "cnv", "namespace": "openshift-cnv", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "hco-operatorhub", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "lvm", "namespace": "openshift-local-storage", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "local-storage-operator", "timeout_seconds": 4200 } ], <more cluster properties> Note The output is the description of the new cluster state. The monitored_operators property in the output contains Operators of two types: "operator_type": "builtin" : Operators of this type are an integral part of OpenShift Container Platform. "operator_type": "olm" : Operators of this type are added either manually by a user or automatically due to dependencies. In the example, the lso Operator was added automatically because the cnv Operator requires it.
[ "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"cnv\"}]\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"cpu_architecture\" : \"x86_64\" \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"mce\"}]\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.11\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"odf\"}]\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.14\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators: [{\"name\": \"lvm\"}]\" \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "source refresh-token", "curl -s https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '[ .[] | { \"name\": .name, \"id\": .id } ]'", "[ { \"name\": \"lvmtest\", \"id\": \"475358f9-ed3a-442f-ab9e-48fd68bc8188\" 1 }, { \"name\": \"mcetest\", \"id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\" } ]", "export CLUSTER_ID=<cluster_id>", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"olm_operators\": [{\"name\": \"mce\"}, {\"name\": \"cnv\"}], 1 } ' | jq '.id'", "{ <various cluster properties>, \"monitored_operators\": [ { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"console\", \"operator_type\": \"builtin\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"cvo\", \"operator_type\": \"builtin\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"mce\", \"namespace\": \"multicluster-engine\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"multicluster-engine\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"cnv\", \"namespace\": \"openshift-cnv\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"hco-operatorhub\", \"timeout_seconds\": 3600 }, { 
\"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"lvm\", \"namespace\": \"openshift-local-storage\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"local-storage-operator\", \"timeout_seconds\": 4200 } ], <more cluster properties>" ]
https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2023/html/assisted_installer_for_openshift_container_platform/assembly_installing-operators
Chapter 8. Post-installation storage configuration
Chapter 8. Post-installation storage configuration After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration. 8.1. Dynamic provisioning 8.1.1. About dynamic provisioning The StorageClass resource object describes and classifies storage that can be requested, as well as provides a means for passing parameters for dynamically provisioned storage on demand. StorageClass objects can also serve as a management mechanism for controlling different levels of storage and access to the storage. Cluster Administrators ( cluster-admin ) or Storage Administrators ( storage-admin ) define and create the StorageClass objects that users can request without needing any detailed knowledge about the underlying storage volume sources. The OpenShift Container Platform persistent volume framework enables this functionality and allows administrators to provision a cluster with persistent storage. The framework also gives users a way to request those resources without having any knowledge of the underlying infrastructure. Many storage types are available for use as persistent volumes in OpenShift Container Platform. While all of them can be statically provisioned by an administrator, some types of storage are created dynamically using the built-in provider and plugin APIs. 8.1.2. Available dynamic provisioning plugins OpenShift Container Platform provides the following provisioner plugins, which have generic implementations for dynamic provisioning that use the cluster's configured provider's API to create new storage resources: Storage type Provisioner plugin name Notes Red Hat OpenStack Platform (RHOSP) Cinder kubernetes.io/cinder RHOSP Manila Container Storage Interface (CSI) manila.csi.openstack.org Once installed, the OpenStack Manila CSI Driver Operator and ManilaDriver automatically create the required storage classes for all available Manila share types needed for dynamic provisioning. AWS Elastic Block Store (EBS) kubernetes.io/aws-ebs For dynamic provisioning when using multiple clusters in different zones, tag each node with Key=kubernetes.io/cluster/<cluster_name>,Value=<cluster_id> where <cluster_name> and <cluster_id> are unique per cluster. Azure Disk kubernetes.io/azure-disk Azure File kubernetes.io/azure-file The persistent-volume-binder service account requires permissions to create and get secrets to store the Azure storage account and keys. GCE Persistent Disk (gcePD) kubernetes.io/gce-pd In multi-zone configurations, it is advisable to run one OpenShift Container Platform cluster per GCE project to avoid PVs from being created in zones where no node in the current cluster exists. VMware vSphere kubernetes.io/vsphere-volume Important Any chosen provisioner plugin also requires configuration for the relevant cloud, host, or third-party provider as per the relevant documentation. 8.2. Defining a storage class StorageClass objects are currently a globally scoped object and must be created by cluster-admin or storage-admin users. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The following sections describe the basic definition for a StorageClass object and specific examples for each of the supported plugin types. 8.2.1. 
Basic StorageClass object definition The following resource shows the parameters and default values that you use to configure a storage class. This example uses the AWS ElasticBlockStore (EBS) object definition. Sample StorageClass definition kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' ... provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2 ... 1 (required) The API object type. 2 (required) The current apiVersion. 3 (required) The name of the storage class. 4 (optional) Annotations for the storage class. 5 (required) The type of provisioner associated with this storage class. 6 (optional) The parameters required for the specific provisioner, this will change from plugin to plugin. 8.2.2. Storage class annotations To set a storage class as the cluster-wide default, add the following annotation to your storage class metadata: storageclass.kubernetes.io/is-default-class: "true" For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" ... This enables any persistent volume claim (PVC) that does not specify a specific storage class to automatically be provisioned through the default storage class. However, your cluster can have more than one storage class, but only one of them can be the default storage class. Note The beta annotation storageclass.beta.kubernetes.io/is-default-class is still working; however, it will be removed in a future release. To set a storage class description, add the following annotation to your storage class metadata: kubernetes.io/description: My Storage Class Description For example: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description ... 8.2.3. RHOSP Cinder object definition cinder-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Volume type created in Cinder. Default is empty. 3 Availability Zone. If not specified, volumes are generally round-robined across all active zones where the OpenShift Container Platform cluster has a node. 4 File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 8.2.4. AWS Elastic Block Store (EBS) object definition aws-ebs-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: "10" 3 encrypted: "true" 4 kmsKeyId: keyvalue 5 fsType: ext4 6 1 (required) Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 (required) Select from io1 , gp2 , sc1 , st1 . The default is gp2 . See the AWS documentation for valid Amazon Resource Name (ARN) values. 3 Optional: Only for io1 volumes. I/O operations per second per GiB. The AWS volume plugin multiplies this with the size of the requested volume to compute IOPS of the volume. The value cap is 20,000 IOPS, which is the maximum supported by AWS. 
See the AWS documentation for further details. 4 Optional: Denotes whether to encrypt the EBS volume. Valid values are true or false . 5 Optional: The full ARN of the key to use when encrypting the volume. If none is supplied, but encypted is set to true , then AWS generates a key. See the AWS documentation for a valid ARN value. 6 Optional: File system that is created on dynamically provisioned volumes. This value is copied to the fsType field of dynamically provisioned persistent volumes and the file system is created when the volume is mounted for the first time. The default value is ext4 . 8.2.5. Azure Disk object definition azure-advanced-disk-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Using WaitForFirstConsumer is strongly recommended. This provisions the volume while allowing enough storage to schedule the pod on a free worker node from an available zone. 3 Possible values are Shared (default), Managed , and Dedicated . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. 4 Azure storage account SKU tier. Default is empty. Note that Premium VMs can attach both Standard_LRS and Premium_LRS disks, Standard VMs can only attach Standard_LRS disks, Managed VMs can only attach managed disks, and unmanaged VMs can only attach unmanaged disks. If kind is set to Shared , Azure creates all unmanaged disks in a few shared storage accounts in the same resource group as the cluster. If kind is set to Managed , Azure creates new managed disks. If kind is set to Dedicated and a storageAccount is specified, Azure uses the specified storage account for the new unmanaged disk in the same resource group as the cluster. For this to work: The specified storage account must be in the same region. Azure Cloud Provider must have write access to the storage account. If kind is set to Dedicated and a storageAccount is not specified, Azure creates a new dedicated storage account for the new unmanaged disk in the same resource group as the cluster. 8.2.6. Azure File object definition The Azure File storage class uses secrets to store the Azure storage account name and the storage account key that are required to create an Azure Files share. These permissions are created as part of the following procedure. Procedure Define a ClusterRole object that allows access to create and view secrets: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: # name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create'] 1 The name of the cluster role to view and create secrets. 
Add the cluster role to the service account: USD oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder Create the Azure File StorageClass object: kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Location of the Azure storage account, such as eastus . Default is empty, meaning that a new Azure storage account will be created in the OpenShift Container Platform cluster's location. 3 SKU tier of the Azure storage account, such as Standard_LRS . Default is empty, meaning that a new Azure storage account will be created with the Standard_LRS SKU. 4 Name of the Azure storage account. If a storage account is provided, then skuName and location are ignored. If no storage account is provided, then the storage class searches for any storage account that is associated with the resource group for any accounts that match the defined skuName and location . 8.2.6.1. Considerations when using Azure File The following file system features are not supported by the default Azure File storage class: Symlinks Hard links Extended attributes Sparse files Named pipes Additionally, the owner user identifier (UID) of the Azure File mounted directory is different from the process UID of the container. The uid mount option can be specified in the StorageClass object to define a specific user identifier to use for the mounted directory. The following StorageClass object demonstrates modifying the user and group identifier, along with enabling symlinks for the mounted directory. kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate 1 Specifies the user identifier to use for the mounted directory. 2 Specifies the group identifier to use for the mounted directory. 3 Enables symlinks. 8.2.7. GCE PersistentDisk (gcePD) object definition gce-pd-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 Select either pd-standard or pd-ssd . The default is pd-standard . 8.2.8. VMware vSphere object definition vsphere-storageclass.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/vsphere-volume 2 parameters: diskformat: thin 3 1 Name of the storage class. The persistent volume claim uses this storage class for provisioning the associated persistent volumes. 2 For more information about using VMware vSphere with OpenShift Container Platform, see the VMware vSphere documentation . 3 diskformat : thin , zeroedthick and eagerzeroedthick are all valid disk formats. See vSphere docs for additional details regarding the disk format types. The default value is thin . 8.2.9. 
Red Hat Virtualization (RHV) object definition OpenShift Container Platform creates a default object of type StorageClass named ovirt-csi-sc which is used for creating dynamically provisioned persistent volumes. To create additional storage classes for different configurations, create and save a file with the StorageClass object described by the following sample YAML: ovirt-storageclass.yaml apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: "<boolean>" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: "<boolean>" 7 csi.storage.k8s.io/fstype: <file_system_type> 8 1 Name of the storage class. 2 Set to false if the storage class is the default storage class in the cluster. If set to true , the existing default storage class must be edited and set to false . 3 true enables dynamic volume expansion, false prevents it. true is recommended. 4 Dynamically provisioned persistent volumes of this storage class are created with this reclaim policy. This default policy is Delete . 5 Indicates how to provision and bind PersistentVolumeClaims . When not set, VolumeBindingImmediate is used. This field is only applied by servers that enable the VolumeScheduling feature. 6 The RHV storage domain name to use. 7 If true , the disk is thin provisioned. If false , the disk is preallocated. Thin provisioning is recommended. 8 Optional: File system type to be created. Possible values: ext4 (default) or xfs . 8.3. Changing the default storage class Use the following process to change the default storage class. For example you have two defined storage classes, gp2 and standard , and you want to change the default storage class from gp2 to standard . List the storage class: USD oc get storageclass Example output NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs 1 (default) denotes the default storage class. Change the value of the storageclass.kubernetes.io/is-default-class annotation to false for the default storage class: USD oc patch storageclass gp2 -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "false"}}}' Make another storage class the default by setting the storageclass.kubernetes.io/is-default-class annotation to true : USD oc patch storageclass standard -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}' Verify the changes: USD oc get storageclass Example output NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs 8.4. Optimizing storage Optimizing storage helps to minimize storage use across all resources. By optimizing storage, administrators help ensure that existing storage resources are working in an efficient manner. 8.5. Available persistent storage options Understand your persistent storage options so that you can optimize your OpenShift Container Platform environment. Table 8.1. 
Available storage options Storage type Description Examples Block Presented to the operating system (OS) as a block device Suitable for applications that need full control of storage and operate at a low level on files bypassing the file system Also referred to as a Storage Area Network (SAN) Non-shareable, which means that only one client at a time can mount an endpoint of this type AWS EBS and VMware vSphere support dynamic persistent volume (PV) provisioning natively in OpenShift Container Platform. File Presented to the OS as a file system export to be mounted Also referred to as Network Attached Storage (NAS) Concurrency, latency, file locking mechanisms, and other capabilities vary widely between protocols, implementations, vendors, and scales. RHEL NFS, NetApp NFS [1] , and Vendor NFS Object Accessible through a REST API endpoint Configurable for use in the OpenShift image registry Applications must build their drivers into the application and/or container. AWS S3 NetApp NFS supports dynamic PV provisioning when using the Trident plugin. Important Currently, CNS is not supported in OpenShift Container Platform 4.10. 8.6. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 8.2. Recommended and configurable storage technology Storage type ROX 1 RWX 2 Registry Scaled registry Metrics 3 Logging Apps 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, using any shared storage would be an anti-pattern. One volume per elasticsearch is required. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. Block Yes 4 No Configurable Not configurable Recommended Recommended Recommended File Yes 4 Yes Configurable Configurable Configurable 5 Configurable 6 Recommended Object Yes Yes Recommended Recommended Not configurable Not configurable Not configurable 7 Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 8.6.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 8.6.1.1. Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. 
File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 8.6.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. 8.6.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 8.6.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. 8.6.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 8.6.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . Additional resources Recommended etcd practices 8.7. Deploy Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. If you are looking for Red Hat OpenShift Data Foundation information about... 
See the following Red Hat OpenShift Data Foundation documentation: What's new, known issues, notable bug fixes, and Technology Previews OpenShift Data Foundation 4.9 Release Notes Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations Planning your OpenShift Data Foundation 4.9 deployment Instructions on deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster Deploying OpenShift Data Foundation 4.9 in external mode Instructions on deploying OpenShift Data Foundation to local storage on bare metal infrastructure Deploying OpenShift Data Foundation 4.9 using bare metal infrastructure Instructions on deploying OpenShift Data Foundation on Red Hat OpenShift Container Platform VMware vSphere clusters Deploying OpenShift Data Foundation 4.9 on VMware vSphere Instructions on deploying OpenShift Data Foundation using Amazon Web Services for local or cloud storage Deploying OpenShift Data Foundation 4.9 using Amazon Web Services Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters Deploying and managing OpenShift Data Foundation 4.9 using Google Cloud Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Azure clusters Deploying and managing OpenShift Data Foundation 4.9 using Microsoft Azure Instructions on deploying OpenShift Data Foundation to use local storage on IBM Power infrastructure Deploying OpenShift Data Foundation on IBM Power Instructions on deploying OpenShift Data Foundation to use local storage on IBM Z infrastructure Deploying OpenShift Data Foundation on IBM Z infrastructure Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone Managing and allocating resources Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa) Managing hybrid and multicloud resources Safely replacing storage devices for Red Hat OpenShift Data Foundation Replacing devices Safely replacing a node in a Red Hat OpenShift Data Foundation cluster Replacing nodes Scaling operations in Red Hat OpenShift Data Foundation Scaling storage Monitoring a Red Hat OpenShift Data Foundation 4.9 cluster Monitoring Red Hat OpenShift Data Foundation 4.9 Resolve issues encountered during operations Troubleshooting OpenShift Data Foundation 4.9 Migrating your OpenShift Container Platform cluster from version 3 to version 4 Migration
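To tie the storage class definitions above back to how they are consumed, the following is a minimal sketch of a persistent volume claim that requests a named storage class and a quick check that it binds. The class name gp2, claim name my-claim, and the 10Gi size are example values only, not part of the original procedure.

# Create a PVC that explicitly requests the gp2 storage class (example names and size)
oc apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 10Gi
EOF

# Confirm the claim is dynamically provisioned and bound
oc get pvc my-claim
oc get storageclass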
[ "kind: StorageClass 1 apiVersion: storage.k8s.io/v1 2 metadata: name: <storage-class-name> 3 annotations: 4 storageclass.kubernetes.io/is-default-class: 'true' provisioner: kubernetes.io/aws-ebs 5 parameters: 6 type: gp2", "storageclass.kubernetes.io/is-default-class: \"true\"", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: \"true\"", "kubernetes.io/description: My Storage Class Description", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubernetes.io/description: My Storage Class Description", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/cinder parameters: type: fast 2 availability: nova 3 fsType: ext4 4", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/aws-ebs parameters: type: io1 2 iopsPerGB: \"10\" 3 encrypted: \"true\" 4 kmsKeyId: keyvalue 5 fsType: ext4 6", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/azure-disk volumeBindingMode: WaitForFirstConsumer 2 allowVolumeExpansion: true parameters: kind: Managed 3 storageaccounttype: Premium_LRS 4 reclaimPolicy: Delete", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: system:azure-cloud-provider name: <persistent-volume-binder-role> 1 rules: - apiGroups: [''] resources: ['secrets'] verbs: ['get','create']", "oc adm policy add-cluster-role-to-user <persistent-volume-binder-role> system:serviceaccount:kube-system:persistent-volume-binder", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <azure-file> 1 provisioner: kubernetes.io/azure-file parameters: location: eastus 2 skuName: Standard_LRS 3 storageAccount: <storage-account> 4 reclaimPolicy: Delete volumeBindingMode: Immediate", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: azure-file mountOptions: - uid=1500 1 - gid=1500 2 - mfsymlinks 3 provisioner: kubernetes.io/azure-file parameters: location: eastus skuName: Standard_LRS reclaimPolicy: Delete volumeBindingMode: Immediate", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/gce-pd parameters: type: pd-standard 2 replication-type: none volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true reclaimPolicy: Delete", "kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: <storage-class-name> 1 provisioner: kubernetes.io/vsphere-volume 2 parameters: diskformat: thin 3", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage_class_name> 1 annotations: storageclass.kubernetes.io/is-default-class: \"<boolean>\" 2 provisioner: csi.ovirt.org allowVolumeExpansion: <boolean> 3 reclaimPolicy: Delete 4 volumeBindingMode: Immediate 5 parameters: storageDomainName: <rhv-storage-domain-name> 6 thinProvisioning: \"<boolean>\" 7 csi.storage.k8s.io/fstype: <file_system_type> 8", "oc get storageclass", "NAME TYPE gp2 (default) kubernetes.io/aws-ebs 1 standard kubernetes.io/aws-ebs", "oc patch storageclass gp2 -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"false\"}}}'", "oc patch storageclass standard -p '{\"metadata\": {\"annotations\": {\"storageclass.kubernetes.io/is-default-class\": \"true\"}}}'", "oc get storageclass", "NAME TYPE gp2 kubernetes.io/aws-ebs standard (default) kubernetes.io/aws-ebs" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/post-installation_configuration/post-install-storage-configuration
10.3. GDB
10.3. GDB In Red Hat Enterprise Linux 7, the GDB debugger is based on the gdb-7.6.1 release, and includes numerous enhancements and bug fixes relative to the Red Hat Enterprise Linux 6 equivalent. This version corresponds to GDB in Red Hat Developer Toolset 2.1; a detailed comparison of Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 GDB versions can be seen here: https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset/2/html/User_Guide/index.html Notable new features of GDB included in Red Hat Enterprise Linux 7 are the following: Faster loading of symbols using the new .gdb_index section and the new gdb-add-index shell command. Note that this feature is already present in Red Hat Enterprise Linux 6.1 and later; gdbserver now supports standard input/output (STDIO) connections, for example: (gdb) target remote | ssh myhost gdbserver - hello ; Improved behavior of the watch command using the -location parameter; Virtual method tables can be displayed by a new command, info vtbl ; Control of automatic loading of files by new commands info auto-load , set auto-load , and show auto-load ; Displaying absolute path to source file names using the set filename-display absolute command; Control flow recording with hardware support by a new command, record btrace . Notable bug fixes in GDB included in Red Hat Enterprise Linux 7 are the following: The info proc command has been updated to work on core files; Breakpoints are now set on all matching locations in all inferiors; The file name part of breakpoint location now matches trailing components of a source file name; Breakpoints can now be put on inline functions; Parameters of the template are now put in scope when the template is instantiated. In addition, Red Hat Enterprise Linux 7 provides a new package, gdb-doc , which contains the GDB Manual in PDF, HTML, and info formats. The GDB Manual was part of the main RPM package in versions of Red Hat Enterprise Linux.
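As a quick illustration of the features listed above, the following shell sketch shows how the new symbol index and commands might be exercised; the binary name my_app and the session details are placeholders and are not taken from the release notes.

# Pre-generate a .gdb_index section so GDB loads symbols faster (example binary name)
gdb-add-index ./my_app

gdb ./my_app
# Inside the gdb session (illustrative commands only):
#   (gdb) set filename-display absolute   # show absolute source file paths
#   (gdb) info auto-load                  # inspect automatic file loading
#   (gdb) start
#   (gdb) record btrace                   # control flow recording, on supported hardware
#   (gdb) info vtbl some_object           # display a virtual method table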
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.0_release_notes/sect-red_hat_enterprise_linux-7.0_release_notes-compiler_and_tools-gdb
2.4. Starting NetworkManager
2.4. Starting NetworkManager To start NetworkManager : To enable NetworkManager automatically at boot time: For more information on starting, stopping and managing services, see the Red Hat Enterprise Linux System Administrator's Guide .
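As a small companion to the commands referenced above, the following sketch verifies that the service is running and enabled after you start it; the nmcli check is an optional extra and not part of the original procedure.

# Confirm NetworkManager is active and enabled at boot
systemctl status NetworkManager
systemctl is-enabled NetworkManager

# Optional: query the daemon's overall state
nmcli general status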
[ "~]# systemctl start NetworkManager", "~]# systemctl enable NetworkManager" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-starting_networkmanager
5.3. Removed Features
5.3. Removed Features The following table describes features that have been removed in this version of Red Hat Virtualization. Table 5.3. Removed Features Removed Feature Details Metrics Store Metrics Store support has been removed in Red Hat Virtualization 4.4. Administrators can use the Data Warehouse with Grafana dashboards (deployed by default with Red Hat Virtualization 4.4) to view metrics and inventory reports. See Grafana.com for information on Grafana. Administrators can also send metrics and logs to a standalone Elasticsearch instance. See Deprecation of RHV Metrics Store and Alternative Solutions Version 3 REST API Version 3 of the REST API is no longer supported. Use the version 4 REST API . Version 3 SDKs Version 3 of the SDKs for Java, Python, and Ruby are no longer supported. Use the version 4 SDK for Java , Python , or Ruby . RHEVM Shell Red Hat Virtualization's specialized command line interface is no longer supported. Use the version 4 SDK for Java , Python , or Ruby , or the version 4 REST API . Iptables Use the firewalld service . Note iptables is only supported on Red Hat Enterprise Linux 7 hosts, in clusters with compatibility version 4.2 or 4.3. You can only add Red Hat Enterprise Linux 8 hosts to clusters with firewall type firewalld . Conroe, Penryn, Opteron G1, Opteron G2, and Opteron G3 CPU types Use newer CPU types . IBRS CPU types Use newer fixes . 3.6, 4.0 and 4.1 cluster compatibility versions Use a newer cluster compatibility version. Upgrade the compatibility version of existing clusters. cockpit-machines-ovirt The cockpit-machines-ovirt package is not included in Red Hat Enterprise Linux 8 and is not supported in Red Hat Virtualization Host 4.4. Use the Administration Portal. ovirt-guest-tools ovirt-guest-tools has been replaced with a new WiX-based installer, included in Virtio-Win. You can download the ISO file containing the Windows guest drivers, agents, and installers from latest virtio-win downloads OpenStack Neutron deployment The Red Hat Virtualization 4.4.0 release removes OpenStack Neutron deployment, including the automatic deployment of the Neutron agents through the Network Provider tab in the New Host window and the AgentConfiguration in the REST-API. Use the following components instead: - To deploy OpenStack hosts, use the OpenStack Platform Director/TripleO . - The Open vSwitch interface mappings are already managed automatically by VDSM in clusters with switch type OVS . - To manage the deployment of ovirt-provider-ovn-driver on a cluster, update the cluster's "Default Network Provider" attribute. screen With this update to RHEL 8-based hosts, the screen package is removed. The current release installs the tmux package on RHEL 8-based hosts instead of screen . Application Provisioning Tool service (APT) With this release, the virtio-win installer replaces the APT service. ovirt-engine-api-explorer The ovirt-engine-api-explorer package has been deprecated and removed in Red Hat Virtualization Manager 4.4.3. Customers should use the official REST API Guide instead, which provides the same information as ovirt-engine-api-explorer. See REST API Guide . DPDK (Data Plane Development Kit) Experimental support for DPDK has been removed in Red Hat Virtualization 4.4.4. VDSM hooks Starting with Red Hat Virtualization 4.4.7, VDSM hooks are not installed by default. You can manually install VDSM hooks as needed. 
Foreman integration Provisioning hosts using Foreman, which is initiated from Red Hat Virtualization Manager, is removed in Red Hat Virtualization 4.4.7. Removing this neither affects the ability to manage Red Hat Virtualization virtual machines from Satellite nor the ability for Red Hat Virtualization to work with errata from Satellite for hosts and virtual machines. Cockpit installation for Self-hosted engine Using Cockpit to install the self-hosted engine is no longer supported. Use the command line installation. oVirt Scheduler Proxy The ovirt-scheduler-proxy package is removed in Red Hat Virtualization 4.4 SP1. Ruby software development kit (SDK) The Ruby SDK is no longer supported. systemtap The systemtap package is no longer supported on RHVH 4.4. Red Hat Virtualization Manager (RHVM) appliance With this release, the Red Hat Virtualization Manager (RHVM) appliance is being retired. Following this release, you can update the RHVM by running the dnf update command followed by engine-setup after connecting to the Content Delivery Network. DISA STIG for Red Hat Virtualization Host (RHVH) The DISA STIG security profile is no longer supported for RHVH. Use the RHEL host with a DISA STIG profile instead.
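Because the table above notes that iptables has been removed in favor of the firewalld service, the following is a minimal, hypothetical sketch of opening a port with firewall-cmd on a host; the port shown is an example only and is not taken from Red Hat Virtualization documentation.

# Open an example port permanently in the default zone, then reload and verify
firewall-cmd --permanent --add-port=54321/tcp
firewall-cmd --reload
firewall-cmd --list-ports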
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/removed_features_rhv
8.254. virt-viewer
8.254. virt-viewer 8.254.1. RHBA-2014:1379 - virt-viewer bug fix update Updated virt-viewer packages that fix numerous bugs are now available for Red Hat Enterprise Linux 6. The virt-viewer packages provide the Virtual Machine Viewer, which is a lightweight interface for interacting with the graphical display of a virtualized guest. Virtual Machine Viewer uses libvirt and is intended as a replacement for traditional VNC or SPICE clients. The Simple Protocol for Independent Computing Environments (SPICE) is a remote display protocol designed for virtual environments. Bug Fixes BZ# 1056041 Prior to this update, SPICE incorrectly determined the scaling of windows by using the original desktop size instead of the host screen size. As a consequence, when a guest window was open in SPICE , the screen could, under certain circumstances, become blurred. With this update, the guest window scaling has been fixed and this problem no longer occurs. BZ# 1083203 Prior to this update, when a virt-viewer console was launched from the Red Hat Enterprise Virtualization user portal with the Native Client invocation method and Open in Full Screen was selected, the displays of the guest virtual machine were not always configured to match the client displays. With this update, virt-viewer correctly shows a full-screen guest display for each client monitor. BZ# 809546 Previously, when virt-viewer was opened in fullscreen mode on a client machine with two or more monitors, it opened a fullscreen guest display for each monitor, but sometimes placed more than one display on the same client monitor. With this update, the bug has been fixed and each fullscreen guest display is now placed on its own client monitor. BZ# 1002156 , BZ# 1018180 When configuring and aligning multiple guest displays, the display setting sometimes used outdated information about the position of the virt-viewer and remote-viewer windows. This caused overlapping in the guest displays, and different client windows showed some of the same content. In addition, the content of the guest displays in some cases swapped completely when a guest display window was resized. With this update, only the current window location is used to align and configure displays. As a result, the overlaps of content and the swapping no longer occur. BZ# 1099295 Under some circumstances, the system USB channels are created after the display channel. This sometimes caused redirecting a USB device to a guest machine to fail, which in turn caused the USB device selection menu in the virt-viewer client interface to be unusable. With this update, redirecting a USB device works regardless of the order in which the USB channels and the display channels are created. As a result, USB device selection no longer becomes unusable in the described scenario. BZ# 1096717 Due to a bug in the fullscreen configuration of virt-viewer, the guest resolution was set incorrectly after leaving and re-entering fullscreen mode when virt-viewer was launched with the --full screen=auto-conf option. This update fixes the bug and screen resolution is now always adjusted properly when leaving and re-entering fullscreen mode. BZ# 1029108 Assigning only modifier keys (such as Ctrl or Alt ) as the key combination to the --hotkeys option in virt-viewer is not possible. When such a combination is set, virt-viewer automatically reverts the option to its default value. However, the release-cursor function previously did not revert correctly. 
As a consequence, when a modifier-only hotkey was set for release-cursor , the cursor did not release in the guest window. With this update, release-cursor reverts correctly when the user attempts to register a modifier-only hotkey, and releasing the cursor in the guest window works as expected. BZ# 1024199 Due to a bug in remote-viewer , typing a URI in the remote-viewer GUI tool with additional space characters before or after the address previously caused the guest connection to fail. This update fixes the bug and adding spaces before or after the URI no longer prevents remote-viewer from connecting to a guest. BZ# 1009513 Prior to this update, when connected to a server with the --fullscreen=auto-conf option, leaving fullscreen mode of a guest display and opening another guest display caused the second guest display to open in fullscreen mode rather than in the windowed mode. This update fixes the problem and the second guest display will now correctly open in the windowed mode in the described circumstances. BZ# 1063238 Due to incorrect association of the SPICE client with the Multipurpose Internet Mail Extension (MIME) of the console.vv file, console.vv was previously opened in a text editor instead of launching a remote desktop session in remote-viewer . With this update, the erroneous MIME association has been fixed and the remote desktop session launches correctly. BZ# 1007649 Prior to this update, the virt-viewer interface offered the Automatically resize option. However, the availability of the automatic resize function in virt-viewer is dependent on the protocol and guest used. Therefore, Automatically resize in some cases did not work. Now, automatic guest resizing will only be enabled when the required conditions are met. BZ# 1004051 Due to rounding errors in the client display size calculation, zooming in or out on a window in virt-viewer or remote-viewer sometimes incorrectly resized the guest display. With this update, the errors have been fixed and zooming now correctly causes the guest display to be scaled up or down rather than resized. Users of virt-viewer are advised to upgrade to these updated packages, which fix these bugs.
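To make the options discussed in these fixes concrete, the following is a hypothetical invocation showing the hotkeys and fullscreen behaviour mentioned above; the guest name and key bindings are examples, and a modifier-only binding such as ctrl+alt would be rejected as described.

# Connect to an example libvirt guest with custom, non-modifier-only hotkeys
virt-viewer --connect qemu:///system my-guest \
  --hotkeys=toggle-fullscreen=shift+f11,release-cursor=shift+f12

# Open a console file downloaded from the User Portal in fullscreen mode
remote-viewer --full-screen console.vv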
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/virt-viewer
Chapter 70. ClientTls schema reference
Chapter 70. ClientTls schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of ClientTls schema properties Configures TLS trusted certificates for connecting KafkaConnect, KafkaBridge, KafkaMirrorMaker, and KafkaMirrorMaker2 to the cluster. 70.1. trustedCertificates Provide a list of secrets using the trustedCertificates property . 70.2. ClientTls schema properties Property Description trustedCertificates Trusted certificates for TLS connection. CertSecretSource array
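As an illustrative sketch only (the secret and file names here are assumptions, not part of the schema reference), the trusted certificate is typically stored in an OpenShift secret and then referenced from the component's tls configuration:

# Store the cluster CA certificate in a Secret (example names)
oc create secret generic my-cluster-ca-cert --from-file=ca.crt=ca.crt

# Reference the Secret from the component spec, for example in a KafkaConnect resource:
#   spec:
#     tls:
#       trustedCertificates:
#         - secretName: my-cluster-ca-cert
#           certificate: ca.crt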
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-clienttls-reference
4.228. pidgin
4.228. pidgin 4.228.1. RHSA-2011:1821 - Moderate: pidgin security update Updated pidgin packages that fix multiple security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Pidgin is an instant messaging program which can log in to multiple accounts on multiple instant messaging networks simultaneously. Security Fixes CVE-2011-4601 An input sanitization flaw was found in the way the AOL Open System for Communication in Realtime (OSCAR) protocol plug-in in Pidgin, used by the AOL ICQ and AIM instant messaging systems, escaped certain UTF-8 characters. A remote attacker could use this flaw to crash Pidgin via a specially-crafted OSCAR message. CVE-2011-4602 Multiple NULL pointer dereference flaws were found in the Jingle extension of the Extensible Messaging and Presence Protocol (XMPP) protocol plug-in in Pidgin. A remote attacker could use these flaws to crash Pidgin via a specially-crafted Jingle multimedia message. Red Hat would like to thank the Pidgin project for reporting these issues. Upstream acknowledges Evgeny Boger as the original reporter of CVE-2011-4601 , and Thijs Alkemade as the original reporter of CVE-2011-4602 . All Pidgin users should upgrade to these updated packages, which contain backported patches to resolve these issues. Pidgin must be restarted for this update to take effect.
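For reference, a typical way to apply the update on Red Hat Enterprise Linux 6 and confirm the installed version is shown below; the exact package version you receive depends on your subscribed channels.

# Apply the updated packages and check the installed version
yum update pidgin
rpm -q pidgin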
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/pidgin
Troubleshooting issues
Troubleshooting issues Red Hat OpenShift GitOps 1.15 Troubleshooting topics for OpenShift GitOps and your cluster Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/troubleshooting_issues/index
Chapter 5. Setting up client access to the Kafka cluster
Chapter 5. Setting up client access to the Kafka cluster After you have deployed AMQ Streams , the procedures in this section explain how to: Deploy example producer and consumer clients, which you can use to verify your deployment Set up external client access to the Kafka cluster The steps to set up access to the Kafka cluster for a client outside OpenShift are more complex, and require familiarity with the Kafka component configuration procedures described in the Using AMQ Streams on OpenShift guide. 5.1. Deploying example clients This procedure shows how to deploy example producer and consumer clients that use the Kafka cluster you created to send and receive messages. Prerequisites The Kafka cluster is available for the clients. Procedure Deploy a Kafka producer. oc run kafka-producer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list cluster-name -kafka-bootstrap:9092 --topic my-topic Type a message into the console where the producer is running. Press Enter to send the message. Deploy a Kafka consumer. oc run kafka-consumer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server cluster-name -kafka-bootstrap:9092 --topic my-topic --from-beginning Confirm that you see the incoming messages in the consumer console. 5.2. Setting up access for clients outside of OpenShift This procedure shows how to configure client access to a Kafka cluster from outside OpenShift. Using the address of the Kafka cluster, you can provide external access to a client on a different OpenShift namespace or outside OpenShift entirely. You configure an external Kafka listener to provide the access. The following external listener types are supported: route to use OpenShift Route and the default HAProxy router loadbalancer to use loadbalancer services nodeport to use ports on OpenShift nodes ingress to use OpenShift Ingress and the NGINX Ingress Controller for Kubernetes The type chosen depends on your requirements, and your environment and infrastructure. For example, loadbalancers might not be suitable for certain infrastructure, such as bare metal, where node ports provide a better option. In this procedure: An external listener is configured for the Kafka cluster, with TLS encryption and authentication, and Kafka simple authorization is enabled. A KafkaUser is created for the client, with TLS authentication and Access Control Lists (ACLs) defined for simple authorization . You can configure your listener to use TLS or SCRAM-SHA-512 authentication, both of which can be used with TLS encryption. If you are using an authorization server, you can use token-based OAuth 2.0 authentication and OAuth 2.0 authorization . Open Policy Agent (OPA) authorization is also supported as a Kafka authorization option. When you configure the KafkaUser authentication and authorization mechanisms, ensure they match the equivalent Kafka configuration: KafkaUser.spec.authentication matches Kafka.spec.kafka.listeners[*].authentication KafkaUser.spec.authorization matches Kafka.spec.kafka.authorization You should have at least one listener supporting the authentication you want to use for the KafkaUser . Note Authentication between Kafka users and Kafka brokers depends on the authentication settings for each. For example, it is not possible to authenticate a user with TLS if it is not also enabled in the Kafka configuration. 
AMQ Streams operators automate the configuration process: The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication within the Kafka cluster. The User Operator creates the user representing the client and the security credentials used for client authentication, based on the chosen authentication type. In this procedure, the certificates generated by the Cluster Operator are used, but you can replace them by installing your own certificates . You can also configure your listener to use a Kafka listener certificate managed by an external Certificate Authority . Certificates are available in PKCS #12 format (.p12) and PEM (.crt) formats. Prerequisites The Kafka cluster is available for the client The Cluster Operator and User Operator are running in the cluster A client outside the OpenShift cluster to connect to the Kafka cluster Procedure Configure the Kafka cluster with an external Kafka listener. Define the authentication required to access the Kafka broker through the listener Enable authorization on the Kafka broker For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... listeners: 1 - name: external 2 port: 9094 3 type: LISTENER-TYPE 4 tls: true 5 authentication: type: tls 6 configuration: preferredNodePortAddressType: InternalDNS 7 bootstrap and broker service overrides 8 #... authorization: 9 type: simple superUsers: - super-user-name 10 # ... 1 Configuration options for enabling external listeners are described in the Generic Kafka listener schema reference . 2 Name to identify the listener. Must be unique within the Kafka cluster. 3 Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. 4 External listener type specified as route , loadbalancer , nodeport or ingress . An internal listener is specified as internal . 5 Enables TLS encryption on the listener. Default is false . TLS encryption is not required for route listeners. 6 Authentication specified as tls . 7 (Optional, for nodeport listeners only) Configuration to specify a preference for the first address type used by AMQ Streams as the node address . 8 (Optional) AMQ Streams automatically determines the addresses to advertise to clients. The addresses are automatically assigned by OpenShift. You can override bootstrap and broker service addresses if the infrastructure on which you are running AMQ Streams does not provide the right address. Validation is not performed on the overrides. The override configuration differs according to the listener type. For example, you can override hosts for route , DNS names or IP addresses for loadbalancer , and node ports for nodeport . 9 Authoization specified as simple , which uses the AclAuthorizer Kafka plugin. 10 (Optional) Super users can access all brokers regardless of any access restrictions defined in ACLs. Warning An OpenShift Route address comprises the name of the Kafka cluster, the name of the listener, and the name of the namespace it is created in. For example, my-cluster-kafka-listener1-bootstrap-myproject ( CLUSTER-NAME -kafka- LISTENER-NAME -bootstrap- NAMESPACE ). 
If you are using a route listener type, be careful that the whole length of the address does not exceed a maximum limit of 63 characters. Create or update the Kafka resource. oc apply -f KAFKA-CONFIG-FILE The Kafka cluster is configured with a Kafka broker listener using TLS authentication. A service is created for each Kafka broker pod. A service is created to serve as the bootstrap address for connection to the Kafka cluster. A service is also created as the external bootstrap address for external connection to the Kafka cluster using nodeport listeners. The cluster CA certificate to verify the identity of the kafka brokers is also created with the same name as the Kafka resource. Find the bootstrap address and port from the status of the Kafka resource. oc get kafka KAFKA-CLUSTER-NAME -o jsonpath='{.status.listeners[?(@.type=="external")].bootstrapServers}' Use the bootstrap address in your Kafka client to connect to the Kafka cluster. Extract the public cluster CA certificate and password from the generated KAFKA-CLUSTER-NAME -cluster-ca-cert Secret. oc get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\.p12}' | base64 -d > ca.p12 oc get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\.password}' | base64 -d > ca.password Use the certificate and password in your Kafka client to connect to the Kafka cluster with TLS encryption. Note Cluster CA certificates renew automatically by default. If you are using your own Kafka listener certificates, you will need to renew the certificates manually . Create or modify a user representing the client that requires access to the Kafka cluster. Specify the same authentication type as the Kafka listener. Specify the authorization ACLs for simple authorization. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster 1 spec: authentication: type: tls 2 authorization: type: simple acls: 3 - resource: type: topic name: my-topic patternType: literal operation: Read - resource: type: topic name: my-topic patternType: literal operation: Describe - resource: type: group name: my-group patternType: literal operation: Read 1 The label must match the label of the Kafka cluster for the user to be created. 2 Authentication specified as tls . 3 Simple authorization requires an accompanying list of ACL rules to apply to the user. The rules define the operations allowed on Kafka resources based on the username ( my-user ). Create or modify the KafkaUser resource. oc apply -f USER-CONFIG-FILE The user is created, as well as a Secret with the same name as the KafkaUser resource. The Secret contains a private and public key for TLS client authentication. For example: apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: PUBLIC-KEY-OF-THE-CLIENT-CA user.crt: USER-CERTIFICATE-CONTAINING-PUBLIC-KEY-OF-USER user.key: PRIVATE-KEY-OF-USER user.p12: P12-ARCHIVE-FILE-STORING-CERTIFICATES-AND-KEYS user.password: PASSWORD-PROTECTING-P12-ARCHIVE Configure your client to connect to the Kafka cluster with the properties required to make a secure connection to the Kafka cluster. Add the authentication details for the public cluster certificates: security.protocol: SSL 1 ssl.truststore.location: PATH-TO/ssl/keys/truststore 2 ssl.truststore.password: CLUSTER-CA-CERT-PASSWORD 3 ssl.truststore.type=PKCS12 4 1 Enables TLS encryption (with or without TLS client authentication). 
2 Specifies the truststore location where the certificates were imported. 3 Specifies the password for accessing the truststore. This property can be omitted if it is not needed by the truststore. 4 Identifies the truststore type. Note Use security.protocol: SASL_SSL when using SCRAM-SHA authentication over TLS. Add the bootstrap address and port for connecting to the Kafka cluster: bootstrap.servers: BOOTSTRAP-ADDRESS:PORT Add the authentication details for the public user certificates: ssl.keystore.location: PATH-TO/ssl/keys/user1.keystore 1 ssl.keystore.password: USER-CERT-PASSWORD 2 1 Specifies the keystore location where the certificates were imported. 2 Specifies the password for accessing the keystore. This property can be omitted if it is not needed by the keystore. The public user certificate is signed by the client CA when it is created.
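Putting the pieces above together, the following sketch shows one way an external client could use the extracted PKCS #12 stores with the Kafka console producer; the secret name, file paths, passwords, bootstrap address, and topic name are placeholders, not values produced by the procedure.

# Extract the user keystore and password created by the User Operator (example secret name)
oc get secret my-user -o jsonpath='{.data.user\.p12}' | base64 -d > user.p12
oc get secret my-user -o jsonpath='{.data.user\.password}' | base64 -d > user.password

# Build a client properties file from the cluster and user stores
cat > client-ssl.properties <<EOF
security.protocol=SSL
ssl.truststore.location=/tmp/ca.p12
ssl.truststore.password=<cluster-ca-password>
ssl.truststore.type=PKCS12
ssl.keystore.location=/tmp/user.p12
ssl.keystore.password=<user-password>
ssl.keystore.type=PKCS12
EOF

# Send a test message from outside OpenShift
bin/kafka-console-producer.sh \
  --broker-list <bootstrap-address>:<port> \
  --producer.config client-ssl.properties \
  --topic my-topic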
[ "run kafka-producer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list cluster-name -kafka-bootstrap:9092 --topic my-topic", "run kafka-consumer -ti --image=registry.redhat.io/amq7/amq-streams-kafka-26-rhel7:1.6.7 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server cluster-name -kafka-bootstrap:9092 --topic my-topic --from-beginning", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # listeners: 1 - name: external 2 port: 9094 3 type: LISTENER-TYPE 4 tls: true 5 authentication: type: tls 6 configuration: preferredNodePortAddressType: InternalDNS 7 bootstrap and broker service overrides 8 # authorization: 9 type: simple superUsers: - super-user-name 10 #", "apply -f KAFKA-CONFIG-FILE", "get kafka KAFKA-CLUSTER-NAME -o jsonpath='{.status.listeners[?(@.type==\"external\")].bootstrapServers}'", "get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\\.p12}' | base64 -d > ca.p12", "get secret KAFKA-CLUSTER-NAME -cluster-ca-cert -o jsonpath='{.data.ca\\.password}' | base64 -d > ca.password", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster 1 spec: authentication: type: tls 2 authorization: type: simple acls: 3 - resource: type: topic name: my-topic patternType: literal operation: Read - resource: type: topic name: my-topic patternType: literal operation: Describe - resource: type: group name: my-group patternType: literal operation: Read", "apply -f USER-CONFIG-FILE", "apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: PUBLIC-KEY-OF-THE-CLIENT-CA user.crt: USER-CERTIFICATE-CONTAINING-PUBLIC-KEY-OF-USER user.key: PRIVATE-KEY-OF-USER user.p12: P12-ARCHIVE-FILE-STORING-CERTIFICATES-AND-KEYS user.password: PASSWORD-PROTECTING-P12-ARCHIVE", "security.protocol: SSL 1 ssl.truststore.location: PATH-TO/ssl/keys/truststore 2 ssl.truststore.password: CLUSTER-CA-CERT-PASSWORD 3 ssl.truststore.type=PKCS12 4", "bootstrap.servers: BOOTSTRAP-ADDRESS:PORT", "ssl.keystore.location: PATH-TO/ssl/keys/user1.keystore 1 ssl.keystore.password: USER-CERT-PASSWORD 2" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_and_upgrading_amq_streams_on_openshift/deploy-verify_str
Chapter 2. FIPS support
Chapter 2. FIPS support Federal Information Processing Standards (FIPS) are standards for computer security and interoperability. To use FIPS with Streams for Apache Kafka, you must have a FIPS-compliant OpenJDK (Open Java Development Kit) installed on your system. If your RHEL system is FIPS-enabled, OpenJDK automatically switches to FIPS mode when running Streams for Apache Kafka. This ensures that Streams for Apache Kafka uses the FIPS-compliant security libraries provided by OpenJDK. Minimum password length When running in the FIPS mode, SCRAM-SHA-512 passwords need to be at least 32 characters long. If you have a Kafka cluster with custom configuration that uses a password length that is less than 32 characters, you need to update your configuration. If you have any users with passwords shorter than 32 characters, you need to regenerate a password with the required length. Additional resources What are Federal Information Processing Standards (FIPS) 2.1. Installing Streams for Apache Kafka with FIPS mode enabled Enable FIPS mode before you install Streams for Apache Kafka on RHEL. Red Hat recommends installing RHEL with FIPS mode enabled, as opposed to enabling FIPS mode later. Enabling FIPS mode during the installation ensures that the system generates all keys with FIPS-approved algorithms and continuous monitoring tests in place. With RHEL running in FIPS mode, you must ensure that the Streams for Apache Kafka configuration is FIPS-compliant. Additionally, your Java implementation must also be FIPS-compliant. Note Running Streams for Apache Kafka on RHEL in FIPS mode requires a FIPS-compliant JDK. Procedure Install RHEL in FIPS mode. For further information, see the information on security hardening in the RHEL documentation . Proceed with the installation of Streams for Apache Kafka. Configure Streams for Apache Kafka to use FIPS-compliant algorithms and protocols. If used, ensure that the following configuration is compliant: SSL cipher suites and TLS versions must be supported by the JDK framework. SCRAM-SHA-512 passwords must be at least 32 characters long. Important Make sure that your installation environment and Streams for Apache Kafka configuration remains compliant as FIPS requirements change.
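The checks below are a minimal sketch for confirming that a RHEL host is actually running in FIPS mode before installing Streams for Apache Kafka; they are general RHEL commands and are not specific to this product.

# Verify that FIPS mode is enabled on the host
fips-mode-setup --check
cat /proc/sys/crypto/fips_enabled   # prints 1 when FIPS mode is on

# Confirm the active system-wide crypto policy
update-crypto-policies --show       # expected to report FIPS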
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-fips-support-str
Recovering a Metro-DR stretch cluster
Recovering a Metro-DR stretch cluster Red Hat OpenShift Data Foundation 4.9 Instructions on how to recover applications and their storage from a metro disaster in Red Hat OpenShift Data Foundation. Red Hat Storage Documentation Team Abstract This document explains how to recover from a metro disaster in Red Hat OpenShift Data Foundation. Important Recovering a Metro-DR stretch cluster is a technology preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/recovering_a_metro-dr_stretch_cluster/index
Chapter 4. View OpenShift Data Foundation Topology
Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status, health, or alert indications. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the main view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
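If you prefer to cross-check the topology view from the command line, the following commands are a rough, optional equivalent for seeing which nodes host the storage pods; the openshift-storage namespace and the node label shown are the usual defaults, but your deployment may differ.

# List the storage nodes (typically labeled for OpenShift Data Foundation)
oc get nodes -l cluster.ocs.openshift.io/openshift-storage

# See which storage pods run on each node
oc get pods -n openshift-storage -o wide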
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_on_any_platform/viewing-odf-topology_mcg-verify
Chapter 145. HDFS2 Component
Chapter 145. HDFS2 Component Available as of Camel version 2.14 The hdfs2 component enables you to read and write messages from/to an HDFS file system using Hadoop 2.x. HDFS is the distributed file system at the heart of Hadoop . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hdfs2</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 145.1. URI format hdfs2://hostname[:port][/path][?options] You can append query options to the URI in the following format, ?option=value&option=value&... The path is treated in the following way: as a consumer, if it's a file, it just reads the file, otherwise if it represents a directory it scans all the file under the path satisfying the configured pattern. All the files under that directory must be of the same type. as a producer, if at least one split strategy is defined, the path is considered a directory and under that directory the producer creates a different file per split named using the configured UuidGenerator. When consuming from hdfs2 then in normal mode, a file is split into chunks, producing a message per chunk. You can configure the size of the chunk using the chunkSize option. If you want to read from hdfs and write to a regular file using the file component, then you can use the fileMode=Append to append each of the chunks together. 145.2. Options The HDFS2 component supports 2 options, which are listed below. Name Description Default Type jAASConfiguration (common) To use the given configuration for security with JAAS. Configuration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The HDFS2 endpoint is configured using URI syntax: with the following path and query parameters: 145.2.1. Path Parameters (3 parameters): Name Description Default Type hostName Required HDFS host to use String port HDFS port to use 8020 int path Required The directory path to use String 145.2.2. Query Parameters (38 parameters): Name Description Default Type connectOnStartup (common) Whether to connect to the HDFS file system on starting the producer/consumer. If false then the connection is created on-demand. Notice that HDFS may take up till 15 minutes to establish a connection, as it has hardcoded 45 x 20 sec redelivery. By setting this option to false allows your application to startup, and not block for up till 15 minutes. true boolean fileSystemType (common) Set to LOCAL to not use HDFS but local java.io.File instead. HDFS HdfsFileSystemType fileType (common) The file type to use. For more details see Hadoop HDFS documentation about the various files types. NORMAL_FILE HdfsFileType keyType (common) The type for the key in case of sequence or map files. NULL WritableType owner (common) The file owner must match this owner for the consumer to pickup the file. Otherwise the file is skipped. String valueType (common) The type for the key in case of sequence or map files BYTES WritableType bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean pattern (consumer) The pattern used for scanning the directory * String sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern pollStrategy (consumer) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPoll Strategy append (producer) Append to existing file. Notice that not all HDFS file systems support the append option. false boolean overwrite (producer) Whether to overwrite existing files with the same name true boolean blockSize (advanced) The size of the HDFS blocks 67108864 long bufferSize (advanced) The buffer size used by HDFS 4096 int checkIdleInterval (advanced) How often (time in millis) in to run the idle checker background task. This option is only in use if the splitter strategy is IDLE. 500 int chunkSize (advanced) When reading a normal file, this is split into chunks producing a message per chunk. 4096 int compressionCodec (advanced) The compression codec to use DEFAULT HdfsCompressionCodec compressionType (advanced) The compression type to use (is default not in use) NONE CompressionType openedSuffix (advanced) When a file is opened for reading/writing the file is renamed with this suffix to avoid to read it during the writing phase. opened String readSuffix (advanced) Once the file has been read is renamed with this suffix to avoid to read it again. read String replication (advanced) The HDFS replication factor 3 short splitStrategy (advanced) In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So, for the moment, it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met, a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:value,ST:value,... where ST can be: BYTES a new file is created, and the old is closed when the number of written bytes is more than value MESSAGES a new file is created, and the old is closed when the number of written messages is more than value IDLE a new file is created, and the old is closed when no writing happened in the last value milliseconds String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. 
int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. You can also specify time values using units, such as 60s (60 seconds), 5m30s (5 minutes and 30 seconds), and 1h (1 hour). 1000 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutor Service scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz2 component none ScheduledPollConsumer Scheduler schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz2, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 145.3. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.hdfs2.enabled Enable hdfs2 component true Boolean camel.component.hdfs2.j-a-a-s-configuration To use the given configuration for security with JAAS. The option is a javax.security.auth.login.Configuration type. String camel.component.hdfs2.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 145.3.1. KeyType and ValueType NULL it means that the key or the value is absent BYTE for writing a byte, the java Byte class is mapped into a BYTE BYTES for writing a sequence of bytes. It maps the java ByteBuffer class INT for writing java integer FLOAT for writing java float LONG for writing java long DOUBLE for writing java double TEXT for writing java strings BYTES is also used with everything else, for example, in Camel a file is sent around as an InputStream, int this case is written in a sequence file or a map file as a sequence of bytes. 145.4. Splitting Strategy In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So, for the moment, it's only possible to create new files. 
The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined, the hdfs path will be used as a directory and files will be created using the configured UuidGenerator Every time a splitting condition is met, a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=<ST>:<value>,<ST>:<value>,* where <ST> can be: BYTES a new file is created, and the old is closed when the number of written bytes is more than <value> MESSAGES a new file is created, and the old is closed when the number of written messages is more than <value> IDLE a new file is created, and the old is closed when no writing happened in the last <value> milliseconds note that this strategy currently requires either setting an IDLE value or setting the HdfsConstants.HDFS_CLOSE header to false to use the BYTES/MESSAGES configuration... otherwise, the file will be closed with each message for example: hdfs2://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5 it means: a new file is created either when it has been idle for more than 1 second or if more than 5 bytes have been written. So, running hadoop fs -ls /tmp/simple-file you'll see that multiple files have been created. 145.5. Message Headers The following headers are supported by this component: 145.5.1. Producer only Header Description CamelFileName Camel 2.13: Specifies the name of the file to write (relative to the endpoint path). The name can be a String or an Expression object. Only relevant when not using a split strategy. 145.6. Controlling to close file stream When using the HDFS2 producer without a split strategy, then the file output stream is by default closed after the write. However you may want to keep the stream open, and only explicitly close the stream later. For that you can use the header HdfsConstants.HDFS_CLOSE (value = "CamelHdfsClose" ) to control this. Setting this value to a boolean allows you to explicit control whether the stream should be closed or not. Notice this does not apply if you use a split strategy, as there are various strategies that can control when the stream is closed. 145.7. Using this component in OSGi There are some quirks when running this component in an OSGi environment related to the mechanism Hadoop 2.x uses to discover different org.apache.hadoop.fs.FileSystem implementations. Hadoop 2.x uses java.util.ServiceLoader which looks for /META-INF/services/org.apache.hadoop.fs.FileSystem files defining available filesystem types and implementations. These resources are not available when running inside OSGi. As with camel-hdfs component, the default configuration files need to be visible from the bundle class loader. A typical way to deal with it is to keep a copy of core-default.xml (and e.g., hdfs-default.xml ) in your bundle root. 145.7.1. Using this component with manually defined routes There are two options: Package /META-INF/services/org.apache.hadoop.fs.FileSystem resource with bundle that defines the routes. This resource should list all the required Hadoop 2.x filesystem implementations. Provide boilerplate initialization code which populates internal, static cache inside org.apache.hadoop.fs.FileSystem class: org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration(); conf.setClass("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class, FileSystem.class); conf.setClass("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class, FileSystem.class); ... 
FileSystem.get("file:///", conf); FileSystem.get("hdfs://localhost:9000/", conf); ... 145.7.2. Using this component with Blueprint container Two options: Package /META-INF/services/org.apache.hadoop.fs.FileSystem resource with bundle that contains blueprint definition. Add the following to the blueprint definition file: <bean id="hdfsOsgiHelper" class="org.apache.camel.component.hdfs2.HdfsOsgiHelper"> <argument> <map> <entry key="file:///" value="org.apache.hadoop.fs.LocalFileSystem" /> <entry key="hdfs://localhost:9000/" value="org.apache.hadoop.hdfs.DistributedFileSystem" /> ... </map> </argument> </bean> <bean id="hdfs2" class="org.apache.camel.component.hdfs2.HdfsComponent" depends-on="hdfsOsgiHelper" /> This way Hadoop 2.x will have correct mapping of URI schemes to filesystem implementations.
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-hdfs2</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "hdfs2://hostname[:port][/path][?options]", "hdfs2:hostName:port/path", "hdfs2://localhost/tmp/simple-file?splitStrategy=IDLE:1000,BYTES:5", "org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration(); conf.setClass(\"fs.file.impl\", org.apache.hadoop.fs.LocalFileSystem.class, FileSystem.class); conf.setClass(\"fs.hdfs.impl\", org.apache.hadoop.hdfs.DistributedFileSystem.class, FileSystem.class); FileSystem.get(\"file:///\", conf); FileSystem.get(\"hdfs://localhost:9000/\", conf);", "<bean id=\"hdfsOsgiHelper\" class=\"org.apache.camel.component.hdfs2.HdfsOsgiHelper\"> <argument> <map> <entry key=\"file:///\" value=\"org.apache.hadoop.fs.LocalFileSystem\" /> <entry key=\"hdfs://localhost:9000/\" value=\"org.apache.hadoop.hdfs.DistributedFileSystem\" /> </map> </argument> </bean> <bean id=\"hdfs2\" class=\"org.apache.camel.component.hdfs2.HdfsComponent\" depends-on=\"hdfsOsgiHelper\" />" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/hdfs2-component
2.8.9.2. Command Options for IPTables
2.8.9.2. Command Options for IPTables Rules for filtering packets are created using the iptables command. The following aspects of the packet are most often used as criteria: Packet Type - Specifies the type of packets the command filters. Packet Source/Destination - Specifies which packets the command filters based on the source or destination of the packet. Target - Specifies what action is taken on packets matching the above criteria. Refer to Section 2.8.9.2.4, "IPTables Match Options" and Section 2.8.9.2.5, "Target Options" for more information about specific options that address these aspects of a packet. The options used with specific iptables rules must be grouped logically, based on the purpose and conditions of the overall rule, for the rule to be valid. The remainder of this section explains commonly-used options for the iptables command. 2.8.9.2.1. Structure of IPTables Command Options Many iptables commands have the following structure: <table-name> - Specifies which table the rule applies to. If omitted, the filter table is used. <command> - Specifies the action to perform, such as appending or deleting a rule. <chain-name> - Specifies the chain to edit, create, or delete. <parameter>-<option> pairs - Parameters and associated options that specify how to process a packet that matches the rule. The length and complexity of an iptables command can change significantly, based on its purpose. For example, a command to remove a rule from a chain can be very short: iptables -D <chain-name> <line-number> In contrast, a command that adds a rule which filters packets from a particular subnet using a variety of specific parameters and options can be rather long. When constructing iptables commands, it is important to remember that some parameters and options require further parameters and options to construct a valid rule. This can produce a cascading effect, with the further parameters requiring yet more parameters. Until every parameter and option that requires another set of options is satisfied, the rule is not valid. Type iptables -h to view a comprehensive list of iptables command structures.
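For example, a rule that follows this structure and accepts inbound SSH traffic from a single subnet could look like the following; the subnet and port values are illustrative assumptions only:
iptables -t filter -A INPUT -s 192.168.1.0/24 -p tcp --dport 22 -j ACCEPT
In this rule, filter is the <table-name>, -A is the <command>, INPUT is the <chain-name>, and the -s, -p, and --dport parameters with their options define the match criteria that must be satisfied before the -j ACCEPT target is applied.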
[ "iptables [ -t <table-name> ] <command> <chain-name> <parameter-1> <option-1> <parameter-n> <option-n>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-iptables-command_options_for_iptables
Chapter 4. Adding user preferences
Chapter 4. Adding user preferences You can change the default preferences for your profile to meet your requirements. You can set your default project, topology view (graph or list), editing medium (form or YAML), language preferences, and resource type. The changes made to the user preferences are automatically saved. 4.1. Setting user preferences You can set the default user preferences for your cluster. Procedure Log in to the OpenShift Container Platform web console using your login credentials. Use the masthead to access the user preferences under the user profile. In the General section: In the Theme field, you can set the theme that you want to work in. The console defaults to the selected theme each time you log in. In the Perspective field, you can set the default perspective you want to be logged in to. You can select the Administrator or the Developer perspective as required. If a perspective is not selected, you are logged into the perspective you last visited. In the Project field, select a project you want to work in. The console defaults to the project every time you log in. In the Topology field, you can set the topology view to default to the graph or list view. If not selected, the console defaults to the last view you used. In the Create/Edit resource method field, you can set a preference for creating or editing a resource. If both the form and YAML options are available, the console defaults to your selection. In the Language section, select Default browser language to use the default browser language settings. Otherwise, select the language that you want to use for the console. In the Notifications section, you can toggle display notifications created by users for specific projects on the Overview page or notification drawer. In the Applications section: You can view the default Resource type . For example, if the OpenShift Serverless Operator is installed, the default resource type is Serverless Deployment . Otherwise, the default resource type is Deployment . You can select another resource type to be the default resource type from the Resource Type field.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/web_console/adding-user-preferences
Chapter 11. Upgrading the Compute node operating system
Chapter 11. Upgrading the Compute node operating system You can upgrade the operating system on all of your Compute nodes to RHEL 9.2, or upgrade some Compute nodes while the rest remain on RHEL 8.4. Important If your deployment includes hyperconverged infrastructure (HCI) nodes, you must upgrade all HCI nodes to RHEL 9. For more information about upgrading to RHEL 9, see Upgrading Compute nodes to RHEL 9.2 . For information about the duration and impact of this upgrade procedure, see Upgrade duration and impact . Prerequisites Review Planning for a Compute node upgrade . 11.1. Selecting Compute nodes for upgrade testing The overcloud upgrade process allows you to either: Upgrade all nodes in a role. Upgrade individual nodes separately. To ensure a smooth overcloud upgrade process, it is useful to test the upgrade on a few individual Compute nodes in your environment before upgrading all Compute nodes. This ensures no major issues occur during the upgrade while maintaining minimal downtime to your workloads. Use the following recommendations to help choose test nodes for the upgrade: Select two or three Compute nodes for upgrade testing. Select nodes without any critical instances running. If necessary, migrate critical instances from the selected test Compute nodes to other Compute nodes. Review which migration scenarios are supported: Source Compute node RHEL version Destination Compute node RHEL version Supported/Not supported RHEL 8 RHEL 8 Supported RHEL 8 RHEL 9 Supported RHEL 9 RHEL 9 Supported RHEL 9 RHEL 8 Not supported 11.2. Upgrading all Compute nodes to RHEL 9.2 Upgrade all your Compute nodes to RHEL 9.2 to take advantage of the latest features and to reduce downtime. Prerequisites If your deployment includes hyper-converged infrastructure (HCI) nodes, place hosts in maintenance mode to prepare the Red Hat Ceph Storage cluster on each HCI node for reboot. For more information, see Placing hosts in the maintenance mode using the Ceph Orchestrator in The Ceph Operations Guide . Important If you are using RHOSP version 17.1.3 or earlier, before you run the system upgrade, ensure that no guests are running on the Compute hosts. Any guests that are running go into an error state. To avoid this issue, either live migrate your workloads or shut them down. For more information about live migration, see Live migrating an instance in Configuring the Compute service for instance creation . Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: In the container-image-prepare.yaml file, ensure that only the tags specified in the ContainerImagePrepare parameter are included, and the MultiRhelRoleContainerImagePrepare parameter is removed. For example: In the roles_data.yaml file, replace the OS::TripleO::Services::NovaLibvirtLegacy service with the OS::TripleO::Services::NovaLibvirt service that is required for RHEL 9.2. Include the -e system_upgrade.yaml argument and the other required -e environment file arguments in the overcloud_upgrade_prepare.sh script as shown in the following example: Run the overcloud_upgrade_prepare.sh script. Upgrade the operating system on the Compute nodes to RHEL 9.2. Use the --limit option with a comma-separated list of nodes that you want to upgrade. The following example upgrades the compute-0 , compute-1 , and compute-2 nodes. Replace <stack> with the name of your stack. Upgrade the containers on the Compute nodes to RHEL 9.2. 
Use the --limit option with a comma-separated list of nodes that you want to upgrade. The following example upgrades the compute-0 , compute-1 , and compute-2 nodes. 11.3. Upgrading Compute nodes to a Multi-RHEL environment You can upgrade a portion of your Compute nodes to RHEL 9.2 while the rest of your Compute nodes remain on RHEL 8.4. This upgrade process involves the following fundamental steps: Plan which nodes you want to upgrade to RHEL 9.2, and which nodes you want to remain on RHEL 8.4. Choose a role name for each role that you are creating for each batch of nodes, for example, ComputeRHEL-9.2 and ComputeRHEL-8.4 . Create roles that store the nodes that you want to upgrade to RHEL 9.2, or the nodes that you want to stay on RHEL 8.4. These roles can remain empty until you are ready to move your Compute nodes to a new role. You can create as many roles as you need and divide nodes among them any way you decide. For example: If your environment uses a role called ComputeSRIOV and you need to run a canary test to upgrade to RHEL 9.2, you can create a new ComputeSRIOVRHEL9 role and move the canary node to the new role. If your environment uses a role called ComputeOffload and you want to upgrade most nodes in that role to RHEL 9.2, but keep a few nodes on RHEL 8.4, you can create a new ComputeOffloadRHEL8 role to store the RHEL 8.4 nodes. You can then select the nodes in the original ComputeOffload role to upgrade to RHEL 9.2. Move the nodes from each Compute role to the new role. Upgrade the operating system on specific Compute nodes to RHEL 9.2. You can upgrade nodes in batches from the same role or multiple roles. Note In a Multi-RHEL environment, the deployment should continue to use the pc-i440fx machine type. Do not update the default to Q35. Migrating to the Q35 machine type is a separate, post-upgrade procedure to follow after all Compute nodes are upgraded to RHEL 9.2. For more information about migrating the Q35 machine type, see Updating the default machine type for hosts after an upgrade to RHOSP 17 . Use the following procedures to upgrade Compute nodes to a Multi-RHEL environment: Creating roles for Multi-RHEL Compute nodes Upgrading the Compute node operating system 11.3.1. Creating roles for Multi-RHEL Compute nodes Create new roles to store the nodes that you are upgrading to RHEL 9.2 or that are staying on RHEL 8.4, and move the nodes into the new roles. Procedure Create the relevant roles for your environment. In the role_data.yaml file, copy the source Compute role to use for the new role. Repeat this step for each additional role required. Roles can remain empty until you are ready to move your Compute nodes to the new roles. If you are creating a RHEL 8 role: Note Roles that contain nodes remaining on RHEL 8.4 must include the NovaLibvirtLegacy service. Replace <ComputeRHEL8> with the name of your RHEL 8.4 role. If you are creating a RHEL 9 role: Note Roles that contain nodes being upgraded to RHEL 9.2 must include the NovaLibvirt service. Replace OS::TripleO::Services::NovaLibvirtLegacy with OS::TripleO::Services::NovaLibvirt . Replace <ComputeRHEL9> with the name of your RHEL 9.2 role. Copy the overcloud_upgrade_prepare.sh file to the copy_role_Compute_param.sh file: Edit the copy_role_Compute_param.sh file to include the copy_role_params.py script. This script generates the environment file that contains the additional parameters and resources for the new role. 
For example: Replace <Compute_source_role> with the name of your source Compute role that you are copying. Replace <Compute_destination_role> with the name of your new role. Use the -o option to define the name of the output file that includes all the non-default values of the source Compute role for the new role. Replace <Compute_new_role_params.yaml> with the name of your output file. Run the copy_role_Compute_param.sh script: Move the Compute nodes from the source role to the new role: Note This tool includes the original /home/stack/tripleo-<stack>-baremetal-deployment.yaml file that you exported during the undercloud upgrade. The tool copies and renames the source role definition in the /home/stack/tripleo-<stack>-baremetal-deployment.yaml file. Then, it changes the hostname_format to prevent a conflict with the newly created destination role. The tool then moves the node from the source role to the destination role and changes the count values. Replace <stack> with the name of your stack. Replace <Compute_source_role> with the name of the source Compute role that contains the nodes that you are moving to your new role. Replace <Compute_destination_role> with the name of your new role. Replace <Compute-0> <Compute-1> <Compute-2> with the names of the nodes that you are moving to your new role. Reprovision the nodes to update the environment files in the stack with the new role location: Note The output baremetal-deployment.yaml file is the same file that is used in the overcloud_upgrade_prepare.sh file during overcloud adoption. Include any Compute roles that are remaining on RHEL 8.4 in the COMPUTE_ROLES parameter, and run the following script. For example, if you have a role called ComputeRHEL8 that contains the nodes that are remaining on RHEL 8.4, COMPUTE_ROLES = --role ComputeRHEL8 . Repeat this procedure to create additional roles and to move additional Compute nodes to those new roles. 11.3.2. Upgrading the Compute node operating system Upgrade the operating system on selected Compute nodes to RHEL 9.2. You can upgrade multiple nodes from different roles at the same time. Prerequisites Ensure that you have created the necessary roles for your environment. For more information about creating roles for a Multi-RHEL environment, see Creating roles for Multi-RHEL Compute nodes . Important If you are using RHOSP version 17.1.3 or earlier, before you run the system upgrade, ensure that no guests are running on the Compute hosts. Any guests that are running go into an error state. To avoid this issue, either live migrate your workloads or shut them down. For more information about live migration, see Live migrating an instance in Configuring the Compute service for instance creation . Procedure In the skip_rhel_release.yaml file, set the SkipRhelEnforcement parameter to false : Include the -e system_upgrade.yaml argument and the other required -e environment file arguments in the overcloud_upgrade_prepare.sh script as shown in the following example: Include the system_upgrade.yaml file with the upgrade-specific parameters (-e). Include the environment file that contains the parameters needed for the new role (-e). Replace <Compute_new_role_params.yaml> with the name of the environment file you created for your new role. If you are upgrading nodes from multiple roles at the same time, include the environment file for each new role that you created. Optional: Migrate your instances. 
For more information on migration strategies, see Migrating virtual machines between Compute nodes and Preparing to migrate . Run the overcloud_upgrade_prepare.sh script. Upgrade the operating system on specific Compute nodes. Use the --limit option with a comma-separated list of nodes that you want to upgrade. The following example upgrades the computerhel9-0 , computerhel9-1 , computerhel9-2 , and computesriov-42 nodes from the ComputeRHEL9 and ComputeSRIOV roles. Replace <stack> with the name of your stack. Upgrade the containers on the Compute nodes to RHEL 9.2. Use the --limit option with a comma-separated list of nodes that you want to upgrade. The following example upgrades the computerhel9-0 , computerhel9-1 , computerhel9-2 , and computesriov-42 nodes from the ComputeRHEL9 and ComputeSRIOV roles.
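Before running the system upgrade on a given node, you may want to confirm that no guests are running on it, as required for RHOSP 17.1.3 and earlier. The following check is only a sketch; the node name is an assumption taken from the example above, and you should confirm the exact filter options for your release with openstack server list --help:
openstack server list --all-projects --host computerhel9-0 --status ACTIVE
If the command returns any instances, live migrate or shut them down before running the openstack overcloud upgrade run command with the system_upgrade tag.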
[ "source ~/stackrc", "parameter_defaults: ContainerImagePrepare: - tag_from_label: \"{version}-{release}\" set: namespace: name_prefix: name_suffix: tag: rhel_containers: false neutron_driver: ovn ceph_namespace: ceph_image: ceph_tag:", "openstack overcloud upgrade prepare --yes ... -e /home/stack/system_upgrade.yaml ...", "openstack overcloud upgrade run --yes --tags system_upgrade --stack <stack> --limit compute-0,compute-1,compute-2", "openstack overcloud upgrade run --yes --stack <stack> --limit compute-0,compute-1,compute-2", "name: <ComputeRHEL8> description: | Basic Compute Node role CountDefault: 1 rhsm_enforce_multios: 8.4 ServicesDefault: - OS::TripleO::Services::NovaLibvirtLegacy", "name: <ComputeRHEL9> description: | Basic Compute Node role CountDefault: 1 ServicesDefault: - OS::TripleO::Services::NovaLibvirt", "cp overcloud_upgrade_prepare.sh copy_role_Compute_param.sh", "/usr/share/openstack-tripleo-heat-templates/tools/copy_role_params.py --rolename-src <Compute_source_role> --rolename-dst <Compute_destination_role> -o <Compute_new_role_params.yaml> -e /home/stack/templates/internal.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml -e /home/stack/templates/network/network-environment.yaml -e /home/stack/templates/inject-trust-anchor.yaml -e /home/stack/templates/hostnames.yml -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml -e /home/stack/templates/nodes_data.yaml -e /home/stack/templates/debug.yaml -e /home/stack/templates/firstboot.yaml -e /home/stack/overcloud-params.yaml -e /home/stack/overcloud-deploy/overcloud/overcloud-network-environment.yaml -e /home/stack/overcloud_adopt/baremetal-deployment.yaml -e /home/stack/overcloud_adopt/generated-networks-deployed.yaml -e /home/stack/overcloud_adopt/generated-vip-deployed.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/nova-hw-machine-type-upgrade.yaml -e ~/containers-prepare-parameter.yaml", "sh /home/stack/copy_role_Compute_param.sh", "python3 /usr/share/openstack-tripleo-heat-templates/tools/baremetal_transition.py --baremetal-deployment /home/stack/tripleo-<stack>-baremetal-deployment.yaml --src-role <Compute_source_role> --dst-role <Compute_destination_role> <Compute-0> <Compute-1> <Compute-2>", "openstack overcloud node provision --stack <stack> --output /home/stack/overcloud_adopt/baremetal-deployment.yaml /home/stack/tripleo-<stack>-baremetal-deployment.yaml", "python3 /usr/share/openstack-tripleo-heat-templates/tools/multi-rhel-container-image-prepare.py USD{COMPUTE_ROLES} --enable-multi-rhel --excludes collectd --excludes nova-libvirt --minor-override \"{USD{EL8_TAGS}USD{EL8_NAMESPACE}USD{CEPH_OVERRIDE}USD{NEUTRON_DRIVER}\\\"no_tag\\\":\\\"not_used\\\"}\" --major-override \"{USD{EL9_TAGS}USD{NAMESPACE}USD{CEPH_OVERRIDE}USD{NEUTRON_DRIVER}\\\"no_tag\\\":\\\"not_used\\\"}\" --output-env-file /home/stack/containers-prepare-parameter.yaml", "parameter_defaults: SkipRhelEnforcement: false", "openstack overcloud upgrade prepare --yes -e /home/stack/system_upgrade.yaml -e /home/stack/<Compute_new_role_params.yaml>", "openstack overcloud upgrade run --yes --tags system_upgrade --stack <stack> --limit computerhel9-0,computerhel9-1,computerhel9-2,computesriov-42", "openstack overcloud upgrade run --yes --stack <stack> --limit computerhel9-0,computerhel9-1,computerhel9-2,computesriov-42" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/framework_for_upgrades_16.2_to_17.1/assembly_upgrading-the-compute-node-operating-system_upgrading-the-compute-node-operating-system
2.4. Saving a Configuration Change to a File
2.4. Saving a Configuration Change to a File When using the pcs command, you can use the -f option to save a configuration change to a file without affecting the active CIB. If you have previously configured a cluster and there is already an active CIB, use the following command to save the raw xml to a file. For example, the following command saves the raw xml from the CIB into a file named testfile . The following command creates a resource in the file testfile1 but does not add that resource to the currently running cluster configuration. You can push the current content of testfile1 to the CIB with the following command.
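As a sketch of how several changes can be staged in one file and pushed in a single operation, the following sequence saves the CIB to testfile1, adds two resources to that file, and then pushes the result; the Apache resource is an illustrative assumption and is not part of the original example:
pcs cluster cib testfile1
pcs -f testfile1 resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s
pcs -f testfile1 resource create WebSite ocf:heartbeat:apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=1min
pcs cluster cib-push testfile1
Until the final cib-push command runs, the active cluster configuration is unaffected.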
[ "pcs cluster cib filename", "pcs cluster cib testfile", "pcs -f testfile1 resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s", "pcs cluster cib-push filename" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-pcsfilesave-haar
Chapter 5. AMQP Component
Chapter 5. AMQP Component Available as of Camel version 1.2 The amqp: component supports the AMQP 1.0 protocol using the JMS Client API of the Qpid project. In case you want to use AMQP 0.9 (in particular RabbitMQ) you might also be interested in the Camel RabbitMQ component. Please keep in mind that prior to the Camel 2.17.0 AMQP component supported AMQP 0.9 and above, however since Camel 2.17.0 it supports only AMQP 1.0. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-amqp</artifactId> <version>USD{camel.version}</version> <!-- use the same version as your Camel core version --> </dependency> 5.1. URI format amqp:[queue:|topic:]destinationName[?options] 5.2. AMQP Options You can specify all of the various configuration options of the JMS component after the destination name. The AMQP component supports 81 options which are listed below. Name Description Default Type configuration (advanced) To use a shared JMS configuration JmsConfiguration acceptMessagesWhile Stopping (consumer) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuick Stop (consumer) Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean acknowledgementMode (consumer) The JMS acknowledgement mode defined as an Integer. Allows you to set vendor-specific extensions to the acknowledgment mode.For the regular modes, it is preferable to use the acknowledgementModeName instead. int eagerLoadingOf Properties (consumer) Enables eager loading of JMS properties as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE AUTO_ ACKNOWLEDGE String autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. CACHE_AUTO String replyToCacheLevelName (producer) Sets the cache level by name for the reply consumer when doing request/reply over JMS. 
This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. String clientId (common) Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int replyToConcurrent Consumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean deliveryMode (producer) Specifies the delivery mode to be used. Possibles values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Integer durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler errorHandlerLogging Level (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. WARN LoggingLevel errorHandlerLogStack Trace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. 
This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false boolean exposeListenerSession (consumer) Specifies whether the listener session should be exposed when consuming messages. false boolean idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToMaxConcurrent Consumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyOnTimeoutToMax ConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker.If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value true boolean messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker.If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value true boolean alwaysCopyMessage (producer) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set) false boolean useMessageIDAs CorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. 
false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 0 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. 4 int pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long taskExecutor (consumer) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long transacted (transaction) Specifies whether to use transacted mode false boolean lazyCreateTransaction Manager (transaction) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction) The Spring transaction manager to use. PlatformTransaction Manager transactionName (transaction) The name of the transaction to use. String transactionTimeout (transaction) The timeout value of the transaction (in seconds), if using transacted mode. -1 int testConnectionOn Startup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean forceSendOriginal Message (producer) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 
20000 long requestTimeoutChecker Interval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. false boolean transferFault (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed with a SOAP fault (not exception) on the consumer side, then the fault flag on Message#isFault() will be send back in the response as a JMS header with the key org.apache.camel.component.jms.JmsConstants#JMS_TRANSFER_FAULT#JMS_TRANSFER_FAULT. If the client is Camel, the returned fault flag will be set on the org.apache.camel.Message#setFault(boolean). You may want to enable this when using Camel components that support faults such as SOAP based such as cxf or spring-ws. false boolean jmsOperations (advanced) Allows you to use your own implementation of the org.springframework.jms.core.JmsOperations interface. Camel uses JmsTemplate as default. Can be used for testing purpose, but not used much as stated in the spring API docs. JmsOperations destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. ReplyToType preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. 
The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean allowNullBody (producer) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean includeSentJMS MessageID (producer) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean includeAllJMSX Properties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean defaultTaskExecutor Type (consumer) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. DefaultTaskExecutor Type jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. JmsKeyFormatStrategy allowAdditionalHeaders (producer) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. 
String queueBrowseStrategy (advanced) To use a custom QueueBrowseStrategy when browsing queues QueueBrowseStrategy messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy waitForProvision CorrelationToBeUpdated Counter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvision CorrelationToBeUpdated ThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long correlationProperty (producer) Use this JMS property to correlate messages in InOut exchange pattern (request-reply) instead of JMSCorrelationID property. This allows you to exchange messages with systems that do not correlate messages using JMSCorrelationID JMS property. If used JMSCorrelationID will not be used or set by Camel. The value of here named property will be generated if not supplied in the header of the message under the same name. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String streamMessageType Enabled (producer) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean formatDateHeadersTo Iso8601 (producer) Sets whether date headers should be formatted according to the ISO 8601 standard. 
false boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The AMQP endpoint is configured using URI syntax: with the following path and query parameters: 5.2.1. Path Parameters (2 parameters): Name Description Default Type destinationType The kind of destination to use queue String destinationName Required Name of the queue or topic to use as destination String 5.2.2. Query Parameters (92 parameters): Name Description Default Type clientId (common) Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. JmsMessageType testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE AUTO_ ACKNOWLEDGE String asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. 
true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyTo (consumer) Provides an explicit ReplyTo destination, which overrides any incoming value of Message.getJMSReplyTo(). String replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. 
Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean acceptMessagesWhileStopping (consumer) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer) Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Default ConsumerType defaultTaskExecutorType (consumer) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. DefaultTaskExecutor Type eagerLoadingOfProperties (consumer) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern exposeListenerSession (consumer) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToSameDestination Allowed (consumer) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. 
This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possibles values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 0 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrent Consumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMax ConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. 
ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean alwaysCopyMessage (producer) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set) false boolean correlationProperty (producer) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean replyToCacheLevelName (producer) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. 
String replyToDestinationSelector Name (producer) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. 
Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. String mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker.If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value true boolean messageListenerContainer Factory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListener ContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker.If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutChecker Interval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). 
false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. false boolean transferFault (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed with a SOAP fault (not exception) on the consumer side, then the fault flag on Message#isFault() will be send back in the response as a JMS header with the key org.apache.camel.component.jms.JmsConstants#JMS_TRANSFER_FAULT#JMS_TRANSFER_FAULT. If the client is Camel, the returned fault flag will be set on the org.apache.camel.Message#setFault(boolean). You may want to enable this when using Camel components that support faults such as SOAP based such as cxf or spring-ws. false boolean useMessageIDAsCorrelation ID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false boolean waitForProvisionCorrelation ToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelation ToBeUpdatedThreadSleeping Time (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode false boolean lazyCreateTransaction Manager (transaction) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction) The Spring transaction manager to use. PlatformTransaction Manager transactionName (transaction) The name of the transaction to use. 
String transactionTimeout (transaction) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 5.3. Spring Boot Auto-Configuration The component supports 81 options, which are listed below. Name Description Default Type camel.component.amqp.accept-messages-while-stopping Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false Boolean camel.component.amqp.acknowledgement-mode The JMS acknowledgement mode defined as an Integer. Allows you to set vendor-specific extensions to the acknowledgment mode.For the regular modes, it is preferable to use the acknowledgementModeName instead. Integer camel.component.amqp.acknowledgement-mode-name The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE AUTO_ ACKNOWLEDGE String camel.component.amqp.allow-additional-headers This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String camel.component.amqp.allow-null-body Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true Boolean camel.component.amqp.allow-reply-manager-quick-stop Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false Boolean camel.component.amqp.always-copy-message If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set) false Boolean camel.component.amqp.async-consumer Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). 
false Boolean camel.component.amqp.async-start-listener Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false Boolean camel.component.amqp.async-stop-listener Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false Boolean camel.component.amqp.auto-startup Specifies whether the consumer container should auto-startup. true Boolean camel.component.amqp.cache-level Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. Integer camel.component.amqp.cache-level-name Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. CACHE_AUTO String camel.component.amqp.client-id Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String camel.component.amqp.concurrent-consumers Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 Integer camel.component.amqp.configuration To use a shared JMS configuration. The option is a org.apache.camel.component.jms.JmsConfiguration type. String camel.component.amqp.connection-factory The connection factory to be use. A connection factory must be configured either on the component or endpoint. The option is a javax.jms.ConnectionFactory type. String camel.component.amqp.correlation-property Use this JMS property to correlate messages in InOut exchange pattern (request-reply) instead of JMSCorrelationID property. This allows you to exchange messages with systems that do not correlate messages using JMSCorrelationID JMS property. If used JMSCorrelationID will not be used or set by Camel. The value of here named property will be generated if not supplied in the header of the message under the same name. String camel.component.amqp.default-task-executor-type Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. 
The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. DefaultTaskExecutor Type camel.component.amqp.delivery-mode Specifies the delivery mode to be used. Possibles values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Integer camel.component.amqp.delivery-persistent Specifies whether persistent delivery is used by default. true Boolean camel.component.amqp.destination-resolver A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). The option is a org.springframework.jms.support.destination.DestinationResolver type. String camel.component.amqp.durable-subscription-name The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String camel.component.amqp.eager-loading-of-properties Enables eager loading of JMS properties as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties false Boolean camel.component.amqp.enabled Enable amqp component true Boolean camel.component.amqp.error-handler Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. The option is a org.springframework.util.ErrorHandler type. String camel.component.amqp.error-handler-log-stack-trace Allows to control whether stacktraces should be logged or not, by the default errorHandler. true Boolean camel.component.amqp.error-handler-logging-level Allows to configure the default errorHandler logging level for logging uncaught exceptions. LoggingLevel camel.component.amqp.exception-listener Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. The option is a javax.jms.ExceptionListener type. String camel.component.amqp.explicit-qos-enabled Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean camel.component.amqp.expose-listener-session Specifies whether the listener session should be exposed when consuming messages. false Boolean camel.component.amqp.force-send-original-message When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false Boolean camel.component.amqp.format-date-headers-to-iso8601 Sets whether date headers should be formatted according to the ISO 8601 standard. 
false Boolean camel.component.amqp.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. String camel.component.amqp.idle-consumer-limit Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 Integer camel.component.amqp.idle-task-execution-limit Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 Integer camel.component.amqp.include-all-j-m-s-x-properties Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false Boolean camel.component.amqp.include-sent-j-m-s-message-i-d Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false Boolean camel.component.amqp.jms-key-format-strategy Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. The option is a org.apache.camel.component.jms.JmsKeyFormatStrategy type. String camel.component.amqp.jms-operations Allows you to use your own implementation of the org.springframework.jms.core.JmsOperations interface. Camel uses JmsTemplate as default. Can be used for testing purpose, but not used much as stated in the spring API docs. The option is a org.springframework.jms.core.JmsOperations type. String camel.component.amqp.lazy-create-transaction-manager If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true Boolean camel.component.amqp.map-jms-message Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true Boolean camel.component.amqp.max-concurrent-consumers Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. Integer camel.component.amqp.max-messages-per-task The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. 
-1 Integer camel.component.amqp.message-converter To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. The option is a org.springframework.jms.support.converter.MessageConverter type. String camel.component.amqp.message-created-strategy To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. The option is a org.apache.camel.component.jms.MessageCreatedStrategy type. String camel.component.amqp.message-id-enabled When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker.If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value true Boolean camel.component.amqp.message-timestamp-enabled Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker.If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value true Boolean camel.component.amqp.password Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.amqp.preserve-message-qos Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false Boolean camel.component.amqp.priority Values greater than 1 specify the message priority when sending (where 0 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. 4 Integer camel.component.amqp.pub-sub-no-local Specifies whether to inhibit the delivery of messages published by its own connection. false Boolean camel.component.amqp.queue-browse-strategy To use a custom QueueBrowseStrategy when browsing queues. The option is a org.apache.camel.component.jms.QueueBrowseStrategy type. String camel.component.amqp.receive-timeout The timeout for receiving messages (in milliseconds). 1000 Long camel.component.amqp.recovery-interval Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 Long camel.component.amqp.reply-on-timeout-to-max-concurrent-consumers Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 Integer camel.component.amqp.reply-to-cache-level-name Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. 
Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. String camel.component.amqp.reply-to-concurrent-consumers Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 Integer camel.component.amqp.reply-to-max-concurrent-consumers Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. Integer camel.component.amqp.reply-to-type Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. ReplyToType camel.component.amqp.request-timeout The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 Long camel.component.amqp.request-timeout-checker-interval Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 Long camel.component.amqp.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.amqp.stream-message-type-enabled Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false Boolean camel.component.amqp.subscription-durable Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false Boolean camel.component.amqp.subscription-name Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. 
The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String camel.component.amqp.subscription-shared Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false Boolean camel.component.amqp.task-executor Allows you to specify a custom task executor for consuming messages. The option is a org.springframework.core.task.TaskExecutor type. String camel.component.amqp.test-connection-on-startup Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false Boolean camel.component.amqp.time-to-live When sending messages, specifies the time-to-live of the message (in milliseconds). -1 Long camel.component.amqp.transacted Specifies whether to use transacted mode false Boolean camel.component.amqp.transaction-manager The Spring transaction manager to use. The option is a org.springframework.transaction.PlatformTransactionManager type. String camel.component.amqp.transaction-name The name of the transaction to use. String camel.component.amqp.transaction-timeout The timeout value of the transaction (in seconds), if using transacted mode. -1 Integer camel.component.amqp.transfer-exception If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. false Boolean camel.component.amqp.transfer-exchange You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. 
false Boolean camel.component.amqp.transfer-fault If enabled and you are using Request Reply messaging (InOut) and an Exchange failed with a SOAP fault (not exception) on the consumer side, then the fault flag on Message#isFault() will be sent back in the response as a JMS header with the key org.apache.camel.component.jms.JmsConstants#JMS_TRANSFER_FAULT. If the client is Camel, the returned fault flag will be set on the org.apache.camel.Message#setFault(boolean). You may want to enable this when using SOAP-based Camel components that support faults, such as cxf or spring-ws. false Boolean camel.component.amqp.use-message-i-d-as-correlation-i-d Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. false Boolean camel.component.amqp.username Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.amqp.wait-for-provision-correlation-to-be-updated-counter Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 Integer camel.component.amqp.wait-for-provision-correlation-to-be-updated-thread-sleeping-time Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 Long 5.4. Usage Because the AMQP component is inherited from the JMS component, its usage is almost identical: Using AMQP component // Consuming from AMQP queue from("amqp:queue:incoming"). to(...); // Sending message to the AMQP topic from(...). to("amqp:topic:notify"); 5.5. Configuring AMQP component Starting from Camel 2.16.1, you can also use the AMQPComponent#amqp10Component(String connectionURI) factory method to return the AMQP 1.0 component with the pre-configured topic prefix: Creating AMQP 1.0 component AMQPComponent amqp = AMQPComponent.amqp10Component("amqp://guest:guest@localhost:5672"); Keep in mind that starting from Camel 2.17, the AMQPComponent#amqp10Component(String connectionURI) factory method has been deprecated in favour of AMQPComponent#amqpComponent(String connectionURI): Creating AMQP 1.0 component AMQPComponent amqp = AMQPComponent.amqpComponent("amqp://localhost:5672"); AMQPComponent authorizedAmqp = AMQPComponent.amqpComponent("amqp://localhost:5672", "user", "password"); Starting from Camel 2.17, in order to automatically configure the AMQP component, you can also add an instance of org.apache.camel.component.amqp.AMQPConnectionDetails to the registry. For example, for Spring Boot you just have to define a bean: AMQP connection details auto-configuration @Bean AMQPConnectionDetails amqpConnection() { return new AMQPConnectionDetails("amqp://localhost:5672"); } @Bean AMQPConnectionDetails securedAmqpConnection() { return new AMQPConnectionDetails("amqp://localhost:5672", "username", "password"); } Likewise, you can also use CDI producer methods when using Camel-CDI. AMQP connection details auto-configuration for CDI @Produces AMQPConnectionDetails amqpConnection() { return new AMQPConnectionDetails("amqp://localhost:5672"); } You can also rely on the Camel properties to read the AMQP connection details.
Factory method AMQPConnectionDetails.discoverAMQP() attempts to read Camel properties in a Kubernetes-like convention, as demonstrated in the snippet below: AMQP connection details auto-configuration export AMQP_SERVICE_HOST = "mybroker.com" export AMQP_SERVICE_PORT = "6666" export AMQP_SERVICE_USERNAME = "username" export AMQP_SERVICE_PASSWORD = "password" ... @Bean AMQPConnectionDetails amqpConnection() { return AMQPConnectionDetails.discoverAMQP(); } Enabling AMQP specific options If you need, for example, to enable amqp.traceFrames, you can do that by appending the option to your URI, as in the following example: AMQPComponent amqp = AMQPComponent.amqpComponent("amqp://localhost:5672?amqp.traceFrames=true"); For reference, take a look at the QPID JMS client configuration. 5.6. Using topics To have topics working with camel-amqp, you need to configure the component to use topic:// as the topic prefix, as shown below: <bean id="amqp" class="org.apache.camel.component.amqp.AmqpComponent"> <property name="connectionFactory"> <bean class="org.apache.qpid.jms.JmsConnectionFactory" factory-method="createFromURL"> <property name="remoteURI" value="amqp://localhost:5672" /> <property name="topicPrefix" value="topic://" /> <!-- only necessary when connecting to ActiveMQ over AMQP 1.0 --> </bean> </property> </bean> Keep in mind that both AMQPComponent#amqpComponent() methods and AMQPConnectionDetails pre-configure the component with the topic prefix, so you don't have to configure it explicitly. 5.7. See Also Configuring Camel Component Endpoint Getting Started
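As a quick illustration of how the options documented in the tables above are applied in practice, the following sketch combines a few endpoint query parameters with their Spring Boot component-level equivalents. The queue name, log endpoint, and option values are illustrative assumptions only, not recommended settings. Setting options as endpoint URI query parameters
// example values; any of the query parameters listed in section 5.2.2 can be appended this way
from("amqp:queue:incoming?concurrentConsumers=5&asyncConsumer=true&testConnectionOnStartup=true")
    .to("log:amqp-received");
Setting the same options once at component level via Spring Boot application.properties
# example values; property names follow the camel.component.amqp.* options listed in section 5.3
camel.component.amqp.concurrent-consumers=5
camel.component.amqp.async-consumer=true
camel.component.amqp.test-connection-on-startup=true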
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-amqp</artifactId> <version>${camel.version}</version> <!-- use the same version as your Camel core version --> </dependency>", "amqp:[queue:|topic:]destinationName[?options]", "amqp:destinationType:destinationName", "// Consuming from AMQP queue from(\"amqp:queue:incoming\"). to(...); // Sending message to the AMQP topic from(...). to(\"amqp:topic:notify\");", "AMQPComponent amqp = AMQPComponent.amqp10Component(\"amqp://guest:guest@localhost:5672\");", "AMQPComponent amqp = AMQPComponent.amqpComponent(\"amqp://localhost:5672\"); AMQPComponent authorizedAmqp = AMQPComponent.amqpComponent(\"amqp://localhost:5672\", \"user\", \"password\");", "@Bean AMQPConnectionDetails amqpConnection() { return new AMQPConnectionDetails(\"amqp://localhost:5672\"); } @Bean AMQPConnectionDetails securedAmqpConnection() { return new AMQPConnectionDetails(\"amqp://localhost:5672\", \"username\", \"password\"); }", "@Produces AMQPConnectionDetails amqpConnection() { return new AMQPConnectionDetails(\"amqp://localhost:5672\"); }", "export AMQP_SERVICE_HOST=\"mybroker.com\" export AMQP_SERVICE_PORT=\"6666\" export AMQP_SERVICE_USERNAME=\"username\" export AMQP_SERVICE_PASSWORD=\"password\" @Bean AMQPConnectionDetails amqpConnection() { return AMQPConnectionDetails.discoverAMQP(); }", "AMQPComponent amqp = AMQPComponent.amqpComponent(\"amqp://localhost:5672?amqp.traceFrames=true\");", "<bean id=\"amqp\" class=\"org.apache.camel.component.amqp.AMQPComponent\"> <property name=\"connectionFactory\"> <bean class=\"org.apache.qpid.jms.JmsConnectionFactory\" factory-method=\"createFromURL\"> <property name=\"remoteURI\" value=\"amqp://localhost:5672\" /> <property name=\"topicPrefix\" value=\"topic://\" /> <!-- only necessary when connecting to ActiveMQ over AMQP 1.0 --> </bean> </property> </bean>" ]
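The options listed in the table above, such as useMessageIDAsCorrelationID, can also be set per endpoint in the URI. Below is a minimal sketch, not taken from this guide, of a request/reply (InOut) route over an AMQP queue; the queue name and the direct: and log: endpoints are illustrative:

import org.apache.camel.ExchangePattern;
import org.apache.camel.builder.RouteBuilder;

public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        // InOut (request/reply) over AMQP; replies are correlated by JMSMessageID
        // because useMessageIDAsCorrelationID is enabled on the endpoint.
        from("direct:order")
            .to(ExchangePattern.InOut, "amqp:queue:orders?useMessageIDAsCorrelationID=true")
            .to("log:orderReply");
    }
}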
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/amqp-component
24.3. Problems After Installation
24.3. Problems After Installation 24.3.1. Remote Graphical Desktops and XDMCP If you have installed the X Window System and would like to log in to your Red Hat Enterprise Linux system using a graphical login manager, enable the X Display Manager Control Protocol (XDMCP). This protocol allows users to remotely log in to a desktop environment from any X Window System compatible client (such as a network-connected workstation or X11 terminal). To enable remote login using XDMCP, edit the /etc/gdm/custom.conf file on the Red Hat Enterprise Linux system with a text editor such as vi or nano. In the [xdmcp] section, add the line Enable=true, save the file, and exit the text editor. To enable this change, you will need to restart the X Window System. First, switch to runlevel 4: The graphical display will close, leaving only a terminal. When you reach the login: prompt, enter your username and password. Then, as root in the terminal, switch to runlevel 5 to return to the graphical interface and start the X11 server: From the client machine, start a remote X11 session using X. For example: The command connects to the remote X11 server via XDMCP (replace s390vm.example.com with the hostname of the remote X11 server) and displays the remote graphical login screen on display :1 of the X11 server system (usually accessible by using the Ctrl-Alt-F8 key combination). You can also access remote desktop sessions using a nested X11 server, which opens the remote desktop as a window in your current X11 session. Xnest allows users to open a remote desktop nested within their local X11 session. For example, run Xnest using the following command, replacing s390vm.example.com with the hostname of the remote X11 server:
[ "/sbin/init 4", "/sbin/init 5", "X :1 -query s390vm.example.com", "Xnest :1 -query s390vm.example.com" ]
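For reference, the XDMCP change described above amounts to a /etc/gdm/custom.conf section like the following minimal sketch (other sections of the file are omitted here):

[xdmcp]
Enable=true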
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch24s03
6.10. Host Devices
6.10. Host Devices 6.10.1. Adding a Host Device to a Virtual Machine Host devices can be directly attached to virtual machines for improved performance if a compatible host has been configured for direct device assignment. Host devices are devices that are physically plugged into the host, including SCSI (for example tapes, disks, changers), PCI (for example NICs, GPUs, and HBAs), and USB (for example mice, cameras, and disks). Adding Host Devices to a Virtual Machine Click Compute Virtual Machines. Click a virtual machine's name to go to the details view. Click the Host Devices tab to list the host devices already attached to this virtual machine. A virtual machine can only have devices attached from the same host. If a virtual machine has devices attached from one host and you attach a device from another host, the devices attached from the first host are automatically removed. Attaching host devices to a virtual machine requires the virtual machine to be in a Down state. If the virtual machine is running, the changes will not take effect until after the virtual machine has been shut down. Click Add device to open the Add Host Devices window. Use the Pinned Host drop-down menu to select a host. Use the Capability drop-down menu to list the pci, scsi, or usb_device host devices. Select the check boxes of the devices to attach to the virtual machine from the Available Host Devices pane and click the directional arrow button to transfer these devices to the Host Devices to be attached pane, creating a list of the devices to attach to the virtual machine. When you have transferred all desired host devices to the Host Devices to be attached pane, click OK to attach these devices to the virtual machine and close the window. These host devices will be attached to the virtual machine when the virtual machine is powered on. 6.10.2. Removing Host Devices from a Virtual Machine If you are removing all host devices directly attached to the virtual machine in order to add devices from a different host, you can instead add the devices from the desired host, which will automatically remove all of the devices already attached to the virtual machine. Procedure Click Compute Virtual Machines. Select a virtual machine to go to the details view. Click the Host Devices tab to list the host devices attached to the virtual machine. Select the host device to detach from the virtual machine, or hold Ctrl to select multiple devices, and click Remove device to open the Remove Host Device(s) window. Click OK to confirm and detach these devices from the virtual machine. 6.10.3. Pinning a Virtual Machine to Another Host You can use the Host Devices tab in the details view of a virtual machine to pin it to a specific host. If the virtual machine has any host devices attached to it, pinning it to another host automatically removes the host devices from the virtual machine. Pinning a Virtual Machine to a Host Click a virtual machine name and click the Host Devices tab. Click Pin to another host to open the Pin VM to Host window. Use the Host drop-down menu to select a host. Click OK to pin the virtual machine to the selected host.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-host_devices
Chapter 20. Utility functions for using ansi control chars in logs
Chapter 20. Utility functions for using ansi control chars in logs Utility functions for logging using ansi control characters. This lets you manipulate the cursor position and character color output and attributes of log messages.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/ansi.stp
Release notes for Red Hat build of OpenJDK 8.0.382
Release notes for Red Hat build of OpenJDK 8.0.382 Red Hat build of OpenJDK 8 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.382/index