Dataset columns: title (string, 4 to 168 characters), content (string, 7 to 1.74M characters), commands (sequence, 1 to 5.62k items), url (string, 79 to 342 characters).
CI/CD overview
CI/CD overview OpenShift Container Platform 4.17 Contains information about CI/CD for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/cicd_overview/index
Chapter 5. New features for the Red Hat build of OpenJDK 21
Chapter 5. New features for the Red Hat build of OpenJDK 21 The initial release of Red Hat build of OpenJDK 21 includes new features that enhance the use of your Java applications. Red Hat build of OpenJDK 21 includes the following new features: UTF-8 by default For more information, see JEP 400: UTF-8 by Default . Simple web server For more information, see JEP 408: Simple Web Server . Code snippets in Java API documentation For more information, see JEP 413: Code Snippets in Java API Documentation . Reimplement core reflection with method handles For more information, see JEP 416: Reimplement Core Reflection with Method Handles . Internet-address resolution SPI For more information, see JEP 418: Internet-Address Resolution SPI . Linux/RISC-V port For more information, see JEP 422: Linux/RISC-V Port . Scoped values (Preview feature) For more information, see JEP 429: Scoped Values (Preview) . String templates (Preview feature) For more information, see JEP 430: String Templates (Preview) . Sequenced collections For more information, see JEP 431: Sequenced Collections . Generational Z Garbage Collector (ZGC) For more information, see JEP 439: Generational ZGC . Record patterns For more information, see JEP 440: Record Patterns . Pattern matching for switch For more information, see JEP 441: Pattern Matching for switch . Foreign function and memory (FFM) API (Third preview) For more information, see JEP 442: Foreign Function & Memory API (Third Preview) . Unnamed patterns and variables (Preview feature) For more information, see JEP 443: Unnamed Patterns and Variables (Preview) . Virtual threads For more information, see JEP 444: Virtual Threads . Unnamed classes and instance main methods (Preview feature) For more information, see JEP 445: Unnamed Classes and Instance Main Methods (Preview) . Scoped values (preview) For more information, see JEP 446: Scoped Values (Preview) . Vector API (sixth incubator) For more information, see JEP 448: Vector API (Sixth Incubator) . Key encapsulation mechanism API For more information, see JEP 452: Key Encapsulation Mechanism API . Structured concurrency (Preview feature) For more information, see JEP 453: Structured Concurrency (Preview) .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.1/openjdk21-features
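As a quick illustration of one of the features listed above, the simple web server from JEP 408 ships with a command-line launcher. The following is a minimal bash sketch; it assumes the jwebserver launcher from Red Hat build of OpenJDK 21 is on your PATH and that you want to serve the current directory.
# Serve the current working directory over HTTP on port 8000.
# -b sets the bind address, -p the port, and -d the directory to serve (an absolute path).
jwebserver -b 127.0.0.1 -p 8000 -d "$(pwd)"
# From another terminal, fetch the directory listing to confirm the server is up.
curl http://127.0.0.1:8000/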
9.9. Other Performance Tuning Considerations
9.9. Other Performance Tuning Considerations Although you can find information about all JBoss Data Services settings by using the Management CLI (see Section 10.1, "JBoss Data Virtualization Settings"), this section provides additional information about the max-source-rows setting. max-source-rows When using JBoss Data Services in a development environment, consider setting max-source-rows to a small value (for example, 10000) to prevent large amounts of data from being pulled from sources, as sketched below. Leaving the exception-on-max-source-rows property set to true alerts the developer, through an exception, that an attempt was made to retrieve more rows than the specified limit.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/other_performance_tuning_considerations1
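The following Management CLI sketch shows how the tuning described above might be applied. The teiid subsystem address and the attribute names (max-source-rows-allowed, exception-on-max-source-rows-allowed) are assumptions for illustration only; confirm the exact names against Section 10.1 before applying them.
# Connect to the running server with the Management CLI.
EAP_HOME/bin/jboss-cli.sh --connect
# Cap the number of rows pulled from each source in a development environment
# (attribute names are assumptions; verify them in Section 10.1).
/subsystem=teiid:write-attribute(name=max-source-rows-allowed, value=10000)
# Keep the exception enabled so exceeding the limit raises an error for the developer.
/subsystem=teiid:write-attribute(name=exception-on-max-source-rows-allowed, value=true)
# Reload the server so the new limits take effect.
:reload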
Chapter 3. Creating an Eclipse Vert.x project with a POM file
Chapter 3. Creating an Eclipse Vert.x project with a POM file When you develop a basic Eclipse Vert.x application, you create the following artifacts. This procedure creates these artifacts in a first getting-started Eclipse Vert.x project. A Java class containing Eclipse Vert.x methods. A pom.xml file containing information required by Maven to build the application. The following procedure creates a simple Greeting application that returns Greetings! as a response. Note Eclipse Vert.x supports builder images based on OpenJDK 8 and OpenJDK 11 for building and deploying your applications to OpenShift. Oracle JDK and OpenJDK 9 builder images are not supported. Prerequisites OpenJDK 8 or OpenJDK 11 is installed. Maven is installed. Procedure Create a new directory getting-started, and navigate to it. $ mkdir getting-started $ cd getting-started This is the root directory for the application. Create a directory structure src/main/java/com/example/ in the root directory, and navigate to it. $ mkdir -p src/main/java/com/example/ $ cd src/main/java/com/example/ Create a Java class file MyApp.java containing the application code. package com.example; import io.vertx.core.AbstractVerticle; import io.vertx.core.Promise; public class MyApp extends AbstractVerticle { @Override public void start(Promise<Void> promise) { vertx .createHttpServer() .requestHandler(r -> r.response().end("Greetings!")) .listen(8080, result -> { if (result.succeeded()) { promise.complete(); } else { promise.fail(result.cause()); } }); } } The application starts an HTTP server on port 8080. When you send a request, it returns a Greetings! message. Create a pom.xml file in the application root directory getting-started with the following content: In the <dependencyManagement> section, add the io.vertx:vertx-dependencies artifact. Specify the type as pom and the scope as import. In the <project> section, under <properties>, specify the versions of Eclipse Vert.x and the Eclipse Vert.x Maven Plugin. Note Properties can be used to set values that change in every release, for example, versions of the product or of plugins. In the <project> section, under <plugin>, specify vertx-maven-plugin. The Eclipse Vert.x Maven Plugin is used to package your application. Include repositories and pluginRepositories to specify the repositories that contain the artifacts and plugins to build your application. The pom.xml contains the following artifacts: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>my-app</artifactId> <version>1.0.0-SNAPSHOT</version> <packaging>jar</packaging> <name>My Application</name> <description>Example application using Vert.x</description> <properties> <vertx.version>4.3.7.redhat-00002</vertx.version> <vertx-maven-plugin.version>1.0.24</vertx-maven-plugin.version> <vertx.verticle>com.example.MyApp</vertx.verticle> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> </properties> <!-- Import dependencies from the Vert.x BOM.
--> <dependencyManagement> <dependencies> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-dependencies</artifactId> <version>${vertx.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <!-- Specify the Vert.x artifacts that your application depends on. --> <dependencies> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-core</artifactId> </dependency> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-web</artifactId> </dependency> <!-- Test dependencies --> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter-engine</artifactId> <version>5.4.0</version> <scope>test</scope> </dependency> </dependencies> <!-- Specify the repositories containing Vert.x artifacts. --> <repositories> <repository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> <!-- Specify the version of the Maven Surefire plugin. --> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>3.0.0-M5</version> </plugin> <plugin> <!-- Configure your application to be packaged using the Vert.x Maven Plugin. --> <groupId>io.reactiverse</groupId> <artifactId>vertx-maven-plugin</artifactId> <version>${vertx-maven-plugin.version}</version> <executions> <execution> <id>vmp</id> <goals> <goal>initialize</goal> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project> Build the application using Maven from the root directory of the application. mvn vertx:run Verify that the application is running. Use curl or your browser to verify that your application is running at http://localhost:8080 and returns "Greetings!" as a response. $ curl http://localhost:8080 Greetings!
[ "mkdir getting-started cd getting-started", "mkdir -p src/main/java/com/example/ cd src/main/java/com/example/", "package com.example; import io.vertx.core.AbstractVerticle; import io.vertx.core.Promise; public class MyApp extends AbstractVerticle { @Override public void start(Promise<Void> promise) { vertx .createHttpServer() .requestHandler(r -> r.response().end(\"Greetings!\")) .listen(8080, result -> { if (result.succeeded()) { promise.complete(); } else { promise.fail(result.cause()); } }); } }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>my-app</artifactId> <version>1.0.0-SNAPSHOT</version> <packaging>jar</packaging> <name>My Application</name> <description>Example application using Vert.x</description> <properties> <vertx.version>4.3.7.redhat-00002</vertx.version> <vertx-maven-plugin.version>1.0.24</vertx-maven-plugin.version> <vertx.verticle>com.example.MyApp</vertx.verticle> <maven.compiler.source>1.8</maven.compiler.source> <maven.compiler.target>1.8</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> </properties> <!-- Import dependencies from the Vert.x BOM. --> <dependencyManagement> <dependencies> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-dependencies</artifactId> <version>USD{vertx.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <!-- Specify the Vert.x artifacts that your application depends on. --> <dependencies> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-core</artifactId> </dependency> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-web</artifactId> </dependency> <!-- Test dependencies --> <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter-engine</artifactId> <version>5.4.0</version> <scope>test</scope> </dependency> </dependencies> <!-- Specify the repositories containing Vert.x artifacts. --> <repositories> <repository> <id>redhat-ga</id> <name>Red Hat GA Repository</name> <url>https://maven.repository.redhat.com/ga/</url> </repository> </repositories> <!-- Specify the version of the Maven Surefire plugin. --> <build> <plugins> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>3.0.0-M5</version> </plugin> <plugin> <!-- Configure your application to be packaged using the Vert.x Maven Plugin. --> <groupId>io.reactiverse</groupId> <artifactId>vertx-maven-plugin</artifactId> <version>USD{vertx-maven-plugin.version}</version> <executions> <execution> <id>vmp</id> <goals> <goal>initialize</goal> <goal>package</goal> </goals> </execution> </executions> </plugin> </plugins> </build> </project>", "mvn vertx:run", "curl http://localhost:8080 Greetings!" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/getting_started_with_eclipse_vert.x/developing-vertx-application-with-a-pom-file_vertx
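In addition to mvn vertx:run shown above, you can package the application and run the resulting jar directly. This is a sketch that assumes the vertx-maven-plugin package goal produces target/my-app-1.0.0-SNAPSHOT.jar; the jar name follows the artifactId and version in the example pom.xml and may differ in your build.
# Package the application with the Vert.x Maven Plugin from the project root.
mvn clean package
# Run the packaged application (jar name assumed from the artifactId and version above).
java -jar target/my-app-1.0.0-SNAPSHOT.jar
# From another terminal, verify that the HTTP endpoint responds with Greetings!.
curl http://localhost:8080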
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_on_vmware_vsphere/making-open-source-more-inclusive
Chapter 4. Event-Driven Ansible
Chapter 4. Event-Driven Ansible Event-Driven Ansible is a new way to enhance and expand automation by improving IT speed and agility while enabling consistency and resilience. Event-Driven Ansible is designed for simplicity and flexibility. Known issues Neither the contributor role nor the editor role can set the AWX token; only users with administrator roles can set it. Activation-job pods do not have request limits. The onboarding wizard does not request the creation of a controller token. Users cannot filter through a list of tokens under the Controller Token tab. Only users with administrator rights can set or change their passwords. If there is a failure, an activation with the restart policy set to Always cannot restart the failed activation. Disabling and then enabling an activation increases the restart count by one, which results in an incorrect restart count. You must run Podman pods with memory limits. Users can add multiple tokens even though only the first AWX token is used. Creating and then rapidly deleting an activation triggers a race condition that causes errors. When users filter any list, only the items currently on the list are filtered. When ongoing activations start multiple jobs, some jobs are not recorded in the audit logs. When a job template fails, some key attributes are missing from the event payload. The restart policy in a Kubernetes deployment does not restart successful activations that are marked as failed. An incorrect status is reported for activations that are disabled or enabled. If the run_job_template action fails, the rule is not counted as executed. RHEL 9.2 activations cannot connect to the host. Restarting the Event-Driven Ansible server can cause activation states to become stale. Bulk deletion of rulebook activation lists is not consistent; the deletion can either succeed or fail. When users access the detail screen of a rule audit, the related rulebook activation link is broken. Long-running activations with a large number of events can cause an out-of-disk-space issue. Resolved in installer release 2.4-6. Certain characters, such as hyphen (-), forward slash (/), and period (.), are not supported in event keys. Resolved in installer release 2.4-3. When there are more activations than available workers, disabling the activations incorrectly shows them in a running state. Resolved in installer release 2.4-3. Event-Driven Ansible activation pods run out of memory on RHEL 9. Resolved in installer release 2.4-3. When all workers are busy with activation processes, other asynchronous tasks, such as importing projects, are not executed. Resolved in installer release 2.4-3.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_release_notes/eda-24-intro
Nodes
Nodes OpenShift Container Platform 4.12 Configuring and managing nodes in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/nodes/index
User Guide Volume 2: Modeshape Tools
User Guide Volume 2: Modeshape Tools Red Hat JBoss Data Virtualization 6.4 This guide is for developers. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_2_modeshape_tools/index
Chapter 24. Additional resources
Chapter 24. Additional resources Red Hat Enterprise Linux technology capabilities and limits Red Hat Enterprise Linux Life Cycle document RHEL 8 product documentation RHEL 8.0 Release Notes RHEL 8 Package manifest Upgrading from RHEL 7 to RHEL 8 Application Compatibility Guide RHEL 7 Migration Planning Guide Customer Portal Labs Red Hat Insights Getting the most out of your support experience
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/related-information-considerations-in-adopting-rhel-8
Network APIs
Network APIs OpenShift Container Platform 4.17 Reference guide for network APIs Red Hat OpenShift Documentation Team
[ "Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]", "type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`", "// other fields }", "type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`", "// other fields }", "type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"`", "// other fields }", "Name: \"mysvc\", Subsets: [ { Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }, { Addresses: [{\"ip\": \"10.10.3.3\"}], Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}] }, ]", "{ Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}], Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}] }", "a: [ 10.10.1.1:8675, 10.10.2.2:8675 ], b: [ 10.10.1.1:309, 10.10.2.2:309 ]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/network_apis/index
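The Endpoints example shown in this reference (the mysvc object and its subsets) can also be retrieved from a live cluster for inspection. This is a minimal sketch; it assumes the oc CLI is logged in to the cluster and that a Service named mysvc exists in the current project.
# Print the subsets (addresses and ports) of the Endpoints object backing the mysvc Service.
oc get endpoints mysvc -o json | jq '.subsets'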
Extension APIs
Extension APIs OpenShift Container Platform 4.12 Reference guide for extension APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/extension_apis/index
Configuring cloud integrations for Red Hat services
Configuring cloud integrations for Red Hat services Red Hat Hybrid Cloud Console 1-latest How to link your Red Hat account to a public cloud Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/configuring_cloud_integrations_for_red_hat_services/index
Chapter 4. Installing with the Assisted Installer API
Chapter 4. Installing with the Assisted Installer API After you ensure the cluster nodes and network requirements are met, you can begin installing the cluster by using the Assisted Installer API. To use the API, you must perform the following procedures: Set up the API authentication. Configure the pull secret. Register a new cluster definition. Create an infrastructure environment for the cluster. Once you perform these steps, you can modify the cluster definition, create discovery ISOs, add hosts to the cluster, and install the cluster. This document does not cover every endpoint of the Assisted Installer API , but you can review all of the endpoints in the API viewer or the swagger.yaml file. 4.1. Generating the offline token Download the offline token from the Assisted Installer web console. You will use the offline token to set the API token. Prerequisites Install jq . Log in to the OpenShift Cluster Manager as a user with cluster creation privileges. Procedure In the menu, click Downloads . In the Tokens section under OpenShift Cluster Manager API Token , click View API Token . Click Load Token . Important Disable pop-up blockers. In the Your API token section, copy the offline token. In your terminal, set the offline token to the OFFLINE_TOKEN variable: USD export OFFLINE_TOKEN=<copied_token> Tip To make the offline token permanent, add it to your profile. (Optional) Confirm the OFFLINE_TOKEN variable definition. USD echo USD{OFFLINE_TOKEN} 4.2. Authenticating with the REST API API calls require authentication with the API token. Assuming you use API_TOKEN as a variable name, add -H "Authorization: Bearer USD{API_TOKEN}" to API calls to authenticate with the REST API. Note The API token expires after 15 minutes. Prerequisites You have generated the OFFLINE_TOKEN variable. Procedure On the command line terminal, set the API_TOKEN variable using the OFFLINE_TOKEN to validate the user. USD export API_TOKEN=USD( \ curl \ --silent \ --header "Accept: application/json" \ --header "Content-Type: application/x-www-form-urlencoded" \ --data-urlencode "grant_type=refresh_token" \ --data-urlencode "client_id=cloud-services" \ --data-urlencode "refresh_token=USD{OFFLINE_TOKEN}" \ "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \ | jq --raw-output ".access_token" \ ) Confirm the API_TOKEN variable definition: USD echo USD{API_TOKEN} Create a script in your path for one of the token generating methods. For example: USD vim ~/.local/bin/refresh-token export API_TOKEN=USD( \ curl \ --silent \ --header "Accept: application/json" \ --header "Content-Type: application/x-www-form-urlencoded" \ --data-urlencode "grant_type=refresh_token" \ --data-urlencode "client_id=cloud-services" \ --data-urlencode "refresh_token=USD{OFFLINE_TOKEN}" \ "https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token" \ | jq --raw-output ".access_token" \ ) Then, save the file. 
Change the file mode to make it executable: USD chmod +x ~/.local/bin/refresh-token Refresh the API token: USD source refresh-token Verify that you can access the API by running the following command: USD curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H "Authorization: Bearer USD{API_TOKEN}" | jq Example output { "release_tag": "v2.11.3", "versions": { "assisted-installer": "registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-211", "assisted-installer-controller": "registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-266", "assisted-installer-service": "quay.io/app-sre/assisted-service:78d113a", "discovery-agent": "registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-195" } } 4.3. Configuring the pull secret Many of the Assisted Installer API calls require the pull secret. Download the pull secret to a file so that you can reference it in API calls. The pull secret is a JSON object that will be included as a value within the request's JSON object. The pull secret JSON must be formatted to escape the quotes. For example: Before {"auths":{"cloud.openshift.com": ... After {\"auths\":{\"cloud.openshift.com\": ... Procedure In the menu, click OpenShift . In the submenu, click Downloads . In the Tokens section under Pull secret , click Download . To use the pull secret from a shell variable, execute the following command: USD export PULL_SECRET=USD(cat ~/Downloads/pull-secret.txt | jq -R .) To slurp the pull secret file using jq , reference it in the pull_secret variable, piping the value to tojson to ensure that it is properly formatted as escaped JSON. For example: USD curl https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' 1 { "name": "testcluster", "control_plane_count": "3", "openshift_version": "4.11", "pull_secret": USDpull_secret[0] | tojson, 2 "base_dns_domain": "example.com" } ')" 1 Slurp the pull secret file. 2 Format the pull secret to escaped JSON format. Confirm the PULL_SECRET variable definition: USD echo USD{PULL_SECRET} 4.4. Generating the SSH public key During the installation of OpenShift Container Platform, you can optionally provide an SSH public key to the installation program. This is useful for initiating an SSH connection to a remote node when troubeshooting an installation error. If you do not have an existing SSH key pair on your local machine to use for the authentication, create one now. Prerequisites Generate the OFFLINE_TOKEN and API_TOKEN variables. Procedure From the root user in your terminal, get the SSH public key: USD cat /root/.ssh/id_rsa.pub Set the SSH public key to the CLUSTER_SSHKEY variable: USD CLUSTER_SSHKEY=<downloaded_ssh_key> Confirm the CLUSTER_SSHKEY variable definition: USD echo USD{CLUSTER_SSHKEY} 4.5. Registering a new cluster To register a new cluster definition with the API, use the /v2/clusters endpoint. The following parameters are mandatory: name openshift-version pull_secret cpu_architecture See the cluster-create-params model in the API viewer for details on the fields you can set when registering a new cluster. When setting the olm_operators field, see Additional Resources for details on installing Operators. Prerequisites You have generated a valid API_TOKEN . Tokens expire every 15 minutes. You have downloaded the pull secret. 
Optional: You have assigned the pull secret to the USDPULL_SECRET variable. Procedure Refresh the API token: USD source refresh-token Register a new cluster by using one of the following methods: Register the cluster by referencing the pull secret file in the request: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' \ { \ "name": "testcluster", \ "openshift_version": "4.16", \ 1 "control_plane_count": "<number>", \ 2 "cpu_architecture" : "<architecture_name>", \ 3 "base_dns_domain": "example.com", \ "pull_secret": USDpull_secret[0] | tojson \ } \ ')" | jq '.id' Register the cluster by doing the following: Writing the configuration to a JSON file: USD cat << EOF > cluster.json { "name": "testcluster", "openshift_version": "4.16", 1 "control_plane_count": "<number>", 2 "base_dns_domain": "example.com", "network_type": "examplenetwork", "cluster_network_cidr":"11.111.1.0/14" "cluster_network_host_prefix": 11, "service_network_cidr": "111.11.1.0/16", "api_vips":[{"ip": ""}], "ingress_vips": [{"ip": ""}], "vip_dhcp_allocation": false, "additional_ntp_source": "clock.redhat.com,clock2.redhat.com", "ssh_public_key": "USDCLUSTER_SSHKEY", "pull_secret": USDPULL_SECRET } EOF Referencing it in the request: USD curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/clusters" \ -d @./cluster.json \ -H "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.id' 1 1 Pay attention to the following: To install the latest OpenShift version, use the x.y format, such as 4.16 for version 4.16.10. To install a specific OpenShift version, use the x.y.z format, such as 4.16.3 for version 4.16.3. To install a mixed-architecture cluster, add the -multi extension, such as 4.16-multi for the latest version or 4.16.3-multi for a specific version. If you are booting from an iSCSI drive, enter OpenShift Container Platform version 4.15 or later. 2 2 Set the number of control plane nodes to 1 for a single-node OpenShift cluster, or to 3 , 4 , or 5 for a multi-node OpenShift Container Platform cluster. The system supports 4 , or 5 control plane nodes from OpenShift Container Platform 4.18 and later, on a bare metal or user-managed networking platform with an x86_64 CPU architecture. For details, see About specifying the number of control plane nodes . 3 Valid values are x86_64 , arm64 , ppc64le , s390x , or multi . Specify multi for a mixed-architecture cluster. Assign the returned cluster_id to the CLUSTER_ID variable and export it: USD export CLUSTER_ID=<cluster_id> Note If you close your terminal session, you need to export the CLUSTER_ID variable again in a new terminal session. Check the status of the new cluster: USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ -H "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq Once you register a new cluster definition, create the infrastructure environment for the cluster. Note You cannot see the cluster configuration settings in the Assisted Installer user interface until you create the infrastructure environment. Additional resources Modifying a cluster Installing a mixed-architecture cluster Optional: Installing on Nutanix Optional: Installing on vSphere Optional: Installing on Oracle Cloud Infrastructure 4.5.1. 
Installing Operators You can install the following Operators when you register a new cluster: OpenShift Virtualization Operator Note Currently, OpenShift Virtualization is not supported on IBM Z(R) and IBM Power(R). The OpenShift Virtualization Operator requires backend storage, and automatically activates Local Storage Operator (LSO) by default in the background. Selecting an alternative storage manager, such as LVM Storage,overrides the default Local Storage Operator. Migration Toolkit for Virtualization Operator Note Specifying the Migration Toolkit for Virtualization (MTV) Operator automatically activates the OpenShift Virtualization Operator. For a Single-node OpenShift installation, the Assisted Installer also activates the LVM Storage Operator. Multicluster engine Operator Note Deploying the multicluster engine without OpenShift Data Foundation results in the following storage configurations: Multi-node cluster: No storage is configured. You must configure storage after the installation. Single-node OpenShift: LVM Storage is installed. OpenShift Data Foundation Operator LVM Storage Operator OpenShift AI Operator Important The integration of the OpenShift AI Operator into the Assisted Installer is a Developer Preview feature only. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to upcoming product features in advance of their possible inclusion in a Red Hat product offering, enabling customers to test functionality and provide feedback during the development process. These features might not have any documentation, are subject to change or removal at any time, and testing is limited. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA. If you require advanced options, install the Operators after you have installed the cluster. This step is optional. Prerequisites You have reviewed Customizing your installation using Operators for an overview of each operator, together with its prerequisites and dependencies. Procedure Run the following command: USD curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "name": "testcluster", "openshift_version": "4.15", "cpu_architecture" : "x86_64", "base_dns_domain": "example.com", "olm_operators": [ { "name": "mce" } 1 , { "name": "odf" } ] "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' 1 Specify cnv for OpenShift Virtualization, mtv for Migration Toolkit for Virtualization, mce for multicluster engine, odf for OpenShift Data Foundation, lvm for LVM Storage or openshift-ai for OpenShift AI. Selecting an Operator automatically activates any dependent Operators. 4.5.2. Scheduling workloads to run on control plane nodes Use the schedulable_masters attribute to enable workloads to run on control plane nodes. Prerequisites You have generated a valid API_TOKEN . Tokens expire every 15 minutes. You have created a USDPULL_SECRET variable. You are installing OpenShift Container Platform 4.14 or later. Procedure Follow the instructions for installing Assisted Installer using the Assisted Installer API. 
When you reach the step for registering a new cluster, set the schedulable_masters attribute as follows: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "schedulable_masters": true 1 } ' | jq 1 Enables the scheduling of workloads on the control plane nodes. 4.6. Modifying a cluster To modify a cluster definition with the API, use the /v2/clusters/{cluster_id} endpoint. Modifying a cluster resource is a common operation for adding settings such as changing the network type or enabling user-managed networking. See the v2-cluster-update-params model in the API viewer for details on the fields you can set when modifying a cluster definition. You can add or remove Operators from a cluster resource that has already been registered. Note To create partitions on nodes, see Configuring storage on nodes in the OpenShift Container Platform documentation. Prerequisites You have created a new cluster resource. Procedure Refresh the API token: USD source refresh-token Modify the cluster. For example, change the SSH key: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "ssh_public_key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZrD4LMkAEeoU2vShhF8VM+cCZtVRgB7tqtsMxms2q3TOJZAgfuqReKYWm+OLOZTD+DO3Hn1pah/mU3u7uJfTUg4wEX0Le8zBu9xJVym0BVmSFkzHfIJVTn6SfZ81NqcalisGWkpmkKXVCdnVAX6RsbHfpGKk9YPQarmRCn5KzkelJK4hrSWpBPjdzkFXaIpf64JBZtew9XVYA3QeXkIcFuq7NBuUH9BonroPEmIXNOa41PUP1IWq3mERNgzHZiuU8Ks/pFuU5HCMvv4qbTOIhiig7vidImHPpqYT/TCkuVi5w0ZZgkkBeLnxWxH0ldrfzgFBYAxnpTU8Ih/4VhG538Ix1hxPaM6cXds2ic71mBbtbSrk+zjtNPaeYk1O7UpcCw4jjHspU/rVV/DY51D5gSiiuaFPBMucnYPgUxy4FMBFfGrmGLIzTKiLzcz0DiSz1jBeTQOX++1nz+KDLBD8CPdi5k4dq7lLkapRk85qdEvgaG5RlHMSPSS3wDrQ51fD8= user@hostname" } ' | jq 4.6.1. Modifying Operators by using the API You can add or remove Operators from a cluster resource that has already been registered as part of a installation. This is only possible before you start the OpenShift Container Platform installation. You set the required Operator definition by using the PATCH method for the /v2/clusters/{cluster_id} endpoint. Prerequisites You have refreshed the API token. You have exported the CLUSTER_ID as an environment variable. Procedure Run the following command to modify the Operators: USD curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "olm_operators": [{"name": "mce"}, {"name": "cnv"}], 1 } ' | jq '.id' 1 Specify cnv for OpenShift Virtualization, mtv for Migration Toolkit for Virtualization, mce for multicluster engine, odf for Red Hat OpenShift Data Foundation, lvm for Logical Volume Manager Storage, or openshift-ai for OpenShift AI. To remove a previously installed Operator, exclude it from the list of values. To remove all previously installed Operators, specify an empty array: "olm_operators": [] . 
Example output { <various cluster properties>, "monitored_operators": [ { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "console", "operator_type": "builtin", "status_updated_at": "0001-01-01T00:00:00.000Z", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "cvo", "operator_type": "builtin", "status_updated_at": "0001-01-01T00:00:00.000Z", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "mce", "namespace": "multicluster-engine", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "multicluster-engine", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "cnv", "namespace": "openshift-cnv", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "hco-operatorhub", "timeout_seconds": 3600 }, { "cluster_id": "b5259f97-be09-430e-b5eb-d78420ee509a", "name": "lvm", "namespace": "openshift-local-storage", "operator_type": "olm", "status_updated_at": "0001-01-01T00:00:00.000Z", "subscription_name": "local-storage-operator", "timeout_seconds": 4200 } ], <more cluster properties> Note The output is the description of the new cluster state. The monitored_operators property in the output contains Operators of two types: "operator_type": "builtin" : Operators of this type are an integral part of OpenShift Container Platform. "operator_type": "olm" : Operators of this type are added manually by a user or automatically, as a dependency. In this example, the LVM Storage Operator is added automatically as a dependency of OpenShift Virtualization. Additional resources See Customizing your installation using Operators for an overview of each operator, together with its prerequisites and dependencies. 4.7. Registering a new infrastructure environment Once you register a new cluster definition with the Assisted Installer API, create an infrastructure environment using the v2/infra-envs endpoint. Registering a new infrastructure environment requires the following settings: name pull_secret cpu_architecture See the infra-env-create-params model in the API viewer for details on the fields you can set when registering a new infrastructure environment. You can modify an infrastructure environment after you create it. As a best practice, consider including the cluster_id when creating a new infrastructure environment. The cluster_id will associate the infrastructure environment with a cluster definition. When creating the new infrastructure environment, the Assisted Installer will also generate a discovery ISO. Prerequisites You have generated a valid API_TOKEN . Tokens expire every 15 minutes. You have downloaded the pull secret. Optional: You have registered a new cluster definition and exported the cluster_id . Procedure Refresh the API token: USD source refresh-token Register a new infrastructure environment. Provide a name, preferably something including the cluster name. This example provides the cluster ID to associate the infrastructure environment with the cluster resource. The following example specifies the image_type . You can specify either full-iso or minimal-iso . The default value is minimal-iso . 
Optional: You can register a new infrastructure environment by slurping the pull secret file in the request: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt \ --arg cluster_id USD{CLUSTER_ID} ' { "name": "testcluster-infra-env", "image_type":"full-iso", "cluster_id": USDcluster_id, "cpu_architecture" : "<architecture_name>", 1 "pull_secret": USDpull_secret[0] | tojson } ')" | jq '.id' Note 1 Valid values are x86_64 , arm64 , ppc64le , s390x , and multi . Optional: You can register a new infrastructure environment by writing the configuration to a JSON file and then referencing it in the request: USD cat << EOF > infra-envs.json { "name": "testcluster", "pull_secret": USDPULL_SECRET, "proxy": { "http_proxy": "", "https_proxy": "", "no_proxy": "" }, "ssh_authorized_key": "USDCLUSTER_SSHKEY", "image_type": "full-iso", "cluster_id": "USD{CLUSTER_ID}", "openshift_version": "4.11" } EOF USD curl -s -X POST "https://api.openshift.com/api/assisted-install/v2/infra-envs" -d @./infra-envs.json -H "Content-Type: application/json" -H "Authorization: Bearer USDAPI_TOKEN" | jq '.id' Assign the returned id to the INFRA_ENV_ID variable and export it: USD export INFRA_ENV_ID=<id> Note Once you create an infrastructure environment and associate it to a cluster definition via the cluster_id , you can see the cluster settings in the Assisted Installer web user interface. If you close your terminal session, you need to re-export the id in a new terminal session. 4.8. Modifying an infrastructure environment You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. Modifying an infrastructure environment is a common operation for adding settings such as networking, SSH keys, or ignition configuration overrides. See the infra-env-update-params model in the API viewer for details on the fields you can set when modifying an infrastructure environment. When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO. Prerequisites You have created a new infrastructure environment. Procedure Refresh the API token: USD source refresh-token Modify the infrastructure environment: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "image_type":"minimal-iso", "pull_secret": USDpull_secret[0] | tojson } ')" | jq 4.8.1. Adding kernel arguments Providing kernel arguments to the Red Hat Enterprise Linux CoreOS (RHCOS) kernel via the Assisted Installer means passing specific parameters or options to the kernel at boot time, particularly when you cannot customize the kernel parameters of the discovery ISO. Kernel parameters can control various aspects of the kernel's behavior and the operating system's configuration, affecting hardware interaction, system performance, and functionality. Kernel arguments are used to customize or inform the node's RHCOS kernel about the hardware configuration, debugging preferences, system services, and other low-level settings. The RHCOS installer kargs modify command supports the append , delete , and replace options. You can modify an infrastructure environment using the /v2/infra-envs/{infra_env_id} endpoint. 
When modifying the new infrastructure environment, the Assisted Installer will also re-generate the discovery ISO. Procedure Refresh the API token: USD source refresh-token Modify the kernel arguments: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID} \ -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d "USD(jq --null-input \ --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { "kernel_arguments": [{ "operation": "append", "value": "<karg>=<value>" }], 1 "image_type":"minimal-iso", "pull_secret": USDpull_secret[0] | tojson } ')" | jq 1 Replace <karg> with the the kernel argument and <value> with the kernal argument value. For example: rd.net.timeout.carrier=60 . You can specify multiple kernel arguments by adding a JSON object for each kernel argument. 4.9. Adding hosts After configuring the cluster resource and infrastructure environment, download the discovery ISO image. You can choose from two images: Full ISO image: Use the full ISO image when booting must be self-contained. The image includes everything needed to boot and start the Assisted Installer agent. The ISO image is about 1GB in size. This is the recommended method for the s390x architecture when installing with RHEL KVM. Minimal ISO image: Use the minimal ISO image when the virtual media connection has limited bandwidth. This is the default setting. The image includes only what the agent requires to boot a host with networking. The majority of the content is downloaded upon boot. The ISO image is about 100MB in size. This option is mandatory in the following scenarios: If you are installing OpenShift Container Platform on Oracle Cloud Infrastructure. If you are installing OpenShift Container Platform on iSCSI boot volumes. Note Currently, ISO images are supported on IBM Z(R) ( s390x ) with KVM, iPXE with z/VM, and LPAR (both static and DPM). For details, see Booting hosts using iPXE . You can boot hosts with the discovery image using three methods. For details, see Booting hosts with the discovery image . Prerequisites You have created a cluster. You have created an infrastructure environment. You have completed the configuration. If the cluster hosts are behind a firewall that requires the use of a proxy, you have configured the username, password, IP address and port for the HTTP and HTTPS URLs of the proxy server. Note The proxy username and password must be URL-encoded. You have selected an image type or will use the default minimal-iso . Procedure Configure the discovery image if needed. For details, see Configuring the discovery image . Refresh the API token: USD source refresh-token Get the download URL: USD curl -H "Authorization: Bearer USD{API_TOKEN}" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/downloads/image-url Example output { "expires_at": "2024-02-07T20:20:23.000Z", "url": "https://api.openshift.com/api/assisted-images/bytoken/<TOKEN>/<OCP_VERSION>/<CPU_ARCHITECTURE>/<FULL_OR_MINIMAL_IMAGE>.iso" } Download the discovery image: USD wget -O discovery.iso <url> Replace <url> with the download URL from the step. Boot the host(s) with the discovery image. Assign a role to host(s). Additional resources Configuring the discovery image Booting hosts with the discovery image Adding hosts on Nutanix with the API Adding hosts on vSphere Assigning roles to hosts Booting hosts using iPXE 4.10. Modifying hosts After adding hosts, modify the hosts as needed. 
The most common modifications are to the host_name and the host_role parameters. You can modify a host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. See the host-update-params model in the API viewer for details on the fields you can set when modifying a host. A host might be one of two roles: master : A host with the master role will operate as a control plane host. worker : A host with the worker role will operate as a worker host. By default, the Assisted Installer sets a host to auto-assign , which means the installation program determines whether the host is a master or worker role automatically. Use the following procedure to set the host's role: Prerequisites You have added hosts to the cluster. Procedure Refresh the API token: USD source refresh-token Get the host IDs: USD curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.host_networks[].host_ids' Modify the host: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ 1 -X PATCH \ -H "Authorization: Bearer USD{API_TOKEN}" \ -H "Content-Type: application/json" \ -d ' { "host_role":"worker" "host_name" : "worker-1" } ' | jq 1 Replace <host_id> with the ID of the host. 4.10.1. Modifying storage disk configuration Each host retrieved during host discovery can have multiple storage disks. You can optionally modify the default configurations for each disk. Important Starting from OpenShift Container Platform 4.16, you can install a cluster on a single iSCSI boot device using the Assisted Installer. Although OpenShift Container Platform also supports multipathing for iSCSI, this feature is currently not available for Assisted Installer deployments. Prerequisites Configure the cluster and discover the hosts. For details, see Additional resources . Viewing the storage disks You can view the hosts in your cluster, and the disks on each host. This enables you to perform actions on a specific disk. Procedure Refresh the API token: USD source refresh-token Get the host IDs for the cluster: USD curl -s "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID" \ -H "Authorization: Bearer USDAPI_TOKEN" \ | jq '.host_networks[].host_ids' Example output USD "1022623e-7689-8b2d-7fbd-e6f4d5bb28e5" Note This is the ID of a single host. Multiple host IDs are separated by commas. Get the disks for a specific host: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ 1 -H "Authorization: Bearer USD{API_TOKEN}" \ | jq '.inventory | fromjson | .disks' 1 Replace <host_id> with the ID of the relevant host. Example output USD [ { "by_id": "/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506", "by_path": "/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0", "drive_type": "HDD", "has_uuid": true, "hctl": "1:2:0:0", "id": "/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506", "installation_eligibility": { "eligible": true, "not_eligible_reasons": null }, "model": "PERC_H710P", "name": "sda", "path": "/dev/sda", "serial": "0006a560141adc3a2d00fb8af960f681", "size_bytes": 6595056500736, "vendor": "DELL", "wwn": "0x6c81f660f98afb002d3adc1a1460a506" } ] Note This is the output for one disk. It contains the disk_id and installation_eligibility properties for the disk. Changing the installation disk The Assisted Installer randomly assigns an installation disk by default. 
If there are multiple storage disks for a host, you can select a different disk to be the installation disk. This automatically unassigns the disk. You can select any disk whose installation_eligibility property is eligible: true to be the installation disk. Note Red Hat Enterprise Linux CoreOS (RHCOS) supports multipathing over Fibre Channel on the installation disk, allowing stronger resilience to hardware failure to achieve higher host availability. Multipathing is enabled by default in the agent ISO image, with an /etc/multipath.conf configuration. For details, see Modifying the DM Multipath configuration file . Procedure Get the host and storage disk IDs. For details, see Viewing the storage disks . Optional: Identify the current installation disk: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ 1 -H "Authorization: Bearer USD{API_TOKEN}" \ | jq '.installation_disk_id' 1 Replace <host_id> with the ID of the relevant host. Assign a new installation disk: Note Multipath devices are automatically discovered and listed in the host's inventory. To assign a multipath Fibre Channel disk as the installation disk, choose a disk with "drive_type" set to "Multipath" , rather than to "FC" which indicates a single path. USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ 1 -X PATCH \ -H "Content-Type: application/json" \ -H "Authorization: Bearer USD{API_TOKEN}" \ { "disks_selected_config": [ { "id": "<disk_id>", 2 "role": "install" } ] } 1 Replace <host_id> with the ID of the host. 2 Replace <disk_id> with the ID of the new installation disk. Disabling disk formatting The Assisted Installer marks all bootable disks for formatting during the installation process by default, regardless of whether or not they have been defined as the installation disk. Formatting causes data loss. You can choose to disable the formatting of a specific disk. This should be performed with caution, as bootable disks may interfere with the installation process, mainly in terms of boot order. You cannot disable formatting for the installation disk. Procedure Get the host and storage disk IDs. For details, see Viewing the storage disks . Run the following command: USD curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \ 1 -X PATCH \ -H "Content-Type: application/json" \ -H "Authorization: Bearer USD{API_TOKEN}" \ { "disks_skip_formatting": [ { "disk_id": "<disk_id>", 2 "skip_formatting": true 3 } ] } Note 1 Replace <host_id> with the ID of the host. 2 Replace <disk_id> with the ID of the disk. If there is more than one disk, separate the IDs with a comma. 3 To re-enable formatting, change the value to false . 4.11. Adding custom manifests A custom manifest is a JSON or YAML file that contains advanced configurations not currently supported in the Assisted Installer user interface. You can create a custom manifest or use one provided by a third party. To create a custom manifest with the API, use the /v2/clusters/USDCLUSTER_ID/manifests endpoint. You can upload a base64-encoded custom manifest to either the openshift folder or the manifests folder with the Assisted Installer API. There is no limit to the number of custom manifests permitted. You can only upload one base64-encoded JSON manifest at a time. However, each uploaded base64-encoded YAML file can contain multiple custom manifests. 
Uploading a multi-document YAML manifest is faster than adding the YAML files individually. For a file containing a single custom manifest, accepted file extensions include .yaml , .yml , or .json . Single custom manifest example { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "MachineConfig", "metadata": { "labels": { "machineconfiguration.openshift.io/role": "primary" }, "name": "10_primary_storage_config" }, "spec": { "config": { "ignition": { "version": "3.2.0" }, "storage": { "disks": [ { "device": "</dev/xxyN>", "partitions": [ { "label": "recovery", "startMiB": 32768, "sizeMiB": 16384 } ] } ], "filesystems": [ { "device": "/dev/disk/by-partlabel/recovery", "label": "recovery", "format": "xfs" } ] } } } } For a file containing multiple custom manifests, accepted file types include .yaml or .yml . Multiple custom manifest example apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 --- apiVersion: machineconfiguration.openshift.io/v2 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-openshift-machineconfig-worker-kargs spec: kernelArguments: - loglevel=5 Note When you install OpenShift Container Platform on the Oracle Cloud Infrastructure (OCI) external platform, you must add the custom manifests provided by Oracle. For additional external partner integrations such as vSphere or Nutanix, this step is optional. For more information about custom manifests, see Additional Resources . Prerequisites You have generated a valid API_TOKEN . Tokens expire every 15 minutes. You have registered a new cluster definition and exported the cluster_id to the USDCLUSTER_ID BASH variable. Procedure Create a custom manifest file. Save the custom manifest file using the appropriate extension for the file format. Refresh the API token: USD source refresh-token Add the custom manifest to the cluster by executing the following command: USD curl -X POST "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/manifests" \ -H "Authorization: Bearer USDAPI_TOKEN" \ -H "Content-Type: application/json" \ -d '{ "file_name":"manifest.json", "folder":"manifests", "content":"'"USD(base64 -w 0 ~/manifest.json)"'" }' | jq Replace manifest.json with the name of your manifest file. The second instance of manifest.json is the path to the file. Ensure the path is correct. Example output { "file_name": "manifest.json", "folder": "manifests" } Note The base64 -w 0 command base64-encodes the manifest as a string and omits carriage returns. Encoding with carriage returns will generate an exception. Verify that the Assisted Installer added the manifest: USD curl -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/manifests/files?folder=manifests&file_name=manifest.json" -H "Authorization: Bearer USDAPI_TOKEN" Replace manifest.json with the name of your manifest file. Additional resources Manifest configuration files Multi-document YAML files 4.12. Preinstallation validations The Assisted Installer ensures the cluster meets the prerequisites before installation, because it eliminates complex postinstallation troubleshooting, thereby saving significant amounts of time and effort. Before installing the cluster, ensure the cluster and each host pass preinstallation validation. Additional resources Preinstallation validations 4.13. 
Installing the cluster Once the cluster hosts pass validation, you can install the cluster. Prerequisites You have created a cluster and infrastructure environment. You have added hosts to the infrastructure environment. The hosts have passed validation. Procedure Refresh the API token: USD source refresh-token Install the cluster: USD curl -H "Authorization: Bearer USDAPI_TOKEN" \ -X POST \ https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/actions/install | jq Complete any postinstallation platform integration steps. Additional resources Nutanix postinstallation configuration vSphere postinstallation configuration
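After triggering the installation, you can track its progress by polling the same clusters endpoint used earlier in this chapter. The following is a minimal sketch rather than part of the official procedure; it assumes the status field returned by the clusters endpoint, with installed used here as the expected terminal value, and it reuses the refresh-token script because API tokens expire every 15 minutes:

# Poll the installation status until the cluster reports 'installed' (sketch).
while true; do
  source refresh-token                              # tokens expire every 15 minutes
  STATUS=$(curl -s -H "Authorization: Bearer ${API_TOKEN}" \
    "https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}" | jq -r '.status')
  echo "Cluster status: ${STATUS}"
  if [ "${STATUS}" = "installed" ]; then
    break
  fi
  sleep 60
done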
[ "export OFFLINE_TOKEN=<copied_token>", "echo USD{OFFLINE_TOKEN}", "export API_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )", "echo USD{API_TOKEN}", "vim ~/.local/bin/refresh-token", "export API_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )", "chmod +x ~/.local/bin/refresh-token", "source refresh-token", "curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H \"Authorization: Bearer USD{API_TOKEN}\" | jq", "{ \"release_tag\": \"v2.11.3\", \"versions\": { \"assisted-installer\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-211\", \"assisted-installer-controller\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-266\", \"assisted-installer-service\": \"quay.io/app-sre/assisted-service:78d113a\", \"discovery-agent\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-195\" } }", "{\"auths\":{\"cloud.openshift.com\":", "{\\\"auths\\\":{\\\"cloud.openshift.com\\\":", "export PULL_SECRET=USD(cat ~/Downloads/pull-secret.txt | jq -R .)", "curl https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' 1 { \"name\": \"testcluster\", \"control_plane_count\": \"3\", \"openshift_version\": \"4.11\", \"pull_secret\": USDpull_secret[0] | tojson, 2 \"base_dns_domain\": \"example.com\" } ')\"", "echo USD{PULL_SECRET}", "cat /root/.ssh/id_rsa.pub", "CLUSTER_SSHKEY=<downloaded_ssh_key>", "echo USD{CLUSTER_SSHKEY}", "source refresh-token", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.16\", \\ 1 \"control_plane_count\": \"<number>\", \\ 2 \"cpu_architecture\" : \"<architecture_name>\", \\ 3 \"base_dns_domain\": \"example.com\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "cat << EOF > cluster.json { \"name\": \"testcluster\", \"openshift_version\": \"4.16\", 1 \"control_plane_count\": \"<number>\", 2 \"base_dns_domain\": \"example.com\", \"network_type\": \"examplenetwork\", \"cluster_network_cidr\":\"11.111.1.0/14\" \"cluster_network_host_prefix\": 11, \"service_network_cidr\": \"111.11.1.0/16\", \"api_vips\":[{\"ip\": \"\"}], \"ingress_vips\": [{\"ip\": \"\"}], \"vip_dhcp_allocation\": false, \"additional_ntp_source\": \"clock.redhat.com,clock2.redhat.com\", \"ssh_public_key\": \"USDCLUSTER_SSHKEY\", \"pull_secret\": USDPULL_SECRET } EOF", "curl -s -X POST \"https://api.openshift.com/api/assisted-install/v2/clusters\" -d @./cluster.json -H \"Content-Type: application/json\" -H \"Authorization: 
Bearer USDAPI_TOKEN\" | jq '.id'", "export CLUSTER_ID=<cluster_id>", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq", "curl -s -X POST https://api.openshift.com/api/assisted-install/v2/clusters -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"name\": \"testcluster\", \"openshift_version\": \"4.15\", \"cpu_architecture\" : \"x86_64\", \"base_dns_domain\": \"example.com\", \"olm_operators\": [ { \"name\": \"mce\" } 1 , { \"name\": \"odf\" } ] \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"schedulable_masters\": true 1 } ' | jq", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"ssh_public_key\": \"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDZrD4LMkAEeoU2vShhF8VM+cCZtVRgB7tqtsMxms2q3TOJZAgfuqReKYWm+OLOZTD+DO3Hn1pah/mU3u7uJfTUg4wEX0Le8zBu9xJVym0BVmSFkzHfIJVTn6SfZ81NqcalisGWkpmkKXVCdnVAX6RsbHfpGKk9YPQarmRCn5KzkelJK4hrSWpBPjdzkFXaIpf64JBZtew9XVYA3QeXkIcFuq7NBuUH9BonroPEmIXNOa41PUP1IWq3mERNgzHZiuU8Ks/pFuU5HCMvv4qbTOIhiig7vidImHPpqYT/TCkuVi5w0ZZgkkBeLnxWxH0ldrfzgFBYAxnpTU8Ih/4VhG538Ix1hxPaM6cXds2ic71mBbtbSrk+zjtNPaeYk1O7UpcCw4jjHspU/rVV/DY51D5gSiiuaFPBMucnYPgUxy4FMBFfGrmGLIzTKiLzcz0DiSz1jBeTQOX++1nz+KDLBD8CPdi5k4dq7lLkapRk85qdEvgaG5RlHMSPSS3wDrQ51fD8= user@hostname\" } ' | jq", "curl https://api.openshift.com/api/assisted-install/v2/clusters/USD{CLUSTER_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"olm_operators\": [{\"name\": \"mce\"}, {\"name\": \"cnv\"}], 1 } ' | jq '.id'", "{ <various cluster properties>, \"monitored_operators\": [ { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"console\", \"operator_type\": \"builtin\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"cvo\", \"operator_type\": \"builtin\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"mce\", \"namespace\": \"multicluster-engine\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"multicluster-engine\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"cnv\", \"namespace\": \"openshift-cnv\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"hco-operatorhub\", \"timeout_seconds\": 3600 }, { \"cluster_id\": \"b5259f97-be09-430e-b5eb-d78420ee509a\", \"name\": \"lvm\", \"namespace\": \"openshift-local-storage\", \"operator_type\": \"olm\", \"status_updated_at\": \"0001-01-01T00:00:00.000Z\", \"subscription_name\": \"local-storage-operator\", \"timeout_seconds\": 4200 } ], <more cluster properties>", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input 
--slurpfile pull_secret ~/Downloads/pull-secret.txt --arg cluster_id USD{CLUSTER_ID} ' { \"name\": \"testcluster-infra-env\", \"image_type\":\"full-iso\", \"cluster_id\": USDcluster_id, \"cpu_architecture\" : \"<architecture_name>\", 1 \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq '.id'", "cat << EOF > infra-envs.json { \"name\": \"testcluster\", \"pull_secret\": USDPULL_SECRET, \"proxy\": { \"http_proxy\": \"\", \"https_proxy\": \"\", \"no_proxy\": \"\" }, \"ssh_authorized_key\": \"USDCLUSTER_SSHKEY\", \"image_type\": \"full-iso\", \"cluster_id\": \"USD{CLUSTER_ID}\", \"openshift_version\": \"4.11\" } EOF", "curl -s -X POST \"https://api.openshift.com/api/assisted-install/v2/infra-envs\" -d @./infra-envs.json -H \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.id'", "export INFRA_ENV_ID=<id>", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"image_type\":\"minimal-iso\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq", "source refresh-token", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID} -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d \"USD(jq --null-input --slurpfile pull_secret ~/Downloads/pull-secret.txt ' { \"kernel_arguments\": [{ \"operation\": \"append\", \"value\": \"<karg>=<value>\" }], 1 \"image_type\":\"minimal-iso\", \"pull_secret\": USDpull_secret[0] | tojson } ')\" | jq", "source refresh-token", "curl -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/downloads/image-url", "{ \"expires_at\": \"2024-02-07T20:20:23.000Z\", \"url\": \"https://api.openshift.com/api/assisted-images/bytoken/<TOKEN>/<OCP_VERSION>/<CPU_ARCHITECTURE>/<FULL_OR_MINIMAL_IMAGE>.iso\" }", "wget -O discovery.iso <url>", "source refresh-token", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" \"host_name\" : \"worker-1\" } ' | jq", "source refresh-token", "curl -s \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'", "\"1022623e-7689-8b2d-7fbd-e6f4d5bb28e5\"", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '.inventory | fromjson | .disks'", "[ { \"by_id\": \"/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506\", \"by_path\": \"/dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0\", \"drive_type\": \"HDD\", \"has_uuid\": true, \"hctl\": \"1:2:0:0\", \"id\": \"/dev/disk/by-id/wwn-0x6c81f660f98afb002d3adc1a1460a506\", \"installation_eligibility\": { \"eligible\": true, \"not_eligible_reasons\": null }, \"model\": \"PERC_H710P\", \"name\": \"sda\", \"path\": \"/dev/sda\", \"serial\": \"0006a560141adc3a2d00fb8af960f681\", \"size_bytes\": 6595056500736, \"vendor\": \"DELL\", \"wwn\": 
\"0x6c81f660f98afb002d3adc1a1460a506\" } ]", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -H \"Authorization: Bearer USD{API_TOKEN}\" | jq '.installation_disk_id'", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -X PATCH -H \"Content-Type: application/json\" -H \"Authorization: Bearer USD{API_TOKEN}\" { \"disks_selected_config\": [ { \"id\": \"<disk_id>\", 2 \"role\": \"install\" } ] }", "curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> \\ 1 -X PATCH -H \"Content-Type: application/json\" -H \"Authorization: Bearer USD{API_TOKEN}\" { \"disks_skip_formatting\": [ { \"disk_id\": \"<disk_id>\", 2 \"skip_formatting\": true 3 } ] }", "{ \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"MachineConfig\", \"metadata\": { \"labels\": { \"machineconfiguration.openshift.io/role\": \"primary\" }, \"name\": \"10_primary_storage_config\" }, \"spec\": { \"config\": { \"ignition\": { \"version\": \"3.2.0\" }, \"storage\": { \"disks\": [ { \"device\": \"</dev/xxyN>\", \"partitions\": [ { \"label\": \"recovery\", \"startMiB\": 32768, \"sizeMiB\": 16384 } ] } ], \"filesystems\": [ { \"device\": \"/dev/disk/by-partlabel/recovery\", \"label\": \"recovery\", \"format\": \"xfs\" } ] } } } }", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 --- apiVersion: machineconfiguration.openshift.io/v2 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-openshift-machineconfig-worker-kargs spec: kernelArguments: - loglevel=5", "source refresh-token", "curl -X POST \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/manifests\" -H \"Authorization: Bearer USDAPI_TOKEN\" -H \"Content-Type: application/json\" -d '{ \"file_name\":\"manifest.json\", \"folder\":\"manifests\", \"content\":\"'\"USD(base64 -w 0 ~/manifest.json)\"'\" }' | jq", "{ \"file_name\": \"manifest.json\", \"folder\": \"manifests\" }", "curl -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/manifests/files?folder=manifests&file_name=manifest.json\" -H \"Authorization: Bearer USDAPI_TOKEN\"", "source refresh-token", "curl -H \"Authorization: Bearer USDAPI_TOKEN\" -X POST https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID/actions/install | jq" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_openshift_container_platform_with_the_assisted_installer/installing-with-api
Installing Red Hat Developer Hub on OpenShift Container Platform
Installing Red Hat Developer Hub on OpenShift Container Platform Red Hat Developer Hub 1.2 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/installing_red_hat_developer_hub_on_openshift_container_platform/index
Chapter 121. Google Drive Component
Chapter 121. Google Drive Component Available as of Camel version 2.14 The Google Drive component provides access to the Google Drive file storage service via the Google Drive Web APIs . Google Drive uses the OAuth 2.0 protocol for authenticating a Google account and authorizing access to user data. Before you can use this component, you will need to create an account and generate OAuth credentials . Credentials comprise of a clientId, clientSecret, and a refreshToken. A handy resource for generating a long-lived refreshToken is the OAuth playground . Maven users will need to add the following dependency to their pom.xml for this component: 121.1. URI Format The GoogleDrive Component uses the following URI format: Endpoint prefix can be one of: drive-about drive-apps drive-changes drive-channels drive-children drive-comments drive-files drive-parents drive-permissions drive-properties drive-realtime drive-replies drive-revisions 121.2. GoogleDriveComponent The Google Drive component supports 3 options, which are listed below. Name Description Default Type configuration (common) To use the shared configuration GoogleDrive Configuration clientFactory (advanced) To use the GoogleCalendarClientFactory as factory for creating the client. Will by default use BatchGoogleDriveClientFactory GoogleDriveClient Factory resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Google Drive endpoint is configured using URI syntax: with the following path and query parameters: 121.2.1. Path Parameters (2 parameters): Name Description Default Type apiName Required What kind of operation to perform GoogleDriveApiName methodName Required What sub operation to use for the selected operation String 121.2.2. Query Parameters (12 parameters): Name Description Default Type accessToken (common) OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage. String applicationName (common) Google drive application name. Example would be camel-google-drive/1.0 String clientFactory (common) To use the GoogleCalendarClientFactory as factory for creating the client. Will by default use BatchGoogleDriveClientFactory GoogleDriveClient Factory clientId (common) Client ID of the drive application String clientSecret (common) Client secret of the drive application String inBody (common) Sets the name of a parameter to be passed in the exchange In Body String refreshToken (common) OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived. String scopes (common) Specifies the level of permissions you want a drive application to have to a user account. See https://developers.google.com/drive/web/scopes for more info. List bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. 
Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 121.3. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.google-drive.client-factory To use the GoogleCalendarClientFactory as factory for creating the client. Will by default use BatchGoogleDriveClientFactory. The option is a org.apache.camel.component.google.drive.GoogleDriveClientFactory type. String camel.component.google-drive.configuration.access-token OAuth 2 access token. This typically expires after an hour so refreshToken is recommended for long term usage. String camel.component.google-drive.configuration.api-name What kind of operation to perform GoogleDriveApiName camel.component.google-drive.configuration.application-name Google drive application name. Example would be camel-google-drive/1.0 String camel.component.google-drive.configuration.client-id Client ID of the drive application String camel.component.google-drive.configuration.client-secret Client secret of the drive application String camel.component.google-drive.configuration.method-name What sub operation to use for the selected operation String camel.component.google-drive.configuration.refresh-token OAuth 2 refresh token. Using this, the Google Calendar component can obtain a new accessToken whenever the current one expires - a necessity if the application is long-lived. String camel.component.google-drive.configuration.scopes Specifies the level of permissions you want a drive application to have to a user account. See https://developers.google.com/drive/web/scopes for more info. List camel.component.google-drive.enabled Enable google-drive component true Boolean camel.component.google-drive.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 121.4. Producer Endpoints Producer endpoints can use endpoint prefixes followed by endpoint names and associated options described . A shorthand alias can be used for some endpoints. The endpoint URI MUST contain a prefix. Endpoint options that are not mandatory are denoted by []. When there are no mandatory options for an endpoint, one of the set of [] options MUST be provided. Producer endpoints can also use a special option inBody that in turn should contain the name of the endpoint option whose value will be contained in the Camel Exchange In message. Any of the endpoint options can be provided in either the endpoint URI, or dynamically in a message header. The message header name must be of the format CamelGoogleDrive.<option> . Note that the inBody option overrides message header, i.e. the endpoint option inBody=option would override a CamelGoogleDrive.option header. For more information on the endpoints and options see API documentation at: https://developers.google.com/drive/v2/reference/ 121.5. Consumer Endpoints Any of the producer endpoints can be used as a consumer endpoint. Consumer endpoints can use Scheduled Poll Consumer Options with a consumer. 
prefix to schedule endpoint invocation. Consumer endpoints that return an array or collection will generate one exchange per element, and their routes will be executed once for each exchange. 121.6. Message Headers Any URI option can be provided in a message header for producer endpoints with a CamelGoogleDrive. prefix. 121.7. Message Body All result message bodies utilize objects provided by the underlying APIs used by the GoogleDriveComponent. Producer endpoints can specify the option name for incoming message body in the inBody endpoint URI parameter. For endpoints that return an array or collection, a consumer endpoint will map every element to distinct messages.
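To tie the URI format, endpoint prefixes, and options together, the following is a sketch of a producer endpoint URI. The drive-files prefix comes from the list above, while the list method name and the placeholder credential values are assumptions based on the Google Drive v2 API reference rather than values defined in this document:

google-drive://drive-files/list?clientId=<clientId>&clientSecret=<clientSecret>&refreshToken=<refreshToken>

The same options could instead be supplied per message through CamelGoogleDrive.<option> headers, or once on the component through the shared configuration, which keeps individual endpoint URIs short.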
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-google-drive</artifactId> <version>2.14-SNAPSHOT</version> </dependency>", "google-drive://endpoint-prefix/endpoint?[options]", "google-drive:apiName/methodName" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/google-drive-component
Chapter 1. Overview
Chapter 1. Overview Red Hat Quay includes the following features: High availability Geo-replication Repository mirroring Docker v2, schema 2 (multi-arch) support Continuous integration Security scanning with Clair Custom log rotation Zero downtime garbage collection 24/7 support Red Hat Quay provides support for the following: Multiple authentication and access methods Multiple storage backends Custom certificates for Quay, Clair, and storage backends Application registries Different container image types 1.1. Architecture Red Hat Quay includes several core components, both internal and external. 1.1.1. Internal components Red Hat Quay includes the following internal components: Quay (container registry) . Runs the Quay container as a service, consisting of several components in the pod. Clair . Scans container images for vulnerabilities and suggests fixes. 1.1.2. External components Red Hat Quay includes the following external components: Database . Used by Red Hat Quay as its primary metadata storage. Note that this is not for image storage. Redis (key-value store) . Stores live builder logs and the Red Hat Quay tutorial. Also includes the locking mechanism that is required for garbage collection. Cloud storage . For supported deployments, one of the following storage types must be used: Public cloud storage . In public cloud environments, you should use the cloud provider's object storage, such as Amazon Web Services's Amazon S3 or Google Cloud's Google Cloud Storage. Private cloud storage . In private clouds, an S3 or Swift compliant Object Store is needed, such as Ceph RADOS, or OpenStack Swift. Warning Do not use "Locally mounted directory" Storage Engine for any production configurations. Mounted NFS volumes are not supported. Local storage is meant for Red Hat Quay test-only installations.
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/deploy_red_hat_quay_-_high_availability/poc-overview
Chapter 8. Uninstalling the Migration Toolkit for Virtualization
Chapter 8. Uninstalling the Migration Toolkit for Virtualization You can uninstall the Migration Toolkit for Virtualization (MTV) by using the Red Hat OpenShift web console or the command line interface (CLI). 8.1. Uninstalling MTV by using the Red Hat OpenShift web console You can uninstall Migration Toolkit for Virtualization (MTV) by using the Red Hat OpenShift web console to delete the openshift-mtv project and custom resource definitions (CRDs). Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Click Home Projects . Locate the openshift-mtv project. On the right side of the project, select Delete Project from the Options menu . In the Delete Project pane, enter the project name and click Delete . Click Administration CustomResourceDefinitions . Enter forklift in the Search field to locate the CRDs in the forklift.konveyor.io group. On the right side of each CRD, select Delete CustomResourceDefinition from the Options menu . 8.2. Uninstalling MTV from the command line interface You can uninstall Migration Toolkit for Virtualization (MTV) from the command line interface (CLI) by deleting the openshift-mtv project and the forklift.konveyor.io custom resource definitions (CRDs). Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the project: USD oc delete project openshift-mtv Delete the CRDs: USD oc get crd -o name | grep 'forklift' | xargs oc delete Delete the OAuthClient: USD oc delete oauthclient/forklift-ui
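Whichever method you use, you can optionally confirm that no MTV resources remain. The following commands are a minimal verification sketch using standard oc queries and are not part of the official procedure:

# Confirm that the project, CRDs, and OAuthClient have been removed.
oc get project openshift-mtv          # expected: a NotFound error
oc get crd -o name | grep 'forklift'  # expected: no output
oc get oauthclient forklift-ui        # expected: a NotFound error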
[ "oc delete project openshift-mtv", "oc get crd -o name | grep 'forklift' | xargs oc delete", "oc delete oauthclient/forklift-ui" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/installing_and_using_the_migration_toolkit_for_virtualization/uninstalling-mtv_mtv
Chapter 1. Overview
Chapter 1. Overview From the perspective of a Ceph client, interacting with the Ceph storage cluster is remarkably simple: Connect to the Cluster Create a Pool I/O Context This remarkably simple interface is how a Ceph client selects one of the storage strategies you define. Storage strategies are invisible to the Ceph client in all but storage capacity and performance. The diagram below shows the logical data flow starting from the client into the Red Hat Ceph Storage cluster. 1.1. What are Storage Strategies? A storage strategy is a method of storing data that serves a particular use case. For example, if you need to store volumes and images for a cloud platform like OpenStack, you might choose to store data on reasonably performant SAS drives with SSD-based journals. By contrast, if you need to store object data for an S3- or Swift-compliant gateway, you might choose to use something more economical, like SATA drives. Ceph can accommodate both scenarios in the same Ceph cluster, but you need a means of providing the SAS/SSD storage strategy to the cloud platform (for example, Glance and Cinder in OpenStack), and a means of providing SATA storage for your object store. Storage strategies include the storage media (hard drives, SSDs, and the rest), the CRUSH maps that set up performance and failure domains for the storage media, the number of placement groups, and the pool interface. Ceph supports multiple storage strategies. Use cases, cost/benefit performance tradeoffs and data durability are the primary considerations that drive storage strategies. Use Cases: Ceph provides massive storage capacity, and it supports numerous use cases. For example, the Ceph Block Device client is a leading storage backend for cloud platforms like OpenStack- providing limitless storage for volumes and images with high performance features like copy-on-write cloning. By contrast, the Ceph Object Gateway client is a leading storage backend for cloud platforms that provides RESTful S3-compliant and Swift-compliant object storage for objects like audio, bitmap, video and other data. Cost/Benefit of Performance: Faster is better. Bigger is better. High durability is better. However, there is a price for each superlative quality, and a corresponding cost/benefit trade off. Consider the following use cases from a performance perspective: SSDs can provide very fast storage for relatively small amounts of data and journaling. Storing a database or object index might benefit from a pool of very fast SSDs, but prove too expensive for other data. SAS drives with SSD journaling provide fast performance at an economical price for volumes and images. SATA drives without SSD journaling provide cheap storage with lower overall performance. When you create a CRUSH hierarchy of OSDs, you need to consider the use case and an acceptable cost/performance trade off. Durability: In large scale clusters, hardware failure is an expectation, not an exception. However, data loss and service interruption remain unacceptable. For this reason, data durability is very important. Ceph addresses data durability with multiple deep copies of an object or with erasure coding and multiple coding chunks. Multiple copies or multiple coding chunks present an additional cost/benefit tradeoff: it's cheaper to store fewer copies or coding chunks, but it might lead to the inability to service write requests in a degraded state. 
Generally, one object with two additional copies (that is, size = 3 ) or two coding chunks might allow a cluster to service writes in a degraded state while the cluster recovers. The CRUSH algorithm aids this process by ensuring that Ceph stores additional copies or coding chunks in different locations within the cluster. This ensures that the failure of a single storage device or node doesn't lead to a loss of all of the copies or coding chunks necessary to preclude data loss. You can capture use cases, cost/benefit performance tradeoffs and data durability in a storage strategy and present it to a Ceph client as a storage pool. Important Ceph's object copies or coding chunks make RAID obsolete. Do not use RAID, because Ceph already handles data durability, a degraded RAID has a negative impact on performance, and recovering data using RAID is substantially slower than using deep copies or erasure coding chunks. 1.2. Configuring Storage Strategies Configuring storage strategies is about assigning Ceph OSDs to a CRUSH hierarchy, defining the number of placement groups for a pool, and creating a pool. The general steps are: Define a Storage Strategy: Storage strategies require you to analyze your use case, cost/benefit performance tradeoffs and data durability. Then, you create OSDs suitable for that use case. For example, you can create SSD-backed OSDs for a high performance pool; SAS drive/SSD journal-backed OSDs for high-performance block device volumes and images; or, SATA-backed OSDs for low cost storage. Ideally, each OSD for a use case should have the same hardware configuration so that you have a consistent performance profile. Define a CRUSH Hierarchy: Ceph rules select a node (usually the root ) in a CRUSH hierarchy, and identify the appropriate OSDs for storing placement groups and the objects they contain. You must create a CRUSH hierarchy and a CRUSH rule for your storage strategy. CRUSH hierarchies get assigned directly to a pool by the CRUSH rule setting. Calculate Placement Groups: Ceph shards a pool into placement groups. You need to set an appropriate number of placement groups for your pool, and remain within a healthy maximum number of placement groups in the event that you assign multiple pools to the same CRUSH rule. Create a Pool: Finally, you must create a pool and determine whether it uses replicated or erasure-coded storage. You must set the number of placement groups for the pool, the rule for the pool and the durability (size or K+M coding chunks). Remember, the pool is the Ceph client's interface to the storage cluster, but the storage strategy is completely transparent to the Ceph client (except for capacity and performance).
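As a brief illustration of these steps, the following command-line sketch creates one possible strategy: a replicated pool backed by a CRUSH rule that selects SSD-class OSDs and replicates across hosts. The rule name, pool name, device class, and placement-group count are illustrative assumptions, not values prescribed by this guide:

# Create a CRUSH rule that places replicas on separate hosts using SSD-class OSDs.
ceph osd crush rule create-replicated ssd-rule default host ssd

# Create a replicated pool with an example placement-group count and bind it to the rule.
ceph osd pool create fast-volumes 128 128 replicated ssd-rule

# Keep one object plus two additional copies (size = 3), as discussed above.
ceph osd pool set fast-volumes size 3

A Ceph client then simply writes to the fast-volumes pool; the CRUSH rule and replica placement behind it remain invisible to the client, exactly as described above.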
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/storage_strategies_guide/overview
18.7.2. Routed Mode
18.7.2. Routed Mode This section provides information about routed mode. DMZ Consider a network where one or more nodes are placed in a controlled subnetwork for security reasons. The deployment of a special subnetwork such as this is a common practice, and the subnetwork is known as a DMZ. Refer to the following diagram for more details on this layout: Figure 18.8. Sample DMZ configuration Host physical machines in a DMZ typically provide services to WAN (external) host physical machines as well as LAN (internal) host physical machines. As this requires them to be accessible from multiple locations, and considering that these locations are controlled and operated in different ways based on their security and trust level, routed mode is the best configuration for this environment. Virtual Server Hosting Consider a virtual server hosting company that has several host physical machines, each with two physical network connections. One interface is used for management and accounting, and the other is for the virtual machines to connect through. Each guest has its own public IP address, but the host physical machines use private IP addresses, as management of the guests can only be performed by internal administrators. Refer to the following diagram to understand this scenario: Figure 18.9. Virtual server hosting sample configuration
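To ground these scenarios, the following is a minimal sketch of how a routed network could be defined with libvirt. The network name, the physical interface eth0, and the addressing are illustrative assumptions, not values taken from this guide:

# Define, start, and autostart a routed libvirt network (illustrative addressing).
cat > /tmp/routed-net.xml << 'EOF'
<network>
  <name>routed-net</name>
  <forward mode='route' dev='eth0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.50'/>
    </dhcp>
  </ip>
</network>
EOF
virsh net-define /tmp/routed-net.xml
virsh net-start routed-net
virsh net-autostart routed-net

Guests attached to routed-net receive addresses in the routed subnetwork, and their traffic is routed (not translated) through the host physical machine's eth0 interface, matching the DMZ and virtual server hosting layouts shown in the figures.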
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-routed-mode
5.8. Adding a Cluster Service to the Cluster
5.8. Adding a Cluster Service to the Cluster To add a cluster service to the cluster, follow these steps: At the left frame, click Services . At the bottom of the right frame (labeled Properties ), click the Create a Service button. Clicking Create a Service causes the Add a Service dialog box to be displayed. At the Add a Service dialog box, type the name of the service in the Name text box and click OK . Clicking OK causes the Service Management dialog box to be displayed (refer to Figure 5.12, "Adding a Cluster Service" ). Note Use a descriptive name that clearly distinguishes the service from other services in the cluster. Figure 5.12. Adding a Cluster Service If you want to restrict the members on which this cluster service is able to run, choose a failover domain from the Failover Domain drop-down box. (Refer to Section 5.6, "Configuring a Failover Domain" for instructions on how to configure a failover domain.) Autostart This Service checkbox - This is checked by default. If Autostart This Service is checked, the service is started automatically when a cluster is started and running. If Autostart This Service is not checked, the service must be started manually any time the cluster comes up from the stopped state. Run Exclusive checkbox - This sets a policy wherein the service only runs on nodes that have no other services running on them. For example, for a very busy web server that is clustered for high availability, it would be advisable to keep that service on a node alone with no other services competing for its resources - that is, Run Exclusive checked. On the other hand, services that consume few resources (like NFS and Samba) can run together on the same node with little concern over contention for resources. For those types of services you can leave Run Exclusive unchecked. Note Circumstances that require enabling Run Exclusive are rare. Enabling Run Exclusive can render a service offline if the node it is running on fails and no other nodes are empty. Select a recovery policy to specify how the resource manager should recover from a service failure. At the upper right of the Service Management dialog box, there are three Recovery Policy options available: Restart - Restart the service on the node where the service is currently located. The default setting is Restart . If the service cannot be restarted on the current node, the service is relocated. Relocate - Relocate the service before restarting. Do not restart the node where the service is currently located. Disable - Do not restart the service at all. Click the Add a Shared Resource to this service button and choose a resource listed that you have configured in Section 5.7, "Adding Cluster Resources" . Note If you are adding a Samba-service resource, connect a Samba-service resource directly to the service, not to a resource within a service. That is, at the Service Management dialog box, use either Create a new resource for this service or Add a Shared Resource to this service ; do not use Attach a new Private Resource to the Selection or Attach a Shared Resource to the selection . If needed, you may also create a private resource that becomes a subordinate resource by clicking on the Attach a new Private Resource to the Selection button. The process is the same as creating a shared resource described in Section 5.7, "Adding Cluster Resources" . The private resource will appear as a child of the shared resource with which you associated it.
Click the triangle icon next to the shared resource to display any associated private resources. When finished, click OK . Choose File => Save to save the changes to the cluster configuration. Note To verify the existence of the IP service resource used in a cluster service, you must use the /sbin/ip addr list command on a cluster node. The following output shows the /sbin/ip addr list command executed on a node running a cluster service:
[ "1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eth0: <BROADCAST,MULTICAST,UP> mtu 1356 qdisc pfifo_fast qlen 1000 link/ether 00:05:5d:9a:d8:91 brd ff:ff:ff:ff:ff:ff inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0 inet6 fe80::205:5dff:fe9a:d891/64 scope link inet 10.11.4.240/22 scope global secondary eth0 valid_lft forever preferred_lft forever" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-add-service-ca
Chapter 6. Installing a cluster on Alibaba Cloud into an existing VPC
Chapter 6. Installing a cluster on Alibaba Cloud into an existing VPC In OpenShift Container Platform version 4.15, you can install a cluster into an existing Alibaba Virtual Private Cloud (VPC) on Alibaba Cloud Services. The installation program provisions the required infrastructure, which can then be customized. To customize the VPC installation, modify the parameters in the 'install-config.yaml' file before you install the cluster. Note The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You registered your domain . If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials . 6.2. Using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster into existing subnets in an existing Virtual Private Cloud (VPC) in the Alibaba Cloud Platform. By deploying OpenShift Container Platform into an existing Alibaba VPC, you can avoid limit constraints in new accounts and more easily adhere to your organization's operational constraints. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. You must configure networking using vSwitches. 6.2.1. Requirements for using your VPC The union of the VPC CIDR block and the machine network CIDR must be non-empty. The vSwitches must be within the machine network. The installation program does not create the following components: VPC vSwitches Route table NAT gateway Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 6.2.2. VPC validation To ensure that the vSwitches you provide are suitable, the installation program confirms the following data: All the vSwitches that you specify must exist. You have provided one or more vSwitches for control plane machines and compute machines. The vSwitches' CIDRs belong to the machine CIDR that you specified. 6.2.3. Division of permissions Some individuals can create different resources in your cloud than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components, such as VPCs or vSwitches. 6.2.4. 
Isolation between clusters If you deploy OpenShift Container Platform into an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed to the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 6.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.4. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.5.1. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud. 
Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select alibabacloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Provide a descriptive name for your cluster. Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the install-config.yaml file to set the credentialsMode parameter to Manual : Example install-config.yaml configuration file with credentialsMode set to Manual apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 Add this line to set the credentialsMode to Manual . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Alibaba Cloud 6.5.2. Sample customized install-config.yaml file for Alibaba Cloud You can customize the installation configuration file ( install-config.yaml ) to specify more details about your cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{"auths": {"cloud.openshift.com": {"auth": ... }' 8 sshKey: | ssh-rsa AAAA... 9 1 Required. The installation program prompts you for a cluster name. 2 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 3 Optional. Specify parameters for machine pools that do not define their own platform configuration. 4 Required. The installation program prompts you for the region to deploy the cluster to. 5 Optional. Specify an existing resource group where the cluster should be installed. 8 Required. The installation program prompts you for the pull secret. 9 Optional. The installation program prompts you for the SSH key value that you use to access the machines in your cluster. 6 7 Optional. These are example vswitchID values. 6.5.3. Generating the required installation manifests You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the directory in which the installation program creates files. 6.5.4. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. 
Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.5.5. Creating credentials for OpenShift Container Platform components with the ccoctl tool You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster. Added the AccessKeyID ( access_key_id ) and AccessKeySecret ( access_key_secret ) of that RAM user into the ~/.alibabacloud/credentials file on your local computer. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: Run the following command to use the tool: USD ccoctl alibabacloud create-ram-users \ --name <name> \ 1 --region=<alibaba_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> 4 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the Alibaba Cloud region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Specify the directory where the generated component credentials secrets will be placed. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. 
Example output 2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml ... Note A RAM user can have up to two AccessKeys at the same time. If you run ccoctl alibabacloud create-ram-users more than twice, the previously generated manifests secret becomes stale and you must reapply the newly generated secrets. Verify that the OpenShift Container Platform secrets are created: USD ls <path_to_ccoctl_output_dir>/manifests Example output openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies. Copy the generated credential files to the target manifests directory: USD cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/ where: <path_to_ccoctl_output_dir> Specifies the directory created by the ccoctl alibabacloud create-ram-users command. <path_to_installation_dir> Specifies the directory in which the installation program creates files. 6.6. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... 
INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. 
Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.8. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.9. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 6.10. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console 6.11. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting .
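After you log in, a few quick checks confirm that the cluster is healthy before you continue with validation and customization. The following is a minimal sketch using standard oc commands; the kubeconfig path is the one exported earlier and nothing else is assumed:
export KUBECONFIG=<installation_directory>/auth/kubeconfig
# the API should respond and identify you as system:admin
oc whoami
# every node should report a Ready status
oc get nodes
# every cluster Operator should settle at Available=True, Progressing=False, Degraded=False
oc get clusteroperators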
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled", "apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "ccoctl alibabacloud create-ram-users --name <name> \\ 1 --region=<alibaba_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> 4", "2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials 
configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml", "ls <path_to_ccoctl_output_dir>/manifests", "openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml", "cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_alibaba/installing-alibaba-vpc
Chapter 6. View OpenShift Data Foundation Topology
Chapter 6. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the main view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
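The Topology view is a web-console feature. If you want to cross-check the same information from the command line, the following sketch uses standard oc commands; the openshift-storage namespace is the default for OpenShift Data Foundation and the node name is a placeholder you must replace:
# overall health of the storage cluster backing the topology view
oc get storagecluster -n openshift-storage
# deployments that correspond to the entities shown inside a node
oc get deployments -n openshift-storage
# pods scheduled on one particular node (replace the node name)
oc get pods -n openshift-storage -o wide --field-selector spec.nodeName=<node_name>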
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_z/viewing-odf-topology_mcg-verify
Chapter 1. Overview
Chapter 1. Overview The Performance Tuning Guide is a comprehensive reference on the configuration and optimization of Red Hat Enterprise Linux. While this release also contains information on Red Hat Enterprise Linux 5 performance capabilities, all instructions supplied herein are specific to Red Hat Enterprise Linux 6. 1.1. How to read this book This book is divided into chapters discussing specific subsystems in Red Hat Enterprise Linux. The Performance Tuning Guide focuses on three major themes per subsystem: Features Each subsystem chapter describes performance features unique to (or implemented differently in) Red Hat Enterprise Linux 6. These chapters also discuss Red Hat Enterprise Linux 6 updates that significantly improved the performance of specific subsystems over Red Hat Enterprise Linux 5. Analysis The book also enumerates performance indicators for each specific subsystem. Typical values for these indicators are described in the context of specific services, helping you understand their significance in real-world, production systems. In addition, the Performance Tuning Guide also shows different ways of retrieving performance data (that is, profiling) for a subsystem. Note that some of the profiling tools showcased here are documented elsewhere with more detail. Configuration Perhaps the most important information in this book are instructions on how to adjust the performance of a specific subsystem in Red Hat Enterprise Linux 6. The Performance Tuning Guide explains how to fine-tune a Red Hat Enterprise Linux 6 subsystem for specific services. Keep in mind that tweaking a specific subsystem's performance may affect the performance of another, sometimes adversely. The default configuration of Red Hat Enterprise Linux 6 is optimal for most services running under moderate loads. The procedures enumerated in the Performance Tuning Guide were tested extensively by Red Hat engineers in both lab and field. However, Red Hat recommends that you properly test all planned configurations in a secure testing environment before applying it to your production servers. You should also back up all data and configuration information before you start tuning your system. 1.1.1. Audience This book is suitable for two types of readers: System/Business Analyst This book enumerates and explains Red Hat Enterprise Linux 6 performance features at a high level, providing enough information on how subsystems perform for specific workloads (both by default and when optimized). The level of detail used in describing Red Hat Enterprise Linux 6 performance features helps potential customers and sales engineers understand the suitability of this platform in providing resource-intensive services at an acceptable level. The Performance Tuning Guide also provides links to more detailed documentation on each feature whenever possible. At that detail level, readers can understand these performance features enough to form a high-level strategy in deploying and optimizing Red Hat Enterprise Linux 6. This allows readers to both develop and evaluate infrastructure proposals. This feature-focused level of documentation is suitable for readers with a high-level understanding of Linux subsystems and enterprise-level networks. System Administrator The procedures enumerated in this book are suitable for system administrators with RHCE [1] skill level (or its equivalent, that is, 3-5 years experience in deploying and managing Linux). 
The Performance Tuning Guide aims to provide as much detail as possible about the effects of each configuration; this means describing any performance trade-offs that may occur. The underlying skill in performance tuning lies not in knowing how to analyze and tune a subsystem. Rather, a system administrator adept at performance tuning knows how to balance and optimize a Red Hat Enterprise Linux 6 system for a specific purpose . This means also knowing which trade-offs and performance penalties are acceptable when attempting to implement a configuration designed to boost a specific subsystem's performance. [1] Red Hat Certified Engineer. For more information, refer to http://www.redhat.com/training/certifications/rhce/ .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/ch-intro
Chapter 18. Migrating a standalone Red Hat Quay deployment to a Red Hat Quay Operator deployment
Chapter 18. Migrating a standalone Red Hat Quay deployment to a Red Hat Quay Operator deployment The following procedures allow you to back up a standalone Red Hat Quay deployment and migrate it to the Red Hat Quay Operator on OpenShift Container Platform. 18.1. Backing up a standalone deployment of Red Hat Quay Procedure Back up the config.yaml of your standalone Red Hat Quay deployment: USD mkdir /tmp/quay-backup USD cp /path/to/Quay/config/directory/config.yaml /tmp/quay-backup Create a backup of the database that your standalone Red Hat Quay deployment is using: USD pg_dump -h DB_HOST -p 5432 -d QUAY_DATABASE_NAME -U QUAY_DATABASE_USER -W -O > /tmp/quay-backup/quay-database-backup.sql Install the AWS CLI if you do not have it already. Create an ~/.aws/ directory: USD mkdir ~/.aws/ Obtain the access_key and secret_key from the config.yaml of your standalone deployment: USD grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/config.yaml Example output: DISTRIBUTED_STORAGE_CONFIG: minio-1: - RadosGWStorage - access_key: ########## bucket_name: quay hostname: 172.24.10.50 is_secure: false port: "9000" secret_key: ########## storage_path: /datastorage/registry Store the access_key and secret_key from the config.yaml file in your ~/.aws directory: USD touch ~/.aws/credentials Write the access_key and secret_key into the ~/.aws/credentials file: USD cat > ~/.aws/credentials << EOF [default] aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG EOF Optional: Check that your access_key and secret_key are stored. The file should contain entries similar to the following: aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG Note If the aws CLI does not automatically collect the access_key and secret_key from the ~/.aws/credentials file, you can configure these by running aws configure and manually inputting the credentials. In your quay-backup directory, create a bucket-backup directory: USD mkdir /tmp/quay-backup/bucket-backup Back up all blobs from the S3 storage: USD aws s3 sync --no-verify-ssl --endpoint-url https://PUBLIC_S3_ENDPOINT:PORT s3://QUAY_BUCKET/ /tmp/quay-backup/bucket-backup/ Note The PUBLIC_S3_ENDPOINT can be read from the Red Hat Quay config.yaml file under hostname in the DISTRIBUTED_STORAGE_CONFIG . If the endpoint is insecure, use http instead of https in the endpoint URL. Up to this point, you should have a complete backup of all Red Hat Quay data, blobs, the database, and the config.yaml file stored locally. In the following section, you will migrate the standalone deployment backup to Red Hat Quay on OpenShift Container Platform. 18.2. Using backed up standalone content to migrate to OpenShift Container Platform. Prerequisites Your standalone Red Hat Quay data, blobs, database, and config.yaml have been backed up. Red Hat Quay is deployed on OpenShift Container Platform using the Red Hat Quay Operator. A QuayRegistry with all components set to managed . Procedure The procedure in this document uses the following namespace: quay-enterprise .
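Before you begin the migration steps that follow, it is worth confirming that the backup from the previous section is complete. This is an optional sanity check only, assuming the /tmp/quay-backup paths used above:
# the configuration and database dump should exist and be non-empty
ls -lh /tmp/quay-backup/config.yaml /tmp/quay-backup/quay-database-backup.sql
# the blob backup should contain your registry data
du -sh /tmp/quay-backup/bucket-backup/
# a PostgreSQL dump starts with comment headers similar to '-- PostgreSQL database dump'
head -n 5 /tmp/quay-backup/quay-database-backup.sql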
Scale down the Red Hat Quay Operator: USD oc scale --replicas=0 deployment quay-operator.v3.6.2 -n openshift-operators Scale down the application and mirror deployments: USD oc scale --replicas=0 deployment QUAY_MAIN_APP_DEPLOYMENT QUAY_MIRROR_DEPLOYMENT Copy the database SQL backup to the Quay PostgreSQL database instance: USD oc cp /tmp/user/quay-backup/quay-database-backup.sql quay-enterprise/quayregistry-quay-database-54956cdd54-p7b2w:/var/lib/pgsql/data/userdata Obtain the database password from the Operator-created config.yaml file: USD oc get deployment quay-quay-app -o json | jq '.spec.template.spec.volumes[].projected.sources' | grep -i config-secret Example output: "name": "QUAY_CONFIG_SECRET_NAME" USD oc get secret quay-quay-config-secret-9t77hb84tb -o json | jq '.data."config.yaml"' | cut -d '"' -f2 | base64 -d -w0 > /tmp/quay-backup/operator-quay-config-yaml-backup.yaml cat /tmp/quay-backup/operator-quay-config-yaml-backup.yaml | grep -i DB_URI Example output: Execute a shell inside of the database pod: # oc exec -it quay-postgresql-database-pod -- /bin/bash Enter psql: bash-4.4USD psql Drop the database: postgres=# DROP DATABASE "example-restore-registry-quay-database"; Example output: Create a new database and set the owner as the same name: postgres=# CREATE DATABASE "example-restore-registry-quay-database" OWNER "example-restore-registry-quay-database"; Example output: Connect to the database: postgres=# \c "example-restore-registry-quay-database"; Example output: You are now connected to database "example-restore-registry-quay-database" as user "postgres". Create a pg_trmg extension of your Quay database: example-restore-registry-quay-database=# create extension pg_trgm ; Example output: CREATE EXTENSION Exit the postgres CLI to re-enter bash-4.4: \q Set the password for your PostgreSQL deployment: bash-4.4USD psql -h localhost -d "QUAY_DATABASE_NAME" -U QUAY_DATABASE_OWNER -W < /var/lib/pgsql/data/userdata/quay-database-backup.sql Example output: Exit bash mode: bash-4.4USD exit Create a new configuration bundle for the Red Hat Quay Operator. USD touch config-bundle.yaml In your new config-bundle.yaml , include all of the information that the registry requires, such as LDAP configuration, keys, and other modifications that your old registry had. Run the following command to move the secret_key to your config-bundle.yaml : USD cat /tmp/quay-backup/config.yaml | grep SECRET_KEY > /tmp/quay-backup/config-bundle.yaml Note You must manually copy all the LDAP, OIDC and other information and add it to the /tmp/quay-backup/config-bundle.yaml file. Create a configuration bundle secret inside of your OpenShift cluster: USD oc create secret generic new-custom-config-bundle --from-file=config.yaml=/tmp/quay-backup/config-bundle.yaml Scale up the Quay pods: Scale up the mirror pods: Patch the QuayRegistry CRD so that it contains the reference to the new custom configuration bundle: Note If Red Hat Quay returns a 500 internal server error, you might have to update the location of your DISTRIBUTED_STORAGE_CONFIG to default . 
Create a new AWS credentials.yaml in your ~/.aws/ directory and include the access_key and secret_key from the Operator-created config.yaml file: USD touch credentials.yaml USD grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/operator-quay-config-yaml-backup.yaml USD cat > ~/.aws/credentials << EOF [default] aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG EOF Note If the aws CLI does not automatically collect the access_key and secret_key from the ~/.aws/credentials file, you can configure these by running aws configure and manually inputting the credentials. Record NooBaa's publicly available endpoint: USD oc get route s3 -n openshift-storage -o yaml -o jsonpath="{.spec.host}{'\n'}" Sync the backup data to the NooBaa backend storage: USD aws s3 sync --no-verify-ssl --endpoint-url https://NOOBAA_PUBLIC_S3_ROUTE /tmp/quay-backup/bucket-backup/* s3://QUAY_DATASTORE_BUCKET_NAME Scale the Operator back up to 1 pod: USD oc scale --replicas=1 deployment quay-operator.v3.6.4 -n openshift-operators The Operator uses the custom configuration bundle provided and reconciles all secrets and deployments. Your new Red Hat Quay deployment on OpenShift Container Platform should contain all of the information that the old deployment had. You should be able to pull all images.
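To confirm that the migration succeeded, you can run a few checks. The following is a sketch only; the route hostname, organization, and repository are placeholders that depend on your deployment:
# the Quay application and mirror pods should be running
oc get pods -n quay-enterprise
# the QuayRegistry resource should report its components as healthy
oc get quayregistry -n quay-enterprise
# pull an image that existed in the standalone registry (replace the placeholders)
podman login <quay_route_hostname>
podman pull <quay_route_hostname>/<organization>/<repository>:<tag>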
[ "mkdir /tmp/quay-backup cp /path/to/Quay/config/directory/config.yaml /tmp/quay-backup", "pg_dump -h DB_HOST -p 5432 -d QUAY_DATABASE_NAME -U QUAY_DATABASE_USER -W -O > /tmp/quay-backup/quay-database-backup.sql", "mkdir ~/.aws/", "grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/config.yaml", "DISTRIBUTED_STORAGE_CONFIG: minio-1: - RadosGWStorage - access_key: ########## bucket_name: quay hostname: 172.24.10.50 is_secure: false port: \"9000\" secret_key: ########## storage_path: /datastorage/registry", "touch ~/.aws/credentials", "cat > ~/.aws/credentials << EOF [default] aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG EOF", "aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG", "mkdir /tmp/quay-backup/bucket-backup", "aws s3 sync --no-verify-ssl --endpoint-url https://PUBLIC_S3_ENDPOINT:PORT s3://QUAY_BUCKET/ /tmp/quay-backup/bucket-backup/", "oc scale --replicas=0 deployment quay-operator.v3.6.2 -n openshift-operators", "oc scale --replicas=0 deployment QUAY_MAIN_APP_DEPLOYMENT QUAY_MIRROR_DEPLOYMENT", "oc cp /tmp/user/quay-backup/quay-database-backup.sql quay-enterprise/quayregistry-quay-database-54956cdd54-p7b2w:/var/lib/pgsql/data/userdata", "oc get deployment quay-quay-app -o json | jq '.spec.template.spec.volumes[].projected.sources' | grep -i config-secret", "\"name\": \"QUAY_CONFIG_SECRET_NAME\"", "oc get secret quay-quay-config-secret-9t77hb84tb -o json | jq '.data.\"config.yaml\"' | cut -d '\"' -f2 | base64 -d -w0 > /tmp/quay-backup/operator-quay-config-yaml-backup.yaml", "cat /tmp/quay-backup/operator-quay-config-yaml-backup.yaml | grep -i DB_URI", "postgresql://QUAY_DATABASE_OWNER:PASSWORD@DATABASE_HOST/QUAY_DATABASE_NAME", "oc exec -it quay-postgresql-database-pod -- /bin/bash", "bash-4.4USD psql", "postgres=# DROP DATABASE \"example-restore-registry-quay-database\";", "DROP DATABASE", "postgres=# CREATE DATABASE \"example-restore-registry-quay-database\" OWNER \"example-restore-registry-quay-database\";", "CREATE DATABASE", "postgres=# \\c \"example-restore-registry-quay-database\";", "You are now connected to database \"example-restore-registry-quay-database\" as user \"postgres\".", "example-restore-registry-quay-database=# create extension pg_trgm ;", "CREATE EXTENSION", "\\q", "bash-4.4USD psql -h localhost -d \"QUAY_DATABASE_NAME\" -U QUAY_DATABASE_OWNER -W < /var/lib/pgsql/data/userdata/quay-database-backup.sql", "SET SET SET SET SET", "bash-4.4USD exit", "touch config-bundle.yaml", "cat /tmp/quay-backup/config.yaml | grep SECRET_KEY > /tmp/quay-backup/config-bundle.yaml", "oc create secret generic new-custom-config-bundle --from-file=config.yaml=/tmp/quay-backup/config-bundle.yaml", "oc scale --replicas=1 deployment quayregistry-quay-app deployment.apps/quayregistry-quay-app scaled", "oc scale --replicas=1 deployment quayregistry-quay-mirror deployment.apps/quayregistry-quay-mirror scaled", "oc patch quayregistry QUAY_REGISTRY_NAME --type=merge -p '{\"spec\":{\"configBundleSecret\":\"new-custom-config-bundle\"}}'", "touch credentials.yaml", "grep -i DISTRIBUTED_STORAGE_CONFIG -A10 /tmp/quay-backup/operator-quay-config-yaml-backup.yaml", "cat > ~/.aws/credentials << EOF [default] aws_access_key_id = ACCESS_KEY_FROM_QUAY_CONFIG aws_secret_access_key = SECRET_KEY_FROM_QUAY_CONFIG EOF", "oc get route s3 -n openshift-storage -o yaml -o jsonpath=\"{.spec.host}{'\\n'}\"", "aws s3 sync --no-verify-ssl --endpoint-url https://NOOBAA_PUBLIC_S3_ROUTE 
/tmp/quay-backup/bucket-backup/* s3://QUAY_DATASTORE_BUCKET_NAME", "oc scale --replicas=1 deployment quay-operator.v3.6.4 -n openshift-operators" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.12/html/manage_red_hat_quay/migrating-standalone-quay-to-operator
Chapter 84. Using Ansible to manage IdM user vaults: storing and retrieving secrets
Chapter 84. Using Ansible to manage IdM user vaults: storing and retrieving secrets This chapter describes how to manage user vaults in Identity Management using the Ansible vault module. Specifically, it describes how a user can use Ansible playbooks to perform the following three consecutive actions: Create a user vault in IdM . Store a secret in the vault . Retrieve a secret from the vault . The user can do the storing and the retrieving from two different IdM clients. Prerequisites The Key Recovery Authority (KRA) Certificate System component has been installed on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM . 84.1. Ensuring the presence of a standard user vault in IdM using Ansible Follow this procedure to use an Ansible playbook to create a vault container with one or more private vaults to securely store sensitive information. In the example used in the procedure below, the idm_user user creates a vault of the standard type named my_vault . The standard vault type ensures that idm_user will not be required to authenticate when accessing the file. idm_user will be able to retrieve the file from any IdM client to which the user is logged in. Prerequisites You have installed the ansible-freeipa package on the Ansible controller, that is the host on which you execute the steps in the procedure. You know the password of idm_user . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Create an inventory file, for example inventory.file : Open inventory.file and define the IdM server that you want to configure in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the ensure-standard-vault-is-present.yml Ansible playbook file. For example: Open the ensure-standard-vault-is-present-copy.yml file for editing. Adapt the file by setting the following variables in the ipavault task section: Set the ipaadmin_principal variable to idm_user . Set the ipaadmin_password variable to the password of idm_user . Set the user variable to idm_user . Set the name variable to my_vault . Set the vault_type variable to standard . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: 84.2. Archiving a secret in a standard user vault in IdM using Ansible Follow this procedure to use an Ansible playbook to store sensitive information in a personal vault. In the example used, the idm_user user archives a file with sensitive information named password.txt in a vault named my_vault . Prerequisites You have installed the ansible-freeipa package on the Ansible controller, that is the host on which you execute the steps in the procedure. You know the password of idm_user . idm_user is the owner, or at least a member user of my_vault . You have access to password.txt , the secret that you want to archive in my_vault . Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Open your inventory file and make sure that the IdM server that you want to configure is listed in the [ipaserver] section. For example, to instruct Ansible to configure server.idm.example.com , enter: Make a copy of the data-archive-in-symmetric-vault.yml Ansible playbook file but replace "symmetric" by "standard". For example: Open the data-archive-in-standard-vault-copy.yml file for editing. Adapt the file by setting the following variables in the ipavault task section: Set the ipaadmin_principal variable to idm_user . 
Set the ipaadmin_password variable to the password of idm_user . Set the user variable to idm_user . Set the name variable to my_vault . Set the in variable to the full path to the file with sensitive information. Set the action variable to member . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: 84.3. Retrieving a secret from a standard user vault in IdM using Ansible Follow this procedure to use an Ansible playbook to retrieve a secret from the user personal vault. In the example used in the procedure below, the idm_user user retrieves a file with sensitive data from a vault of the standard type named my_vault onto an IdM client named host01 . idm_user does not have to authenticate when accessing the file. idm_user can use Ansible to retrieve the file from any IdM client on which Ansible is installed. Prerequisites You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. You know the password of idm_user . idm_user is the owner of my_vault . idm_user has stored a secret in my_vault . Ansible can write into the directory on the IdM host into which you want to retrieve the secret. idm_user can read from the directory on the IdM host into which you want to retrieve the secret. Procedure Navigate to the /usr/share/doc/ansible-freeipa/playbooks/vault directory: Open your inventory file and mention, in a clearly defined section, the IdM client onto which you want to retrieve the secret. For example, to instruct Ansible to retrieve the secret onto host01.idm.example.com , enter: Make a copy of the retrive-data-symmetric-vault.yml Ansible playbook file. Replace "symmetric" with "standard". For example: Open the retrieve-data-standard-vault.yml-copy.yml file for editing. Adapt the file by setting the hosts variable to ipahost . Adapt the file by setting the following variables in the ipavault task section: Set the ipaadmin_principal variable to idm_user . Set the ipaadmin_password variable to the password of idm_user . Set the user variable to idm_user . Set the name variable to my_vault . Set the out variable to the full path of the file into which you want to export the secret. Set the state variable to retrieved . This the modified Ansible playbook file for the current example: Save the file. Run the playbook: Verification SSH to host01 as user01 : View the file specified by the out variable in the Ansible playbook file: You can now see the exported secret. For more information about using Ansible to manage IdM vaults and user secrets and about playbook variables, see the README-vault.md Markdown file available in the /usr/share/doc/ansible-freeipa/ directory and the sample playbooks available in the /usr/share/doc/ansible-freeipa/playbooks/vault/ directory.
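Outside of Ansible, you can confirm the same vault operations with the ipa command-line tools on any enrolled IdM client. This is a sketch, not part of the playbooks above; it assumes you authenticate as idm_user and it reuses the vault and file names from the examples:
# authenticate as the vault owner
kinit idm_user
# confirm the vault exists and is of the standard type
ipa vault-show my_vault
# retrieve the archived secret into a local file and inspect it
ipa vault-retrieve my_vault --out /tmp/password_exported.txt
cat /tmp/password_exported.txt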
[ "cd /usr/share/doc/ansible-freeipa/playbooks/vault", "touch inventory.file", "[ipaserver] server.idm.example.com", "cp ensure-standard-vault-is-present.yml ensure-standard-vault-is-present-copy.yml", "--- - name: Tests hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - ipavault: ipaadmin_principal: idm_user ipaadmin_password: idm_user_password user: idm_user name: my_vault vault_type: standard", "ansible-playbook --vault-password-file=password_file -v -i inventory.file ensure-standard-vault-is-present-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/vault", "[ipaserver] server.idm.example.com", "cp data-archive-in-symmetric-vault.yml data-archive-in-standard-vault-copy.yml", "--- - name: Tests hosts: ipaserver gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - ipavault: ipaadmin_principal: idm_user ipaadmin_password: idm_user_password user: idm_user name: my_vault in: /usr/share/doc/ansible-freeipa/playbooks/vault/password.txt action: member", "ansible-playbook --vault-password-file=password_file -v -i inventory.file data-archive-in-standard-vault-copy.yml", "cd /usr/share/doc/ansible-freeipa/playbooks/vault", "[ipahost] host01.idm.example.com", "cp retrive-data-symmetric-vault.yml retrieve-data-standard-vault.yml-copy.yml", "--- - name: Tests hosts: ipahost gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - ipavault: ipaadmin_principal: idm_user ipaadmin_password: idm_user_password user: idm_user name: my_vault out: /tmp/password_exported.txt state: retrieved", "ansible-playbook --vault-password-file=password_file -v -i inventory.file retrieve-data-standard-vault.yml-copy.yml", "ssh [email protected]", "vim /tmp/password_exported.txt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/using-ansible-to-manage-idm-user-vaults-storing-and-retrieving-secrets_configuring-and-managing-idm
Chapter 2. Overview
Chapter 2. Overview .NET is a general-purpose, modular, cross-platform, and open source development platform that features automatic memory management and modern programming languages. It allows users to build high-quality applications efficiently. .NET is available on RHEL 7, RHEL 8, and RHEL 9. .NET 6.0 is a Long Term Support (LTS) release. LTS releases are generally supported for around 3 years. For more information, see the Life Cycle and Support Policies for the .NET Program . .NET offers: The ability to follow a microservices-based approach, where some components are built with .NET and others with Java, but all can run on a common, supported platform in RHEL. The capacity to more easily develop new .NET workloads on Microsoft Windows. You can deploy and run on either RHEL or Windows Server. A heterogeneous data center, where the underlying infrastructure is capable of running .NET applications without having to rely solely on Windows Server.
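As a quick orientation, the following sketch installs the .NET 6.0 SDK from the RHEL repositories and runs a scaffolded console application; the application name is an arbitrary example:
# install the SDK (the package name is dotnet-sdk-6.0 on RHEL 8 and RHEL 9)
sudo dnf install -y dotnet-sdk-6.0
# create and run a minimal console application
dotnet new console -o hello-dotnet
cd hello-dotnet
dotnet run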
null
https://docs.redhat.com/en/documentation/net/6.0/html/release_notes_for_.net_6.0_rpm_packages/dotnet-overview_release-notes-for-dotnet-rpms
Red Hat Developer Hub support
Red Hat Developer Hub support If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal . You can use the Red Hat Customer Portal for the following purposes: To search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products. To create a support case for Red Hat Global Support Services (GSS). For support case creation, select Red Hat Developer Hub as the product and select the appropriate product version.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/release_notes_for_red_hat_developer_hub_1.2/snip-customer-support-info_release-notes-rhdh
Chapter 6. Updating Logging
Chapter 6. Updating Logging There are two types of logging updates: minor release updates (5.y.z) and major release updates (5.y). 6.1. Minor release updates If you installed the logging Operators using the Automatic update approval option, your Operators receive minor version updates automatically. You do not need to complete any manual update steps. If you installed the logging Operators using the Manual update approval option, you must manually approve minor version updates. For more information, see Manually approving a pending Operator update . 6.2. Major release updates For major version updates you must complete some manual steps. For major release version compatibility and support information, see OpenShift Operator Life Cycles . 6.3. Upgrading the Red Hat OpenShift Logging Operator to watch all namespaces In logging 5.7 and older versions, the Red Hat OpenShift Logging Operator only watches the openshift-logging namespace. If you want the Red Hat OpenShift Logging Operator to watch all namespaces on your cluster, you must redeploy the Operator. You can complete the following procedure to redeploy the Operator without deleting your logging components. Prerequisites You have installed the OpenShift CLI ( oc ). You have administrator permissions. Procedure Delete the subscription by running the following command: USD oc -n openshift-logging delete subscription <subscription> Delete the Operator group by running the following command: USD oc -n openshift-logging delete operatorgroup <operator_group_name> Delete the cluster service version (CSV) by running the following command: USD oc delete clusterserviceversion cluster-logging.<version> Redeploy the Red Hat OpenShift Logging Operator by following the "Installing Logging" documentation. Verification Check that the targetNamespaces field in the OperatorGroup resource is not present or is set to an empty string. To do this, run the following command and inspect the output: USD oc get operatorgroup <operator_group_name> -o yaml Example output apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - "" # ... 6.4. Updating the Red Hat OpenShift Logging Operator To update the Red Hat OpenShift Logging Operator to a new major release version, you must modify the update channel for the Operator subscription. Prerequisites You have installed the Red Hat OpenShift Logging Operator. You have administrator permissions. You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective. Procedure Navigate to Operators Installed Operators . Select the openshift-logging project. Click the Red Hat OpenShift Logging Operator. Click Subscription . In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y , depending on your current update channel. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.y , and click Save . Note the cluster-logging.v5.y.z version. Verification Wait for a few seconds, then click Operators Installed Operators . Verify that the Red Hat OpenShift Logging Operator version matches the latest cluster-logging.v5.y.z version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . 6.5. 
Updating the Loki Operator To update the Loki Operator to a new major release version, you must modify the update channel for the Operator subscription. Prerequisites You have installed the Loki Operator. You have administrator permissions. You have access to the OpenShift Container Platform web console and are viewing the Administrator perspective. Procedure Navigate to Operators Installed Operators . Select the openshift-operators-redhat project. Click the Loki Operator . Click Subscription . In the Subscription details section, click the Update channel link. This link text might be stable or stable-5.y , depending on your current update channel. In the Change Subscription Update Channel window, select the latest major version update channel, stable-5.y , and click Save . Note the loki-operator.v5.y.z version. Verification Wait for a few seconds, then click Operators Installed Operators . Verify that the Loki Operator version matches the latest loki-operator.v5.y.z version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . 6.6. Updating the OpenShift Elasticsearch Operator To update the OpenShift Elasticsearch Operator to the current version, you must modify the subscription. Note The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. If you currently use the OpenShift Elasticsearch Operator released with Logging 5.8, it will continue to work with Logging until the EOL of Logging 5.8. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . Prerequisites If you are using Elasticsearch as the default log store, and Kibana as the UI, update the OpenShift Elasticsearch Operator before you update the Red Hat OpenShift Logging Operator. Important If you update the Operators in the wrong order, Kibana does not update and the Kibana custom resource (CR) is not created. To fix this issue, delete the Red Hat OpenShift Logging Operator pod. When the Red Hat OpenShift Logging Operator pod redeploys, it creates the Kibana CR and Kibana becomes available again. The Logging status is healthy: All pods have a ready status. The Elasticsearch cluster is healthy. Your Elasticsearch and Kibana data is backed up . You have administrator permissions. You have installed the OpenShift CLI ( oc ) for the verification steps. Procedure In the OpenShift Container Platform web console, click Operators Installed Operators . Select the openshift-operators-redhat project. Click OpenShift Elasticsearch Operator . Click Subscription Channel . In the Change Subscription Update Channel window, select stable-5.y and click Save . Note the elasticsearch-operator.v5.y.z version. Wait for a few seconds, then click Operators Installed Operators . Verify that the OpenShift Elasticsearch Operator version matches the latest elasticsearch-operator.v5.y.z version. On the Operators Installed Operators page, wait for the Status field to report Succeeded . 
Verification Verify that all Elasticsearch pods have a Ready status by entering the following command and observing the output: USD oc get pod -n openshift-logging --selector component=elasticsearch Example output NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m Verify that the Elasticsearch cluster status is green by entering the following command and observing the output: USD oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health Example output { "cluster_name" : "elasticsearch", "status" : "green", } Verify that the Elasticsearch cron jobs are created by entering the following commands and observing the output: USD oc project openshift-logging USD oc get cronjob Example output NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s Verify that the log store is updated to the correct version and the indices are green by entering the following command and observing the output: USD oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices Verify that the output includes the app-00000x , infra-00000x , audit-00000x , .security indices: Example 6.1. Sample output with indices in a green status Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0 Verify that the log visualizer is updated to the correct version by entering the following command and observing the output: USD oc get kibana kibana -o json Verify that the output includes a Kibana pod with the ready status: Example 6.2. 
Sample output with a ready Kibana pod [ { "clusterCondition": { "kibana-5fdd766ffd-nb2jj": [ { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" }, { "lastTransitionTime": "2020-06-30T14:11:07Z", "reason": "ContainerCreating", "status": "True", "type": "" } ] }, "deployment": "kibana", "pods": { "failed": [], "notReady": [] "ready": [] }, "replicaSets": [ "kibana-5fdd766ffd" ], "replicas": 1 } ]
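The update procedures above use the web console; the same state can be checked from the CLI. The following is a sketch, assuming the default namespaces used by the logging Operators:
# installed version and status of the Red Hat OpenShift Logging Operator
oc get csv -n openshift-logging
# installed versions and status of the Loki Operator and OpenShift Elasticsearch Operator
oc get csv -n openshift-operators-redhat
# current update channels of the Operator subscriptions
oc get subscriptions -n openshift-logging -o custom-columns=NAME:.metadata.name,CHANNEL:.spec.channel
oc get subscriptions -n openshift-operators-redhat -o custom-columns=NAME:.metadata.name,CHANNEL:.spec.channel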
[ "oc -n openshift-logging delete subscription <subscription>", "oc -n openshift-logging delete operatorgroup <operator_group_name>", "oc delete clusterserviceversion cluster-logging.<version>", "oc get operatorgroup <operator_group_name> -o yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-logging-f52cn namespace: openshift-logging spec: upgradeStrategy: Default status: namespaces: - \"\"", "oc get pod -n openshift-logging --selector component=elasticsearch", "NAME READY STATUS RESTARTS AGE elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk 2/2 Running 0 31m elasticsearch-cdm-1pbrl44l-2-5c6d87589f-gx5hk 2/2 Running 0 30m elasticsearch-cdm-1pbrl44l-3-88df5d47-m45jc 2/2 Running 0 29m", "oc exec -n openshift-logging -c elasticsearch elasticsearch-cdm-1pbrl44l-1-55b7546f4c-mshhk -- health", "{ \"cluster_name\" : \"elasticsearch\", \"status\" : \"green\", }", "oc project openshift-logging", "oc get cronjob", "NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE elasticsearch-im-app */15 * * * * False 0 <none> 56s elasticsearch-im-audit */15 * * * * False 0 <none> 56s elasticsearch-im-infra */15 * * * * False 0 <none> 56s", "oc exec -c elasticsearch <any_es_pod_in_the_cluster> -- indices", "Tue Jun 30 14:30:54 UTC 2020 health status index uuid pri rep docs.count docs.deleted store.size pri.store.size green open infra-000008 bnBvUFEXTWi92z3zWAzieQ 3 1 222195 0 289 144 green open infra-000004 rtDSzoqsSl6saisSK7Au1Q 3 1 226717 0 297 148 green open infra-000012 RSf_kUwDSR2xEuKRZMPqZQ 3 1 227623 0 295 147 green open .kibana_7 1SJdCqlZTPWlIAaOUd78yg 1 1 4 0 0 0 green open infra-000010 iXwL3bnqTuGEABbUDa6OVw 3 1 248368 0 317 158 green open infra-000009 YN9EsULWSNaxWeeNvOs0RA 3 1 258799 0 337 168 green open infra-000014 YP0U6R7FQ_GVQVQZ6Yh9Ig 3 1 223788 0 292 146 green open infra-000015 JRBbAbEmSMqK5X40df9HbQ 3 1 224371 0 291 145 green open .orphaned.2020.06.30 n_xQC2dWQzConkvQqei3YA 3 1 9 0 0 0 green open infra-000007 llkkAVSzSOmosWTSAJM_hg 3 1 228584 0 296 148 green open infra-000005 d9BoGQdiQASsS3BBFm2iRA 3 1 227987 0 297 148 green open infra-000003 1-goREK1QUKlQPAIVkWVaQ 3 1 226719 0 295 147 green open .security zeT65uOuRTKZMjg_bbUc1g 1 1 5 0 0 0 green open .kibana-377444158_kubeadmin wvMhDwJkR-mRZQO84K0gUQ 3 1 1 0 0 0 green open infra-000006 5H-KBSXGQKiO7hdapDE23g 3 1 226676 0 295 147 green open infra-000001 eH53BQ-bSxSWR5xYZB6lVg 3 1 341800 0 443 220 green open .kibana-6 RVp7TemSSemGJcsSUmuf3A 1 1 4 0 0 0 green open infra-000011 J7XWBauWSTe0jnzX02fU6A 3 1 226100 0 293 146 green open app-000001 axSAFfONQDmKwatkjPXdtw 3 1 103186 0 126 57 green open infra-000016 m9c1iRLtStWSF1GopaRyCg 3 1 13685 0 19 9 green open infra-000002 Hz6WvINtTvKcQzw-ewmbYg 3 1 228994 0 296 148 green open infra-000013 KR9mMFUpQl-jraYtanyIGw 3 1 228166 0 298 148 green open audit-000001 eERqLdLmQOiQDFES1LBATQ 3 1 0 0 0 0", "oc get kibana kibana -o json", "[ { \"clusterCondition\": { \"kibana-5fdd766ffd-nb2jj\": [ { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" }, { \"lastTransitionTime\": \"2020-06-30T14:11:07Z\", \"reason\": \"ContainerCreating\", \"status\": \"True\", \"type\": \"\" } ] }, \"deployment\": \"kibana\", \"pods\": { \"failed\": [], \"notReady\": [] \"ready\": [] }, \"replicaSets\": [ \"kibana-5fdd766ffd\" ], \"replicas\": 1 } ]" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/logging/cluster-logging-upgrading
Chapter 4. KafkaSpec schema reference
Chapter 4. KafkaSpec schema reference Used in: Kafka Property Description kafka Configuration of the Kafka cluster. KafkaClusterSpec zookeeper Configuration of the ZooKeeper cluster. ZookeeperClusterSpec entityOperator Configuration of the Entity Operator. EntityOperatorSpec clusterCa Configuration of the cluster certificate authority. CertificateAuthority clientsCa Configuration of the clients certificate authority. CertificateAuthority cruiseControl Configuration for Cruise Control deployment. Deploys a Cruise Control instance when specified. CruiseControlSpec jmxTrans The jmxTrans property has been deprecated. JMXTrans is deprecated, and its related resources were removed in AMQ Streams 2.5. As of AMQ Streams 2.5, JMXTrans is no longer supported and this option is ignored. JmxTransSpec kafkaExporter Configuration of the Kafka Exporter. Kafka Exporter can provide additional metrics, for example, consumer group lag per topic and partition. KafkaExporterSpec maintenanceTimeWindows A list of time windows for maintenance tasks (that is, certificate renewal). Each time window is defined by a cron expression. string array
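All of these properties are set under spec in a Kafka custom resource. The following minimal sketch shows how the kafka, zookeeper, and entityOperator blocks described above fit together; the listener, storage, and replica values are illustrative assumptions, not recommendations for a production cluster.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      # Internal plain-text listener; add TLS or external listeners as needed
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    # Empty objects enable the Topic and User Operators with default settings
    topicOperator: {}
    userOperator: {}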
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkaspec-reference
Chapter 31. Adding the IdM CA service to an IdM server in a deployment with a CA
Chapter 31. Adding the IdM CA service to an IdM server in a deployment with a CA If your Identity Management (IdM) environment already has the IdM certificate authority (CA) service installed but a particular IdM server, idmserver , was installed as an IdM replica without a CA, you can add the CA service to idmserver by using the ipa-ca-install command. Note This procedure is identical for both the following scenarios: The IdM CA is a root CA. The IdM CA is subordinate to an external, root CA. Prerequisites You have root permissions on idmserver . The IdM server is installed on idmserver . Your IdM deployment has a CA installed on another IdM server. You know the IdM Directory Manager password. Procedure On idmserver , install the IdM Certificate Server CA:
[ "[root@idmserver ~] ipa-ca-install" ]
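After ipa-ca-install completes, you can confirm that the CA service is running on idmserver. The following check is a minimal sketch rather than part of the documented procedure; it assumes a standard IdM installation in which the CA runs as the pki-tomcatd service, and ipa ca-find requires a valid Kerberos ticket:

[root@idmserver ~]# ipactl status
[root@idmserver ~]# kinit admin
[root@idmserver ~]# ipa ca-find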
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/adding-the-idm-ca-service-to-an-idm-server-in-a-deployment-with-a-ca_installing-identity-management
Chapter 4. Management Jobs
Chapter 4. Management Jobs Management Jobs assist in cleaning old data from automation controller, including system tracking information, tokens, job histories, and activity streams. You can use these jobs if you have specific retention policies or need to decrease the storage used by your automation controller database. From the navigation panel, select Automation Execution Administration Management Jobs . The following job types are available for you to schedule and launch: Cleanup Activity Stream : Remove activity stream history older than a specified number of days Cleanup Expired Sessions : Remove expired browser sessions from the database Cleanup Job Details : Remove job history older than a specified number of days 4.1. Removing old activity stream data To remove older activity stream data, click the launch icon beside Cleanup Activity Stream . Enter the number of days of data you want to save and click Launch . 4.1.1. Scheduling deletion Use the following procedure to review or set a schedule for purging data marked for deletion: Procedure For a particular cleanup job, click the Schedules tab. Click the name of the job, Cleanup Activity Schedule in this example, to review the schedule settings. Click Edit schedule to change them. You can also click Create schedule to create a new schedule for this management job. Enter the appropriate details into the following fields and click Next : Schedule name required Start date/time required Time zone the entered Start Time should be in this time zone. Repeat frequency the appropriate options are displayed as the frequency is modified, including any occurrences you want to exclude by specifying exceptions. Days of data to keep required - specify how much data you want to retain. The Details tab displays a description of the schedule and a list of the scheduled occurrences in the selected Local Time Zone. Note Jobs are scheduled in UTC. Repeating jobs that run at a specific time of day can move relative to a local time zone when Daylight Saving Time shifts occur. 4.1.2. Setting notifications Use the following procedure to review or set notifications associated with a management job: Procedure For a particular cleanup job, select the Notifications tab. If none exist, see Creating a notification template in Using automation execution . 4.2. Cleanup Expired OAuth2 Tokens To remove expired OAuth2 tokens, click the launch icon beside Cleanup Expired OAuth2 Tokens . You can review or set a schedule for cleaning up expired OAuth2 tokens by performing the same procedure described for activity stream management jobs. For more information, see Scheduling deletion . You can also set or review notifications associated with this management job in the same way as described in Setting notifications for activity stream management jobs. For more information, see Notifications in Using automation execution . 4.2.1. Cleanup Expired Sessions To remove expired sessions, click the launch icon beside Cleanup Expired Sessions . You can review or set a schedule for cleaning up expired sessions by performing the same procedure described for activity stream management jobs. For more information, see Scheduling deletion . You can also set or review notifications associated with this management job in the same way as described in Notifications for activity stream management jobs. For more information, see Notifiers in Using automation execution . 4.2.2. Removing Old Job History To remove job history older than a specified number of days, click the launch icon beside Cleanup Job Details .
Enter the number of days of data you want to save and click Launch . Note The initial job run for an automation controller resource, such as Projects or Job Templates, is excluded from Cleanup Job Details , regardless of the retention value. You can review or set a schedule for cleaning up old job history by performing the same procedure described for activity stream management jobs. For more information, see Scheduling deletion . You can also set or review notifications associated with this management job in the same way as described in Notifications for activity stream management jobs. For more information, see Notifiers in Using automation execution .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/configuring_automation_execution/assembly-controller-management-jobs
Chapter 11. Configuring a System for Accessibility
Chapter 11. Configuring a System for Accessibility Accessibility in Red Hat Enterprise Linux 7 is ensured by the Orca screen reader, which is included in the default installation of the operating system. This chapter explains how a system administrator can configure a system to support users with a visual impairment. Orca reads information from the screen and communicates it to the user using: a speech synthesizer, which provides a speech output a braille display, which provides a tactile output For more information on Orca settings, see its help page . For Orca 's communication outputs to function properly, the system administrator needs to: configure the brltty service, as described in Section 11.1, "Configuring the brltty Service" switch on the Always Show Universal Access Menu , as described in Section 11.2, "Switch On Always Show Universal Access Menu " enable the Festival speech synthesizer, as described in Section 11.3, "Enabling the Festival Speech Synthesis System " 11.1. Configuring the brltty Service The Braille display uses the brltty service to provide tactile output for visually impaired users. Enable the brltty Service The braille display cannot work unless brltty is running. By default, brltty is disabled. Enable brltty to be started on boot: Authorize Users to Use the Braille Display To set the users who are authorized to use the braille display, choose one of the following procedures, which have the same effect. The procedure using the /etc/brltty.conf file is suitable even for the file systems where users or groups cannot be assigned to a file. The procedure using the /etc/brlapi.key file is suitable only for the file systems where users or groups can be assigned to a file. Setting Access to Braille Display by Using /etc/brltty.conf Open the /etc/brltty.conf file, and find the section called Application Programming Interface Parameters . Specify the users. To specify one or more individual users, list the users on the following line: To specify a user group, enter its name on the following line: Setting Access to Braille Display by Using /etc/brlapi.key Create the /etc/brlapi.key file. Change ownership of the /etc/brlapi.key file to a particular user or group. To specify an individual user: To specify a group: Adjust the content of /etc/brltty.conf to include this: Set the Braille Driver The braille-driver directive in /etc/brltty.conf specifies a two-letter driver identification code of the driver for the braille display. Setting the Braille Driver Decide whether you want to use autodetection to find the appropriate braille driver. If you want to use autodetection, leave the braille-driver directive set to auto , which is the default option. Warning Autodetection tries all drivers. Therefore, it might take a long time or even fail. For this reason, setting up a particular braille driver is recommended. If you do not want to use autodetection, specify the identification code of the required braille driver in the braille-driver directive. Choose the identification code of the required braille driver from the list provided in /etc/brltty.conf , for example: You can also set multiple drivers, separated by commas, and autodetection is then performed among them. Set the Braille Device The braille-device directive in /etc/brltty.conf specifies the device to which the braille display is connected. The following device types are supported (see Table 11.1, "Braille Device Types and the Corresponding Syntax" ): Table 11.1.
Braille Device Types and the Corresponding Syntax Braille Device Type Syntax of the Type serial device serial:path [a] USB device usb:[serial-number] [b] Bluetooth device bluetooth:address [a] Relative paths are at /dev . [b] The brackets here indicate optionality. Examples of settings for particular devices: You can also set multiple devices, separated by commas, and each of them will be probed in turn. Warning If the device is connected by a serial-to-USB adapter, setting braille-device to usb: does not work. In this case, identify the virtual serial device that the kernel has created for the adapter. The virtual serial device can look like this: Set Specific Parameters for Particular Braille Displays If you need to set specific parameters for particular braille displays, use the braille-parameters directive in /etc/brltty.conf . The braille-parameters directive passes non-generic parameters through to the braille driver. Choose the required parameters from the list in /etc/brltty.conf . Set the Text Table The text-table directive in /etc/brltty.conf specifies which text table is used to encode the symbols. Relative paths to text tables are in the /etc/brltty/Text/ directory. Setting the Text Table Decide whether you want to use autoselection to find the appropriate text table. If you want to use autoselection, leave the text-table directive set to auto , which is the default option. This ensures that locale-based autoselection with fallback to en-nabcc is performed. If you do not want to use autoselection, choose the required text-table from the list in /etc/brltty.conf . For example, to use the text table for American English: Set the Contraction Table The contraction-table directive in /etc/brltty.conf specifies which table is used to encode the abbreviations. Relative paths to particular contraction tables are in the /etc/brltty/Contraction/ directory. Choose the required contraction-table from the list in /etc/brltty.conf . For example, to use the contraction table for American English, grade 2: Warning If not specified, no contraction table is used. 11.2. Switch On Always Show Universal Access Menu To switch on the Orca screen reader, press the Super + Alt + S key combination. As a result, the Universal Access Menu icon is displayed on the top bar. Warning The icon disappears if the user switches off all of the provided options from the Universal Access Menu. A missing icon can cause difficulties for users with a visual impairment. System administrators can prevent the inaccessibility of the icon by switching on the Always Show Universal Access Menu . When the Always Show Universal Access Menu is switched on, the icon is displayed on the top bar even when all options from this menu are switched off. Switching On Always Show Universal Access Menu Open the Gnome settings menu, and click Universal Access . Switch on Always Show Universal Access Menu . Optional: Verify that the Universal Access Menu icon is displayed on the top bar even if all options from this menu are switched off. 11.3. Enabling the Festival Speech Synthesis System By default, Orca uses the eSpeak speech synthesizer, but it also supports the Festival Speech Synthesis System . Both eSpeak and the Festival Speech Synthesis System (Festival) synthesize speech differently. Some users might prefer Festival to the default eSpeak synthesizer.
To enable Festival, follow these steps: Installing Festival and Making It Run on Boot Install Festival: Make Festival run on boot: Create a new systemd unit file: Create a file in the /etc/systemd/system/ directory and make it executable. Ensure that the script in the /usr/bin/festival_server file is used to run Festival. Add the following content to the /etc/systemd/system/festival.service file: Notify systemd that a new festival.service file exists: Enable festival.service : Choose a Voice for Festival Festival provides multiple voices. To make a voice available, install the relevant package from the following list: festvox-awb-arctic-hts festvox-bdl-arctic-hts festvox-clb-arctic-hts festvox-kal-diphone festvox-ked-diphone festvox-rms-arctic-hts festvox-slt-arctic-hts hispavoces-pal-diphone hispavoces-sfl-diphone To see detailed information about a particular voice: To make the required voice available, install the package with this voice and then reboot:
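Taken together, the brltty directives described in Section 11.1 are all set in /etc/brltty.conf . The following consolidated sketch combines the individual examples from this chapter for a hypothetical setup with the XWindow driver on the first serial device, key-file authorization, an American English text table, and grade 2 contractions; adapt the driver, device, and authorization values to the actual hardware and users:

api-parameters Auth=keyfile: /etc/brlapi.key
braille-driver xw # XWindow
braille-device serial:ttyS0 # First serial device
text-table en_US # English (United States)
contraction-table en-us-g2 # English (US, grade 2)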
[ "~]# systemctl enable brltty.service", "api-parameters Auth=user: user_1, user_2, ... # Allow some local user", "api-parameters Auth=group: group # Allow some local group", "~]# mcookie > /etc/brlapi.key", "~]# chown user_1 /etc/brlapi.key", "~]# chown group_1 /etc/brlapi.key", "api-parameters Auth=keyfile: /etc/brlapi.key", "braille-driver auto # autodetect", "braille-driver xw # XWindow", "braille-device serial:ttyS0 # First serial device braille-device usb: # First USB device matching braille driver braille-device usb:nnnnn # Specific USB device by serial number braille-device bluetooth:xx:xx:xx:xx:xx:xx # Specific Bluetooth device by address", "serial:ttyUSB0", "You can find the actual device name in the kernel messages on the device plug with the following command:", "~]# dmesg | fgrep ttyUSB0", "text-table auto # locale-based autoselection", "text-table en_US # English (United States)", "contraction-table en-us-g2 # English (US, grade 2)", "~]# yum install festival festival-freebsoft-utils", "~]# touch /etc/systemd/system/festival.service ~]# chmod 664 /etc/systemd/system/festival.service", "[Unit] Description=Festival speech synthesis server [Service] ExecStart=/usr/bin/festival_server Type=simple", "~]# systemctl daemon-reload ~]# systemctl start festival.service", "~]# systemctl enable festival.service", "~]# yum info package_name", "~]# yum install package_name ~]# reboot" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-Accessbility
Chapter 6. Installing a cluster on Azure with customizations
Chapter 6. Installing a cluster on Azure with customizations In OpenShift Container Platform version 4.13, you can install a customized cluster on infrastructure that the installation program provisions on Microsoft Azure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure account to host the cluster and determined the tested and validated region to deploy the cluster to. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . If you use customer-managed encryption keys, you prepared your Azure environment for encryption . 6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. 
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.4. Selecting an Azure Marketplace image If you are deploying an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker nodes. When obtaining your image, consider the following: While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher. The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image. Important Installing images with the Azure marketplace is not supported on clusters with 64-bit ARM instances. Prerequisites You have installed the Azure CLI client (az) . Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.
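Before running the image queries in the procedure that follows, it can be useful to confirm that your Azure CLI session is active and pointed at the intended subscription. This quick check is a general sketch and not part of the documented procedure:

az login
az account show --output table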
Procedure Display all of the available OpenShift Container Platform images by running one of the following commands: North America: USD az vm image list --all --offer rh-ocp-worker --publisher redhat -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 EMEA: USD az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table Example output Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100 Note Regardless of the version of OpenShift Container Platform that you install, the correct version of the Azure Marketplace image to use is 4.8. If required, your VMs are automatically upgraded as part of the installation process. Inspect the image for your offer by running one of the following commands: North America: USD az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Review the terms of the offer by running one of the following commands: North America: USD az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Accept the terms of the offering by running one of the following commands: North America: USD az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> EMEA: USD az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version> Record the image details of your offer. You must update the compute section in the install-config.yaml file with values for publisher , offer , sku , and version before deploying the cluster. Sample install-config.yaml file with the Azure Marketplace worker nodes apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 4.8.2021122100 replicas: 3 6.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. 
You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Microsoft Azure. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select azure as the platform to target. If you do not have a Microsoft Azure profile stored on your computer, specify the following Azure parameter values for your subscription and service principal: azure subscription id : The subscription ID to use for the cluster. Specify the id value in your account output. azure tenant id : The tenant ID. Specify the tenantId value in your account output. azure service principal client id : The value of the appId parameter for the service principal. azure service principal client secret : The value of the password parameter for the service principal. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster. Enter a descriptive name for your cluster. 
Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on Azure". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 6.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. 
For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . 
String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . 
String controlPlane: hyperthreading: Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 6.6.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . 
To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 6.4. Additional Azure parameters Parameter Description Values compute.platform.azure.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . compute.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . compute.platform.azure.osDisk.diskType Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS . compute.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . compute.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . compute.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . compute.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . compute.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . compute.platform.azure.type Defines the Azure instance type for compute machines. String compute.platform.azure.zones The availability zones where the installation program creates compute machines. String list controlPlane.platform.azure.type Defines the Azure instance type for control plane machines. String controlPlane.platform.azure.zones The availability zones where the installation program creates control plane machines. String list platform.azure.defaultMachinePlatform.encryptionAtHost Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . 
platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . platform.azure.defaultMachinePlatform.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . platform.azure.defaultMachinePlatform.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . platform.azure.defaultMachinePlatform.type The Azure instance type for control plane and compute machines. The Azure instance type. platform.azure.defaultMachinePlatform.zones The availability zones where the installation program creates compute and control plane machines. String list. controlPlane.platform.azure.encryptionAtHost Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . controlPlane.platform.azure.osDisk.diskEncryptionSet.resourceGroup The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . controlPlane.platform.azure.osDisk.diskEncryptionSet.name The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . controlPlane.platform.azure.osDisk.diskEncryptionSet.subscriptionId Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . controlPlane.platform.azure.osDisk.diskSizeGB The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . controlPlane.platform.azure.osDisk.diskType Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . controlPlane.platform.azure.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. 
Enabled , Disabled . The default is Disabled . controlPlane.platform.azure.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of control plane machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . platform.azure.baseDomainResourceGroupName The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster . platform.azure.resourceGroupName The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . platform.azure.outboundType The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. LoadBalancer or UserDefinedRouting . The default is LoadBalancer . platform.azure.region The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . platform.azure.zone List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . platform.azure.defaultMachinePlatform.ultraSSDCapability Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . platform.azure.networkResourceGroupName The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. platform.azure.virtualNetwork The name of the existing VNet that you want to deploy your cluster to. String. platform.azure.controlPlaneSubnet The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.computeSubnet The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . platform.azure.cloudName The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . platform.azure.defaultMachinePlatform.vmNetworkingType Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . 
If instance type of control plane and compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster. 6.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . Important You are required to use Azure virtual machines that have the premiumIO parameter set to true . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.6.3. Tested instance types for Azure The following Microsoft Azure instance types have been tested with OpenShift Container Platform. Example 6.1. 
Machine types based on 64-bit x86 architecture standardBasv2Family standardBSFamily standardBsv2Family standardDADSv5Family standardDASv4Family standardDASv5Family standardDCACCV5Family standardDCADCCV5Family standardDCADSv5Family standardDCASv5Family standardDCSv3Family standardDCSv2Family standardDDCSv3Family standardDDSv4Family standardDDSv5Family standardDLDSv5Family standardDLSv5Family standardDSFamily standardDSv2Family standardDSv2PromoFamily standardDSv3Family standardDSv4Family standardDSv5Family standardEADSv5Family standardEASv4Family standardEASv5Family standardEBDSv5Family standardEBSv5Family standardECACCV5Family standardECADCCV5Family standardECADSv5Family standardECASv5Family standardEDSv4Family standardEDSv5Family standardEIADSv5Family standardEIASv4Family standardEIASv5Family standardEIBDSv5Family standardEIBSv5Family standardEIDSv5Family standardEISv3Family standardEISv5Family standardESv3Family standardESv4Family standardESv5Family standardFXMDVSFamily standardFSFamily standardFSv2Family standardGSFamily standardHBrsv2Family standardHBSFamily standardHBv4Family standardHCSFamily standardHXFamily standardLASv3Family standardLSFamily standardLSv2Family standardLSv3Family standardMDSHighMemoryv3Family standardMDSMediumMemoryv2Family standardMDSMediumMemoryv3Family standardMIDSHighMemoryv3Family standardMIDSMediumMemoryv2Family standardMISHighMemoryv3Family standardMISMediumMemoryv2Family standardMSFamily standardMSHighMemoryv3Family standardMSMediumMemoryv2Family standardMSMediumMemoryv3Family StandardNCADSA100v4Family Standard NCASv3_T4 Family standardNCSv3Family standardNDSv2Family StandardNGADSV620v1Family standardNPSFamily StandardNVADSA10v5Family standardNVSv3Family standardXEISv4Family 6.6.4. Tested instance types for Azure on 64-bit ARM infrastructures The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform. Example 6.2. Machine types based on 64-bit ARM architecture standardBpsv2Family standardDPSv5Family standardDPDSv5Family standardDPLDSv5Family standardDPLSv5Family standardEPSv5Family standardEPDSv5Family StandardDpdsv6Family StandardDpldsv6Famil StandardDplsv6Family StandardDpsv6Family StandardEpdsv6Family StandardEpsv6Family 6.6.5. Sample customized install-config.yaml file for Azure You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - "1" - "2" - "3" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 outboundType: Loadbalancer cloudName: AzurePublicCloud pullSecret: '{"auths": ...}' 15 fips: false 16 sshKey: ssh-ed25519 AAAA... 17 1 10 13 15 Required. The installation program prompts you for this value. 2 6 If you do not provide these parameters and values, the installation program provides the default value. 3 7 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3 , for your machines if you disable simultaneous multithreading. 5 8 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 9 Specify a list of zones to deploy your machines to. For high availability, specify at least two zones. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 Specify the name of the resource group that contains the DNS zone for your base domain. 14 Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 16 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 17 You can optionally provide the sshKey value that you use to access the machines in your cluster. 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.6.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Additional resources For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs . 6.7. Configuring the user-defined tags for Azure In OpenShift Container Platform, you can use the tags for grouping resources and for managing resource access and cost. You can define the tags on the Azure resources in the install-config.yaml file only during OpenShift Container Platform cluster creation. You cannot modify the user-defined tags after cluster creation. Support for user-defined tags is available only for the resources created in the Azure Public Cloud, and in OpenShift Container Platform 4.13 as a Technology Preview (TP). User-defined tags are not supported for the OpenShift Container Platform clusters upgraded to OpenShift Container Platform 4.13. User-defined and OpenShift Container Platform specific tags are applied only to the resources created by the OpenShift Container Platform installer and its core operators such as Machine api provider azure Operator, Cluster Ingress Operator, Cluster Image Registry Operator. By default, OpenShift Container Platform installer attaches the OpenShift Container Platform tags to the Azure resources. These OpenShift Container Platform tags are not accessible for the users. You can use the .platform.azure.userTags field in the install-config.yaml file to define the list of user-defined tags as shown in the following install-config.yaml file. Sample install-config.yaml file additionalTrustBundlePolicy: Proxyonly 1 apiVersion: v1 baseDomain: catchall.azure.devcluster.openshift.com 2 featureSet: TechPreviewNoUpgrade 3 compute: 4 - architecture: amd64 hyperthreading: Enabled 5 name: worker platform: {} replicas: 3 controlPlane: 6 architecture: amd64 hyperthreading: Enabled 7 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: user 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: os4-common 10 cloudName: AzurePublicCloud 11 outboundType: Loadbalancer region: southindia 12 userTags: 13 createdBy: user environment: dev 1 Defines the trust bundle policy. 2 Required. The baseDomain parameter specifies the base domain of your cloud provider. The installation program prompts you for this value. 3 You must set the featureSet field as TechPreviewNoUpgrade . 4 The configuration for the machines that comprise compute. The compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - . If you do not provide these parameters and values, the installation program provides the default value. 5 To enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . 
If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 6 The configuration for the machines that comprise the control plane. The controlPlane section is a single mapping. The first line of the controlPlane section must not begin with a hyphen, - . You can use only one control plane pool. If you do not provide these parameters and values, the installation program provides the default value. 7 To enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . You cannot disable simultaneous multithreading in selected cluster machines. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 8 The installation program prompts you for this value. 9 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 10 Specifies the resource group for the base domain of the Azure DNS zone. 11 Specifies the name of the Azure cloud environment while configuring the Azure SDK with the Azure API endpoints. If you do not provide value, the default value is AzurePublicCloud . 12 Required. Specifies the name of the Azure region that hosts your cluster. The installation program prompts you for this value. 13 Defines the additional keys and values that the installation program adds as tags to all Azure resources that it creates. The user-defined tags have the following limitations: A tag key can have a maximum of 128 characters. A tag key must begin with a letter, end with a letter, number or underscore, and can contain only letters, numbers, underscores, periods, and hyphens. Tag keys are case-insensitive. Tag keys cannot be name . It cannot have prefixes such as kubernetes.io , openshift.io , microsoft , azure , and windows . A tag value can have a maximum of 256 characters. You can configure a maximum of 10 tags for resource group and resources. For more information about Azure tags, see Azure user-defined tags 6.8. Querying user-defined tags for Azure After creating the OpenShift Container Platform cluster, you can access the list of defined tags for the Azure resources. The format of the OpenShift Container Platform tags is kubernetes.io_cluster.<cluster_id>:owned . The cluster_id parameter is the value of .status.infrastructureName present in config.openshift.io/Infrastructure . Query the tags defined for Azure resources by running the following command: USD oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}' Example output [ [ { "key": "createdBy", "value": "user" }, { "key": "environment", "value": "dev" } ] ] 6.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. 
Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. 
Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 6.13. steps Customize your cluster . If necessary, you can opt out of remote health reporting .
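As a quick post-installation sanity check after exporting the kubeconfig, a minimal sketch that uses standard oc commands (the output varies by cluster):

$ oc get nodes
$ oc get clusteroperators

All nodes should report a Ready status and all cluster Operators should report Available before you proceed with further customization.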
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "az vm image list --all --offer rh-ocp-worker --publisher redhat -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker RedHat rh-ocp-worker RedHat:rh-ocp-worker:rh-ocpworker:4.8.2021122100 4.8.2021122100 rh-ocp-worker RedHat rh-ocp-worker-gen1 RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table", "Offer Publisher Sku Urn Version ------------- -------------- ------------------ -------------------------------------------------------------- -------------- rh-ocp-worker redhat-limited rh-ocp-worker redhat-limited:rh-ocp-worker:rh-ocp-worker:4.8.2021122100 4.8.2021122100 rh-ocp-worker redhat-limited rh-ocp-worker-gen1 redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.8.2021122100 4.8.2021122100", "az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>", "az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>", "apiVersion: v1 baseDomain: example.com compute: - hyperthreading: Enabled name: worker platform: azure: type: Standard_D4s_v5 osImage: publisher: redhat offer: rh-ocp-worker sku: rh-ocp-worker version: 4.8.2021122100 replicas: 3", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 controlPlane: 2 hyperthreading: Enabled 3 4 name: master platform: azure: encryptionAtHost: true ultraSSDCapability: Enabled osDisk: diskSizeGB: 1024 5 diskType: Premium_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id type: Standard_D8s_v3 replicas: 3 compute: 6 - hyperthreading: Enabled 7 name: worker platform: azure: ultraSSDCapability: Enabled type: Standard_D2s_v3 encryptionAtHost: true osDisk: diskSizeGB: 512 8 diskType: Standard_LRS diskEncryptionSet: resourceGroup: disk_encryption_set_resource_group name: disk_encryption_set_name subscriptionId: secondary_subscription_id zones: 9 - \"1\" - \"2\" - \"3\" replicas: 5 metadata: name: test-cluster 10 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 11 serviceNetwork: - 172.30.0.0/16 platform: azure: defaultMachinePlatform: ultraSSDCapability: Enabled baseDomainResourceGroupName: resource_group 12 region: centralus 13 resourceGroupName: existing_resource_group 14 outboundType: 
Loadbalancer cloudName: AzurePublicCloud pullSecret: '{\"auths\": ...}' 15 fips: false 16 sshKey: ssh-ed25519 AAAA... 17", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "additionalTrustBundlePolicy: Proxyonly 1 apiVersion: v1 baseDomain: catchall.azure.devcluster.openshift.com 2 featureSet: TechPreviewNoUpgrade 3 compute: 4 - architecture: amd64 hyperthreading: Enabled 5 name: worker platform: {} replicas: 3 controlPlane: 6 architecture: amd64 hyperthreading: Enabled 7 name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: user 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: baseDomainResourceGroupName: os4-common 10 cloudName: AzurePublicCloud 11 outboundType: Loadbalancer region: southindia 12 userTags: 13 createdBy: user environment: dev", "oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}'", "[ [ { \"key\": \"createdBy\", \"value\": \"user\" }, { \"key\": \"environment\", \"value\": \"dev\" } ] ]", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_azure/installing-azure-customizations
Chapter 48. ProcessService
Chapter 48. ProcessService 48.1. CountProcesses GET /v1/processcount CountProcesses returns the count of processes. 48.1.1. Description 48.1.2. Parameters 48.1.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 48.1.3. Return Type V1CountProcessesResponse 48.1.4. Content Type application/json 48.1.5. Responses Table 48.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1CountProcessesResponse 0 An unexpected error response. RuntimeError 48.1.6. Samples 48.1.7. Common object reference 48.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 48.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 48.1.7.2. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 48.1.7.3. V1CountProcessesResponse Field Name Required Nullable Type Description Format count Integer int32 48.2. GetProcessesByDeployment GET /v1/processes/deployment/{deploymentId} GetProcessesByDeployment returns the processes executed in the given deployment. 48.2.1. Description 48.2.2. Parameters 48.2.2.1. Path Parameters Name Description Required Default Pattern deploymentId X null 48.2.3. Return Type V1GetProcessesResponse 48.2.4. Content Type application/json 48.2.5. Responses Table 48.2. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetProcessesResponse 0 An unexpected error response. RuntimeError 48.2.6. Samples 48.2.7. Common object reference 48.2.7.1. ProcessSignalLineageInfo Field Name Required Nullable Type Description Format parentUid Long int64 parentExecFilePath String 48.2.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 48.2.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. 
Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 48.2.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 48.2.7.4. StorageProcessIndicator Field Name Required Nullable Type Description Format id String deploymentId String containerName String podId String podUid String signal StorageProcessSignal clusterId String namespace String containerStartTime Date date-time imageId String 48.2.7.5. StorageProcessSignal Field Name Required Nullable Type Description Format id String A unique UUID for identifying the message We have this here instead of at the top level because we want to have each message to be self contained. containerId String time Date date-time name String args String execFilePath String pid Long int64 uid Long int64 gid Long int64 lineage List of string scraped Boolean lineageInfo List of ProcessSignalLineageInfo 48.2.7.6. V1GetProcessesResponse Field Name Required Nullable Type Description Format processes List of StorageProcessIndicator 48.3. GetGroupedProcessByDeploymentAndContainer GET /v1/processes/deployment/{deploymentId}/grouped/container GetGroupedProcessByDeploymentAndContainer returns all the processes executed grouped by deployment and container. 48.3.1. Description 48.3.2. Parameters 48.3.2.1. Path Parameters Name Description Required Default Pattern deploymentId X null 48.3.3. Return Type V1GetGroupedProcessesWithContainerResponse 48.3.4. Content Type application/json 48.3.5. Responses Table 48.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetGroupedProcessesWithContainerResponse 0 An unexpected error response. RuntimeError 48.3.6. Samples 48.3.7. Common object reference 48.3.7.1. ProcessSignalLineageInfo Field Name Required Nullable Type Description Format parentUid Long int64 parentExecFilePath String 48.3.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 48.3.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). 
The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 48.3.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 48.3.7.4. StorageProcessIndicator Field Name Required Nullable Type Description Format id String deploymentId String containerName String podId String podUid String signal StorageProcessSignal clusterId String namespace String containerStartTime Date date-time imageId String 48.3.7.5. StorageProcessSignal Field Name Required Nullable Type Description Format id String A unique UUID for identifying the message We have this here instead of at the top level because we want to have each message to be self contained. containerId String time Date date-time name String args String execFilePath String pid Long int64 uid Long int64 gid Long int64 lineage List of string scraped Boolean lineageInfo List of ProcessSignalLineageInfo 48.3.7.6. V1GetGroupedProcessesWithContainerResponse Field Name Required Nullable Type Description Format groups List of V1ProcessNameAndContainerNameGroup 48.3.7.7. V1ProcessGroup Field Name Required Nullable Type Description Format args String signals List of StorageProcessIndicator 48.3.7.8. V1ProcessNameAndContainerNameGroup Field Name Required Nullable Type Description Format name String containerName String timesExecuted Long int64 groups List of V1ProcessGroup suspicious Boolean 48.4. GetGroupedProcessByDeployment GET /v1/processes/deployment/{deploymentId}/grouped GetGroupedProcessByDeployment returns all the processes executed grouped by deployment. 48.4.1. Description 48.4.2. Parameters 48.4.2.1. Path Parameters Name Description Required Default Pattern deploymentId X null 48.4.3. Return Type V1GetGroupedProcessesResponse 48.4.4. Content Type application/json 48.4.5. Responses Table 48.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetGroupedProcessesResponse 0 An unexpected error response. RuntimeError 48.4.6. Samples 48.4.7. Common object reference 48.4.7.1. ProcessSignalLineageInfo Field Name Required Nullable Type Description Format parentUid Long int64 parentExecFilePath String 48.4.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. 
Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 48.4.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 48.4.7.3. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 48.4.7.4. StorageProcessIndicator Field Name Required Nullable Type Description Format id String deploymentId String containerName String podId String podUid String signal StorageProcessSignal clusterId String namespace String containerStartTime Date date-time imageId String 48.4.7.5. StorageProcessSignal Field Name Required Nullable Type Description Format id String A unique UUID for identifying the message We have this here instead of at the top level because we want to have each message to be self contained. containerId String time Date date-time name String args String execFilePath String pid Long int64 uid Long int64 gid Long int64 lineage List of string scraped Boolean lineageInfo List of ProcessSignalLineageInfo 48.4.7.6. 
V1GetGroupedProcessesResponse Field Name Required Nullable Type Description Format groups List of V1ProcessNameGroup 48.4.7.7. V1ProcessGroup Field Name Required Nullable Type Description Format args String signals List of StorageProcessIndicator 48.4.7.8. V1ProcessNameGroup Field Name Required Nullable Type Description Format name String timesExecuted Long int64 groups List of V1ProcessGroup
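For illustration only, a minimal sketch of calling the endpoints documented above with curl, assuming bearer-token authentication; the central.example.com host name and the ROX_API_TOKEN variable are placeholders:

$ curl -k -H "Authorization: Bearer $ROX_API_TOKEN" "https://central.example.com/v1/processcount"
$ curl -k -H "Authorization: Bearer $ROX_API_TOKEN" "https://central.example.com/v1/processes/deployment/<deployment_id>/grouped/container"

The first request returns a V1CountProcessesResponse and the second returns a V1GetGroupedProcessesWithContainerResponse, as described in the tables above.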
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 13", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 13", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Next available tag: 13" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/processservice
B.77. python-gudev
B.77. python-gudev B.77.1. RHBA-2010:0850 - python-gudev bug fix update An updated python-gudev package that fixes a bug is now available for Red Hat Enterprise Linux 6. Python-gudev is one of the core components of the Red Hat Network (RHN) registration process. Bug Fix BZ# 637084 Under some circumstances, using the 'rhn_register' command to register a system with the Red Hat Network (RHN) might fail. When this issue is encountered, the 'rhn_register' command returns an error similar to one of the examples shown below. With this update, these errors are no longer returned and the 'rhn_register' command works as expected. All users of python-gudev are advised to upgrade to this updated package, which resolves this issue.
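To apply the fix, a minimal sketch assuming the standard yum tooling on Red Hat Enterprise Linux 6 (run as root):

# yum update python-gudev

After the update, running 'rhn_register' again should complete without the errors shown below.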
[ "rhn_register Segmentation fault (core dumped)", "rhn_register ***MEMORY-ERROR***: rhn_register[11525]: GSlice: assertion failed: sinfo->n_allocated > 0 Aborted (core dumped)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/python-gudev
Appendix A. List of tickets by component
Appendix A. List of tickets by component Bugzilla and JIRA IDs are listed in this document for reference. Bugzilla bugs that are publicly accessible include a link to the ticket. Component Tickets 389-ds-base BZ#2033398 , BZ#2016014 , BZ#1817505 , BZ#1780842 NetworkManager BZ#1996617 , BZ#2001563 , BZ#2079849 , BZ#1920398 SLOF BZ#1910848 accel-config BZ#1843266 anaconda BZ#1990145 , BZ#2050140 , BZ#1914955 , BZ#1929105 ansible-collection-microsoft-sql BZ#2038256 , BZ#2057651 apr BZ#1819607 audit BZ#1906065 , BZ#1939406 , BZ#1921658 , BZ#1927884 authselect BZ#1892761 bind9.16 BZ#1873486 bind BZ#2013993 brltty BZ#2008197 certmonger BZ#1577570 clevis BZ#1949289 , BZ#2018292 cloud-init BZ#2023940, BZ#2026587, BZ#1750862 cockpit BZ#1666722 coreutils BZ#2030661 corosync-qdevice BZ#1784200 crash BZ#1906482 createrepo_c BZ#1992209 , BZ#1973588 crypto-policies BZ#2020295, BZ#2023734 , BZ#2023744 , BZ#1919155 , BZ#1660839 cups-container BZ#1913715 cups BZ#2032965 device-mapper-multipath BZ#2008101 , BZ#2009624, BZ#2011699 distribution BZ#1657927 dmidecode BZ#2027665 dnf-plugins-core BZ#1868047 dnf BZ#1986657 ec2-images BZ#1862930 edk2 BZ#1741615, BZ#1935497 fapolicyd BZ#1939379 , BZ#2054741 fence-agents BZ#1977588 , BZ#1775847 fido-device-onboard BZ#1989930 firewalld BZ#1980206, BZ#1871860 freeradius BZ#2030173 , BZ#1958979 galera BZ#2042306 gcc BZ#1996862 gdb BZ#2012818, BZ#1853140 glibc BZ#1934162 , BZ#2007327 , BZ#2023420 , BZ#1929928, BZ#2000374 gnome-shell-extensions BZ#1751336 , BZ#1717947 gnome-software BZ#1668760 gnutls BZ#1628553 golang BZ#2014088 grafana-pcp BZ#1993149 grafana BZ#1993214 grub2 BZ#1583445 hostapd BZ#2016946 initscripts BZ#1875485 ipa BZ#1731484 , BZ#1924707 , BZ#1664719 , BZ#1664718 js-d3-flame-graph BZ#1993194 kdump-anaconda-addon BZ#2086100 kernel BZ#1953926, BZ#2068429, BZ#1910885, BZ#2040171, BZ#2022903, BZ#2036863, BZ#1979382, BZ#1949614, BZ#1983635, BZ#1964761, BZ#2069047, BZ#2054656, BZ#1868526, BZ#1694705, BZ#1730502, BZ#1609288, BZ#1602962, BZ#1865745, BZ#1906870, BZ#1924016, BZ#1942888, BZ#1812577, BZ#1910358, BZ#1930576, BZ#2046396, BZ#1793389, BZ#1654962, BZ#1940674, BZ#1971506, BZ#2022359, BZ#2059262, BZ#1605216, BZ#1519039, BZ#1627455, BZ#1501618, BZ#1633143, BZ#1814836, BZ#1696451, BZ#1348508, BZ#1837187, BZ#1904496, BZ#1660337, BZ#1905243, BZ#1878207, BZ#1665295, BZ#1871863, BZ#1569610, BZ#1794513 kexec-tools BZ#2004000 krb5 BZ#1877991 libcap BZ#1950187 , BZ#2032813 libffi BZ#1875340 libgnome-keyring BZ#1607766 libguestfs BZ#1554735 libreswan BZ#2017352, BZ#1989050 libseccomp BZ#2019893 libselinux-python-2.8-module BZ#1666328 libssh BZ#1896651 libvirt BZ#2014369, BZ#1664592, BZ#1332758 , BZ#1528684 llvm-toolset BZ#2001133 log4j-2-module BZ#1937468 lsvpd BZ#1993557 lvm2 BZ#1496229, BZ#1768536 make BZ#2004246 mariadb BZ#1944653 , BZ#1942330 mesa BZ#1886147 net-snmp BZ#1908331 nfs-utils BZ#1592011 nftables BZ#2047821 nginx-1.20-module BZ#1991787 nispor BZ#1848817 nmstate BZ#2003976 , BZ#2004006 nss_nis BZ#1803161 nss BZ#1817533 , BZ#1645153 opencryptoki BZ#1984993 opencv BZ#2007780 , BZ#1886310 openmpi BZ#1866402 opensc BZ#1947025 openscap BZ#1970529 , BZ#2041781 openssh BZ#1926103, BZ#2015828 , BZ#2044354 openssl BZ#1810911 osbuild-composer BZ#1951936 , BZ#2056451 oscap-anaconda-addon BZ#1834716 , BZ#2075508 , BZ#1843932 , BZ#1665082 pacemaker BZ#1082146 , BZ#1470834, BZ#1376538 pcp BZ#1991763 , BZ#1629455 pcs BZ#1990784, BZ#1936833 , BZ#1619620, BZ#1847102, BZ#1851335 pcsc-lite BZ#1928154 , BZ#2014641 perl BZ#2021471 php BZ#1978356 pki-core BZ#1729215 , 
BZ#1628987 pmdk-1_fileformat_v6-module BZ#2009889 podman JIRA:RHELPLAN-92741, JIRA:RHELPLAN-108830, JIRA:RHELPLAN-77238 policycoreutils BZ#1731501 postfix BZ#1711885 powerpc-utils BZ#2028690, BZ#2022225 pykickstart BZ#1637872 qemu-kvm BZ#1982993 , BZ#2004416 , BZ#1662007, BZ#2020133 , BZ#2012373 , BZ#1740002 , BZ#1719687 , BZ#1651994 rear BZ#2048454, BZ#2049091 , BZ#2035939 , BZ#1868421, BZ#2083301 redhat-support-tool BZ#2018194 , BZ#2018195 , BZ#1767195, BZ#2064575 , BZ#1802026 restore BZ#1997366 rhel-system-roles BZ#1967321 , BZ#2040038 , BZ#2041627 , BZ#2034908 , BZ#1979714 , BZ#2005727 , BZ#2006231 , BZ#2021678 , BZ#2021683 , BZ#2047504 , BZ#2040812 , BZ#2064388 , BZ#2058655 , BZ#2058772 , BZ#2029605 , BZ#2057172 , BZ#2049747 , BZ#1854988, BZ#1893743 , BZ#1993379 , BZ#1993311 , BZ#2021661 , BZ#2016514 , BZ#1985022 , BZ#2016511 , BZ#2010327 , BZ#2012316 , BZ#2031521 , BZ#2054364 , BZ#2054363 , BZ#2008931 , BZ#1695634, BZ#1897565 , BZ#2054365 , BZ#1932678 , BZ#2057656 , BZ#2022458 , BZ#2057645 , BZ#2057661 , BZ#2021685 , BZ#2006081 rig BZ#1888705 rpm-ostree BZ#2032594 rpm BZ#1940895 , BZ#1688849 rsyslog BZ#1947907 , BZ#1679512 , JIRA:RHELPLAN-10431 rteval BZ#2012285 rust-toolset BZ#2002883 samba BZ#2013596 , BZ#2009213, JIRA:RHELPLAN-13195, Jira:RHELDOCS-16612 scap-security-guide BZ#1983061 , BZ#2053587 , BZ#2023569 , BZ#1990736, BZ#2002850 , BZ#2000264 , BZ#2058033 , BZ#2030966 , BZ#1884687 , BZ#1993826 , BZ#1956972 , BZ#2014485 , BZ#2021802 , BZ#2028428 , BZ#1858866 , BZ#1750755 , BZ#2038977 scap-workbench BZ#2051890 selinux-policy BZ#1860443 , BZ#1461914 sos BZ#1873185, BZ#2011413 spice BZ#1849563 sssd BZ#2015070 , BZ#1947671 strace BZ#2038992 subscription-manager BZ#2000883 , BZ#2049441 texinfo BZ#2022201 udica BZ#1763210 usbguard BZ#2000000 , BZ#1963271 vdo BZ#1949163 virt-manager BZ#1995125, BZ#2026985 wayland BZ#1673073 xfsdump BZ#2020494 xorg-x11-server BZ#1698565 other BZ#1839151 , BZ#1780124 , BZ#2089409, JIRA:RHELPLAN-100359, JIRA:RHELPLAN-103147, JIRA:RHELPLAN-103146, JIRA:RHELPLAN-79161, BZ#2046325 , JIRA:RHELPLAN-108438, JIRA:RHELPLAN-100175, BZ#2083036 , JIRA:RHELPLAN-102505, BZ#2062117 , JIRA:RHELPLAN-75169, JIRA:RHELPLAN-100174, JIRA:RHELPLAN-101137, JIRA:RHELPLAN-57941, JIRA:RHELPLAN-101133, JIRA:RHELPLAN-101138, JIRA:RHELPLAN-95126, JIRA:RHELPLAN-103855, JIRA:RHELPLAN-103579, BZ#2025814, BZ#2077770, BZ#1777138, BZ#1640697, BZ#1697896, BZ#1971061 , BZ#1959020, BZ#1961722, BZ#1659609, BZ#1687900 , BZ#1757877, BZ#1741436, JIRA:RHELPLAN-59111, JIRA:RHELPLAN-27987, JIRA:RHELPLAN-34199, JIRA:RHELPLAN-57914, JIRA:RHELPLAN-96940, BZ#1974622, BZ#2020301 , BZ#2028361, BZ#2041997 , BZ#2035158 , JIRA:RHELPLAN-109067, JIRA:RHELPLAN-115603, BZ#1690207, JIRA:RHELPLAN-1212, BZ#1559616, BZ#1889737 , JIRA:RHELPLAN-14047, BZ#1769727 , JIRA:RHELPLAN-27394, JIRA:RHELPLAN-27737, BZ#1906489 , JIRA:RHELPLAN-100039, BZ#1642765, JIRA:RHELPLAN-10304, BZ#1646541, BZ#1647725, BZ#1932222 , BZ#1686057 , BZ#1748980 , JIRA:RHELPLAN-71200, BZ#1827628, JIRA:RHELPLAN-45858, BZ#1871025 , BZ#1871953 , BZ#1874892, BZ#1916296, JIRA:RHELPLAN-100400, BZ#1926114 , BZ#1904251, BZ#2011208 , JIRA:RHELPLAN-59825, BZ#1920624 , JIRA:RHELPLAN-70700, BZ#1929173 , JIRA:RHELPLAN-85066, BZ#2006665 , JIRA:RHELPLAN-98983, BZ#2009113, BZ#1958250 , BZ#2038929 , BZ#2029338 , BZ#2061288 , BZ#2060759 , BZ#2055826, BZ#2059626
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.6_release_notes/list_of_tickets_by_component
Appendix A. Using an NFS Share for Content Storage
Appendix A. Using an NFS Share for Content Storage Your environment requires adequate hard disk space for content storage. In some situations, it is useful to use an NFS share to store this content. This appendix shows how to mount the NFS share on your Satellite Server's content management component. Important Use high-bandwidth, low-latency storage for the /var/lib/pulp file system. Red Hat Satellite has many I/O-intensive operations; therefore, high-latency, low-bandwidth storage might degrade performance. Procedure Create the NFS share. This example uses a share at nfs.example.com:/Satellite/pulp . Ensure this share provides the appropriate permissions to Satellite Server and its apache user. Stop Satellite services on your Satellite Server: Ensure Satellite Server has the nfs-utils package installed: You need to copy the existing contents of /var/lib/pulp to the NFS share. First, mount the NFS share to a temporary location: Copy the existing contents of /var/lib/pulp to the temporary location: Set the permissions for all files on the share to use the pulp user. Unmount the temporary storage location: Remove the existing contents of /var/lib/pulp : Edit the /etc/fstab file and add the following line: This makes the mount persistent across system reboots. Ensure that you include the SELinux context. Enable the mount: Confirm that the NFS share is mounted at /var/lib/pulp : Also confirm that the existing content is present at the mount point /var/lib/pulp : Start Satellite services on your Satellite Server: Satellite Server now uses the NFS share to store content. Run a content synchronization to ensure the NFS share works as expected. For more information, see Section 6.6, "Synchronizing Repositories" .
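One way to set the ownership described in the procedure, as a minimal sketch that assumes the user and group on the Satellite Server are both named pulp:

# chown -R pulp:pulp /mnt/temp

Run this before you unmount the temporary location so that the copied content on the share is owned by the pulp user.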
[ "satellite-maintain service stop", "satellite-maintain packages install nfs-utils", "mkdir /mnt/temp mount -o rw nfs.example.com:/Satellite/pulp /mnt/temp", "cp -r /var/lib/pulp/* /mnt/temp/.", "umount /mnt/temp", "rm -rf /var/lib/pulp/*", "nfs.example.com:/Satellite/pulp /var/lib/pulp nfs rw,hard,intr,context=\"system_u:object_r:pulpcore_var_lib_t:s0\"", "mount -a", "df Filesystem 1K-blocks Used Available Use% Mounted on nfs.example.com:/Satellite/pulp 309506048 58632800 235128224 20% /var/lib/pulp", "ls /var/lib/pulp", "satellite-maintain service start" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/using_an_nfs_share_for_content_storage_content-management
Chapter 10. Updating SSSD Containers
Chapter 10. Updating SSSD Containers This procedure describes how you can update System Security Services Daemon (SSSD) containers if a new version of the rhel7/sssd image is released. Procedure Stop the SSSD service: If SSSD is running as a system container: If SSSD is running as an application container: Use the docker rmi command to remove the image: Install the latest SSSD image: Start the SSSD service: If SSSD runs as a system container: If SSSD runs as an application container, start each container using the atomic start command:
[ "systemctl stop sssd", "atomic stop <container_name>", "docker rm rhel7/sssd", "atomic install rhel7/sssd", "systemctl start sssd", "atomic start <container_name>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/using_containerized_identity_management_services/sssd-centralized-ccache-updating-sssd-containers
3.16. Differences Between Red Hat Enterprise Linux 6 and 7
3.16. Differences Between Red Hat Enterprise Linux 6 and 7 The RPM Package Manager in Red Hat Enterprise Linux 7 ships with a number of feature changes that are not available in the older version of the RPM Package Manager shipped with Red Hat Enterprise Linux 6. This section provides more details on the changes that may affect you when building your Software Collection packages for both systems. Differences in library support are detailed in Section 3.5.3, "Software Collection Library Support in Red Hat Enterprise Linux 7" . Differences in SELinux support are documented in Section 3.15.1, "SELinux Support in Red Hat Enterprise Linux 7" . 3.16.1. The %license Macro The %license macro allows you to specify the license file to be installed by your package. The macro is only supported by the RPM Package Manager in Red Hat Enterprise Linux 7. When building your Software Collection package on both Red Hat Enterprise Linux 6 and 7, declare the %license macro for Red Hat Enterprise Linux 6 as follows: %{!?_licensedir:%global license %%doc} 3.16.2. Missing runtime Subpackage Dependencies On Red Hat Enterprise Linux 7, the scl tool automatically generates the needed Requires on the Software Collection runtime subpackage. This does not work on Red Hat Enterprise Linux 6. When building your Software Collection for that system, you need to explicitly specify the dependency on the runtime subpackage in each Software Collection package: Requires: %{?scl_prefix}runtime 3.16.3. The scl-package() Provides By design, building a Software Collection package generates a number of Provide: scl-package() tags. The purpose of these is to internally identify the built package as belonging to a specific Software Collection. The tags are detailed in the following table. Table 3.2. Provides in Red Hat Enterprise Linux 7 Software Collection package Provide ${software_collection_1} scl-package(software_collection_1) ${software_collection_1}-build scl-package(software_collection_1) ${software_collection_1}-runtime scl-package(software_collection_1) Red Hat Enterprise Linux 6 ships with an older version of the RPM Package Manager, so as an exception, building the same package on Red Hat Enterprise Linux 6 only generates a single Provide: scl-package() tag, as detailed in the following table. This is an expected behavior and the differences are handled internally by the scl tool. Table 3.3. Provide in Red Hat Enterprise Linux 6 Software Collection package Provide ${software_collection_1} scl-package(software_collection_1) Do not use these internally generated dependencies to list packages that belong to a particular Software Collection. For information on how to properly list Software Collection packages, see Section 1.5, "Listing Installed Software Collections" .
[ "%{!?_licensedir:%global license %%doc}", "Requires: %{?scl_prefix}runtime" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-differences_between_red_hat_enterprise_linux_6_and_7
Chapter 6. Adjusting the search size and time limit
Chapter 6. Adjusting the search size and time limit Some queries, such as requesting a list of IdM users, can return a very large number of entries. By tuning these search operations, you can improve the overall server performance when running the ipa *-find commands, such as ipa user-find , and when displaying corresponding lists in the Web UI. Search size limit Defines the maximum number of entries returned for a request sent to the server from a client's CLI or from a browser accessing the IdM Web UI. Default: 100 entries. Search time limit Defines the maximum time (in seconds) that the server waits for searches to run. Once the search reaches this limit, the server stops the search and returns the entries discovered in that time. Default: 2 seconds. If you set the values to -1 , IdM will not apply any limits when searching. Important Setting search size or time limits too high can negatively affect server performance. 6.1. Adjusting the search size and time limit in the command line The following procedure describes adjusting search size and time limits in the command line: Globally For a specific entry Procedure To display current search time and size limits in CLI, use the ipa config-show command: To adjust the limits globally for all queries, use the ipa config-mod command and add the --searchrecordslimit and --searchtimelimit options. For example: To temporarily adjust the limits only for a specific query, add the --sizelimit or --timelimit options to the command. For example: 6.2. Adjusting the search size and time limit in the Web UI The following procedure describes adjusting global search size and time limits in the IdM Web UI. Procedure Log in to the IdM Web UI. Click IPA Server . On the IPA Server tab, click Configuration . Set the required values in the Search Options area. Default values are: Search size limit: 100 entries Search time limit: 2 seconds Click Save at the top of the page.
[ "ipa config-show Search time limit: 2 Search size limit: 100", "ipa config-mod --searchrecordslimit=500 --searchtimelimit=5", "ipa user-find --sizelimit=200 --timelimit=120" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/tuning_performance_in_identity_management/adjusting-the-search-size-and-time-limit_tuning-performance-in-idm
Chapter 9. Disabling the web console in OpenShift Container Platform
Chapter 9. Disabling the web console in OpenShift Container Platform You can disable the OpenShift Container Platform web console. 9.1. Prerequisites Deploy an OpenShift Container Platform cluster. 9.2. Disabling the web console You can disable the web console by editing the consoles.operator.openshift.io resource. Edit the consoles.operator.openshift.io resource: $ oc edit consoles.operator.openshift.io cluster The following example displays the parameters from this resource that you can modify: apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: managementState: Removed 1 1 Set the managementState parameter value to Removed to disable the web console. The other valid values for this parameter are Managed , which enables the console under the cluster's control, and Unmanaged , which means that you are taking control of web console management.
[ "oc edit consoles.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Console metadata: name: cluster spec: managementState: Removed 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/web_console/disabling-web-console
16.4. Installing virt-who Manually
16.4. Installing virt-who Manually This section describes how to manually attach the subscription provided by the hypervisor. Procedure 16.2. How to attach a subscription manually List subscription information and find the Pool ID First you need to list the available subscriptions which are of the virtual type. Run the following command in a terminal as root: Note the Pool ID displayed. Copy this ID as you will need it in the next step. Attach the subscription with the Pool ID Using the Pool ID you copied in the previous step, run the attach command. Replace the Pool ID XYZ123 with the Pool ID you retrieved. Run the following command in a terminal as root:
[ "subscription-manager list --avail --match-installed | grep 'Virtual' -B12 Subscription Name: Red Hat Enterprise Linux ES (Basic for Virtualization) Provides: Red Hat Beta Oracle Java (for RHEL Server) Red Hat Enterprise Linux Server SKU: ------- Pool ID: XYZ123 Available: 40 Suggested: 1 Service Level: Basic Service Type: L1-L3 Multi-Entitlement: No Ends: 01/02/2017 System Type: Virtual", "subscription-manager attach --pool=XYZ123 Successfully attached a subscription for: Red Hat Enterprise Linux ES (Basic for Virtualization)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/install-virt-who-manually
Chapter 2. Selecting a cluster installation method and preparing it for users
Chapter 2. Selecting a cluster installation method and preparing it for users Before you install OpenShift Container Platform, decide what kind of installation process to follow and verify that you have all of the required resources to prepare the cluster for users. 2.1. Selecting a cluster installation type Before you install an OpenShift Container Platform cluster, you need to select the best installation instructions to follow. Think about your answers to the following questions to select the best option. 2.1.1. Do you want to install and manage an OpenShift Container Platform cluster yourself? If you want to install and manage OpenShift Container Platform yourself, you can install it on the following platforms: Alibaba Cloud Amazon Web Services (AWS) on 64-bit x86 instances Amazon Web Services (AWS) on 64-bit ARM instances Microsoft Azure on 64-bit x86 instances Microsoft Azure on 64-bit ARM instances Microsoft Azure Stack Hub Google Cloud Platform (GCP) on 64-bit x86 instances Google Cloud Platform (GCP) on 64-bit ARM instances Red Hat OpenStack Platform (RHOSP) IBM Cloud(R) IBM Z(R) or IBM(R) LinuxONE IBM Z(R) or IBM(R) LinuxONE for Red Hat Enterprise Linux (RHEL) KVM IBM Power(R) IBM Power(R) Virtual Server Nutanix VMware vSphere Bare metal or other platform agnostic infrastructure You can deploy an OpenShift Container Platform 4 cluster to both on-premise hardware and to cloud hosting services, but all of the machines in a cluster must be in the same data center or cloud hosting service. If you want to use OpenShift Container Platform but you do not want to manage the cluster yourself, you can choose from several managed service options. If you want a cluster that is fully managed by Red Hat, you can use OpenShift Dedicated . You can also use OpenShift as a managed service on Azure, AWS, IBM Cloud(R), or Google Cloud Platform. For more information about managed services, see the OpenShift Products page. If you install an OpenShift Container Platform cluster with a cloud virtual machine as a virtual bare metal, the corresponding cloud-based storage is not supported. 2.1.2. Have you used OpenShift Container Platform 3 and want to use OpenShift Container Platform 4? If you used OpenShift Container Platform 3 and want to try OpenShift Container Platform 4, you need to understand how different OpenShift Container Platform 4 is. OpenShift Container Platform 4 weaves the Operators that package, deploy, and manage Kubernetes applications and the operating system that the platform runs on, Red Hat Enterprise Linux CoreOS (RHCOS), together seamlessly. Instead of deploying machines and configuring their operating systems so that you can install OpenShift Container Platform on them, the RHCOS operating system is an integral part of the OpenShift Container Platform cluster. Deploying the operating system for the cluster machines is part of the installation process for OpenShift Container Platform. See Differences between OpenShift Container Platform 3 and 4 . Because you need to provision machines as part of the OpenShift Container Platform cluster installation process, you cannot upgrade an OpenShift Container Platform 3 cluster to OpenShift Container Platform 4. Instead, you must create a new OpenShift Container Platform 4 cluster and migrate your OpenShift Container Platform 3 workloads to them. For more information about migrating, see Migrating from OpenShift Container Platform 3 to 4 overview . 
Because you must migrate to OpenShift Container Platform 4, you can use any type of production cluster installation process to create your new cluster. 2.1.3. Do you want to use existing components in your cluster? Because the operating system is integral to OpenShift Container Platform, it is easier to let the installation program for OpenShift Container Platform stand up all of the infrastructure. These are called installer provisioned infrastructure installations. In this type of installation, you can provide some existing infrastructure to the cluster, but the installation program deploys all of the machines that your cluster initially needs. You can deploy an installer-provisioned infrastructure cluster without specifying any customizations to the cluster or its underlying machines to Alibaba Cloud , AWS , Azure , Azure Stack Hub , GCP , Nutanix . If you need to perform basic configuration for your installer-provisioned infrastructure cluster, such as the instance type for the cluster machines, you can customize an installation for Alibaba Cloud , AWS , Azure , GCP , Nutanix . For installer-provisioned infrastructure installations, you can use an existing VPC in AWS , vNet in Azure , or VPC in GCP . You can also reuse part of your networking infrastructure so that your cluster in AWS , Azure , GCP can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. If you have existing accounts and credentials on these clouds, you can re-use them, but you might need to modify the accounts to have the required permissions to install OpenShift Container Platform clusters on them. You can use the installer-provisioned infrastructure method to create appropriate machine instances on your hardware for vSphere , and bare metal . Additionally, for vSphere , you can also customize additional network parameters during installation. For some installer-provisioned infrastructure installations, for example on the VMware vSphere and bare metal platforms, the external traffic that reaches the ingress virtual IP (VIP) is not balanced between the default IngressController replicas. For vSphere and bare metal installer-provisioned infrastructure installations where exceeding the baseline IngressController router performance is expected, you must configure an external load balancer. Configuring an external load balancer achieves the performance of multiple IngressController replicas. For more information about the baseline IngressController performance, see Baseline Ingress Controller (router) performance . For more information about configuring an external load balancer, see Configuring an external load balancer . If you want to reuse extensive cloud infrastructure, you can complete a user-provisioned infrastructure installation. With these installations, you manually deploy the machines that your cluster requires during the installation process. If you perform a user-provisioned infrastructure installation on AWS , Azure , Azure Stack Hub , you can use the provided templates to help you stand up all of the required components. You can also reuse a shared VPC on GCP . Otherwise, you can use the provider-agnostic installation method to deploy a cluster into other clouds. You can also complete a user-provisioned infrastructure installation on your existing hardware. 
If you use RHOSP , IBM Z(R) or IBM(R) LinuxONE , IBM Z(R) and IBM(R) LinuxONE with RHEL KVM , IBM Power , or vSphere , use the specific installation instructions to deploy your cluster. If you use other supported hardware, follow the bare metal installation procedure. For some of these platforms, such as vSphere , and bare metal , you can also customize additional network parameters during installation. 2.1.4. Do you need extra security for your cluster? If you use a user-provisioned installation method, you can configure a proxy for your cluster. The instructions are included in each installation procedure. If you want to prevent your cluster on a public cloud from exposing endpoints externally, you can deploy a private cluster with installer-provisioned infrastructure on AWS , Azure , or GCP . If you need to install a cluster that has limited access to the internet, such as a disconnected or restricted network cluster, you can mirror the installation packages and install the cluster from them. Follow detailed instructions for user-provisioned infrastructure installations into restricted networks for AWS , GCP , IBM Z(R) or IBM(R) LinuxONE , IBM Z(R) or IBM(R) LinuxONE with RHEL KVM , IBM Power(R) , vSphere , or bare metal . You can also install a cluster into a restricted network using installer-provisioned infrastructure by following detailed instructions for AWS , GCP , IBM Cloud(R) , Nutanix , RHOSP , and vSphere . If you need to deploy your cluster to an AWS GovCloud region , AWS China region , or Azure government region , you can configure those custom regions during an installer-provisioned infrastructure installation. You can also configure the cluster machines to use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation during installation. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 2.2. Preparing your cluster for users after installation Some configuration is not required to install the cluster but is recommended before your users access the cluster. You can customize the cluster itself by customizing the Operators that make up your cluster and integrate your cluster with other required systems, such as an identity provider. For a production cluster, you must configure the following integrations: Persistent storage An identity provider Monitoring core OpenShift Container Platform components 2.3. Preparing your cluster for workloads Depending on your workload needs, you might need to take extra steps before you begin deploying applications. For example, after you prepare infrastructure to support your application build strategy , you might need to make provisions for low-latency workloads or to protect sensitive workloads . You can also configure monitoring for application workloads. If you plan to run Windows workloads , you must enable hybrid networking with OVN-Kubernetes during the installation process; hybrid networking cannot be enabled after your cluster is installed. 2.4. Supported installation methods for different platforms You can perform different types of installations on different platforms. Note Not all installation options are supported for all platforms, as shown in the following tables. 
A checkmark indicates that the option is supported and links to the relevant section. Table 2.1. Installer-provisioned infrastructure options Alibaba AWS (64-bit x86) AWS (64-bit ARM) Azure (64-bit x86) Azure (64-bit ARM) Azure Stack Hub GCP (64-bit x86) GCP (64-bit ARM) Nutanix RHOSP Bare metal (64-bit x86) Bare metal (64-bit ARM) vSphere IBM Cloud(R) IBM Z(R) IBM Power(R) IBM Power(R) Virtual Server Default [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Custom [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Network customization [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Restricted network [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Private clusters [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Existing virtual private networks [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Government regions [✓] [✓] Secret regions [✓] China regions [✓] Table 2.2. User-provisioned infrastructure options Alibaba AWS (64-bit x86) AWS (64-bit ARM) Azure (64-bit x86) Azure (64-bit ARM) Azure Stack Hub GCP (64-bit x86) GCP (64-bit ARM) Nutanix RHOSP Bare metal (64-bit x86) Bare metal (64-bit ARM) vSphere IBM Cloud(R) IBM Z(R) IBM Z(R) with RHEL KVM IBM Power(R) Platform agnostic Custom [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Network customization [✓] [✓] [✓] Restricted network [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] [✓] Shared VPC hosted outside of cluster project [✓] [✓]
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installation_overview/installing-preparing
7.188. powerpc-utils
7.188. powerpc-utils 7.188.1. RHBA-2013:0384 - powerpc-utils bug fix and enhancement update Updated powerpc-utils packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The powerpc-utils packages provide various utilities for the PowerPC platform. Note The powerpc-utils packages have been upgraded to upstream version 1.2.13, which provides a number of bug fixes and enhancements over the previous version, including support for physical Ethernet devices. The snap and hvcsadmin scripts now use the "use strict" construct to prevent a Perl interpreter from allowing usage of unsafe constructs, such as symbolic references, undeclared variables and using strings without quotation marks. The snap script now also allows you to add a hostname and timestamp to its output file name by specifying the "-t" option. (BZ#822656) Bug Fixes BZ#739699 The bootlist command is used to read and modify the bootlist in NVRAM so that a system can boot from the correct device. Previously, when using a multipath device as a boot device, the bootlist command used its Linux logical name. However, Open Firmware, which is used on IBM POWER systems, is unable to parse Linux logical names. Therefore, booting from a multipath device on IBM POWER systems failed. This update modifies the bootlist script so that bootlist now supports multipath devices as a parameter. The script converts Linux logical names of multipath devices to the path names that are parsable by Open Firmware. Booting from a multipath device on IBM POWER systems now succeeds as expected. BZ#857841 Previously, the "hvcsadmin -status" command did not provide any output if no IBM hypervisor virtual console server (hvcs) adapters were found on the system. This update corrects the hvcsadmin script so that when executing the "hvcsadmin -status" command, the user can now see a message indicating that no hvcs adapters were found. BZ#870212 The lsdevinfo script did not previously take into consideration the "status" attribute for Ethernet devices. This attribute is essential for the End-to-End Virtual Device View feature, so the feature did not work without it. This update modifies lsdevinfo so the script now also checks the status of Ethernet devices and sets the status attribute to 1. The End-to-End Virtual Device View feature now works as expected. All users of powerpc-utils are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
null
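As a rough illustration of the bootlist fix described in BZ#739699, passing a multipath device and then displaying the resulting boot list might look like the following. The device name is hypothetical and the available options depend on your powerpc-utils version:
bootlist -m normal /dev/mapper/mpatha
bootlist -m normal -o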
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/powerpc-utils
Chapter 3. Setting Up DM Multipath
Chapter 3. Setting Up DM Multipath This chapter provides step-by-step example procedures for configuring DM Multipath. It includes the following procedures: Basic DM Multipath setup Ignoring local disks Adding more devices to the configuration file Starting multipath in the initramfs file system 3.1. Setting Up DM Multipath Before setting up DM Multipath on your system, ensure that your system has been updated and includes the device-mapper-multipath package. You set up multipath with the mpathconf utility, which creates the multipath configuration file /etc/multipath.conf . If the /etc/multipath.conf file already exists, the mpathconf utility will edit it. If the /etc/multipath.conf file does not exist, the mpathconf utility will use the /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf file as the starting file. If the /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf file does not exist, the mpathconf utility will create the /etc/multipath.conf file from scratch. For more information on the mpathconf utility, see the mpathconf(8) man page. If you do not need to edit the /etc/multipath.conf file, you can set up DM Multipath for a basic failover configuration by running the following command. This command enables the multipath configuration file and starts the multipathd daemon. If you need to edit the /etc/multipath.conf file before starting the multipathd daemon, use the following procedure to set up DM Multipath for a basic failover configuration. Enter the mpathconf command with the --enable option specified: For information on additional options to the mpathconf command that you may require, see the mpathconf man page or enter the mpathconf command with the --help option specified. Edit the /etc/multipath.conf file if necessary. The default settings for DM Multipath are compiled into the system and do not need to be explicitly set in the /etc/multipath.conf file. The default value of path_grouping_policy is set to failover , so in this example you do not need to edit the /etc/multipath.conf file. For information on changing the values in the configuration file to something other than the defaults, see Chapter 4, The DM Multipath Configuration File . The initial defaults section of the configuration file configures your system so that the names of the multipath devices are of the form mpath n ; without this setting, the names of the multipath devices would be aliased to the WWID of the device. Save the configuration file and exit the editor, if necessary. Execute the following command: Since the value of user_friendly_names is set to yes in the configuration file, the multipath devices will be created as /dev/mapper/mpath n . For information on setting the name of the device to an alias of your choosing, see Chapter 4, The DM Multipath Configuration File . If you do not want to use user friendly names, you can enter the following command: Note If you find that you need to edit the multipath configuration file after you have started the multipath daemon, you must execute the systemctl reload multipathd.service command for the changes to take effect.
[ "mpathconf --enable --with_multipathd y", "mpathconf --enable", "mpathconf --help usage: /sbin/mpathconf <command> Commands: Enable: --enable Disable: --disable Set user_friendly_names (Default y): --user_friendly_names <y|n> Set find_multipaths (Default y): --find_multipaths <y|n> Load the dm-multipath modules on enable (Default y): --with_module <y|n> start/stop/reload multipathd (Default n): --with_multipathd <y|n>", "systemctl start multipathd.service", "mpathconf --enable --user_friendly_names n" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/mpio_setup
12.4. Updating an Object Class
12.4. Updating an Object Class This section describes how to update an object class using the command line and the web console. 12.4.1. Updating an Object Class Using the Command Line Use the dsconf utility to update an object class entry. For example: For further details about object class definitions, see Section 12.1.2, "Object Classes" . 12.4.2. Updating an Object Class Using the Web Console To update an object class using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Select Schema Objectclasses . Click the Choose Action button to the right of the object class entry you want to edit. Select Edit Object Class . Update the parameters. Click Save .
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com schema objectclasses replace examplePerson --oid=\"2.16.840.1133730.2.123\" --desc=\"Example Person Object Class\" --sup=\"inetOrgPerson\" --kind=\"AUXILIARY\" --must=\"cn\" --may exampleDisplayName exampleAlias" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/updating_an_object_class
Chapter 10. IO Scheduler and block IO Tapset
Chapter 10. IO Scheduler and block IO Tapset This family of probe points is used to probe block IO layer and IO scheduler activities. It contains the following probe points:
null
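As a hedged illustration of how probes from this tapset family are typically used, the one-liner below traces block I/O requests. It assumes that the ioblock.request probe point and its devname and size variables are available in your installed tapset version:
stap -e 'probe ioblock.request { printf("%s: %d bytes\n", devname, size) }'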
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/iosched-dot-stp
4.6. Devices
4.6. Devices mpt2sas lockless mode The mpt2sas driver is fully supported. However, when used in the lockless mode, the driver is a Technology Preview. Package: kernel-2.6.32-431
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/devices_tp
22.4. Storage parameters
22.4. Storage parameters Table 22.1. Storage Module Parameters Hardware Module Parameters 3ware Storage Controller and 9000 series 3w-xxxx.ko, 3w-9xxx.ko Adaptec Advanced Raid Products, Dell PERC2, 2/Si, 3/Si, 3/Di, HP NetRAID-4M, IBM ServeRAID, and ICP SCSI driver aacraid.ko nondasd - Control scanning of hba for nondasd devices. 0=off, 1=on dacmode - Control whether dma addressing is using 64 bit DAC. 0=off, 1=on commit - Control whether a COMMIT_CONFIG is issued to the adapter for foreign arrays. This is typically needed in systems that do not have a BIOS. 0=off, 1=on startup_timeout - The duration of time in seconds to wait for adapter to have its kernel up and running. This is typically adjusted for large systems that do not have a BIOS aif_timeout - The duration of time in seconds to wait for applications to pick up AIFs before deregistering them. This is typically adjusted for heavily burdened systems. numacb - Request a limit to the number of adapter control blocks (FIB) allocated. Valid values are 512 and down. Default is to use suggestion from Firmware. acbsize - Request a specific adapter control block (FIB) size. Valid values are 512, 2048, 4096 and 8192. Default is to use suggestion from Firmware. Adaptec 28xx, R9xx, 39xx AHA-284x, AHA-29xx, AHA-394x, AHA-398x, AHA-274x, AHA-274xT, AHA-2842, AHA-2910B, AHA-2920C, AHA-2930/U/U2, AHA-2940/W/U/UW/AU/, U2W/U2/U2B/, U2BOEM, AHA-2944D/WD/UD/UWD, AHA-2950U2/W/B, AHA-3940/U/W/UW/, AUW/U2W/U2B, AHA-3950U2D, AHA-3985/U/W/UW, AIC-777x, AIC-785x, AIC-786x, AIC-787x, AIC-788x , AIC-789x, AIC-3860 aic7xxx.ko verbose - Enable verbose/diagnostic logging allow_memio - Allow device registers to be memory mapped debug - Bitmask of debug values to enable no_probe - Toggle EISA/VLB controller probing probe_eisa_vl - Toggle EISA/VLB controller probing no_reset - Suppress initial bus resets extended - Enable extended geometry on all controllers periodic_otag - Send an ordered tagged transaction periodically to prevent tag starvation. This may be required by some older disk drives or RAID arrays. tag_info:<tag_str> - Set per-target tag depth global_tag_depth:<int> - Global tag depth for every target on every bus seltime:<int> - Selection Timeout (0/256ms,1/128ms,2/64ms,3/32ms) IBM ServeRAID ips.ko LSI Logic MegaRAID Mailbox Driver megaraid_mbox.ko unconf_disks - Set to expose unconfigured disks to kernel (default=0) busy_wait - Max wait for mailbox in microseconds if busy (default=10) max_sectors - Maximum number of sectors per IO command (default=128) cmd_per_lun - Maximum number of commands per logical unit (default=64) fast_load - Faster loading of the driver, skips physical devices! 
(default=0) debug_level - Debug level for driver (default=0) Emulex LightPulse Fibre Channel SCSI driver lpfc.ko lpfc_poll - FCP ring polling mode control: 0 - none, 1 - poll with interrupts enabled 3 - poll and disable FCP ring interrupts lpfc_log_verbose - Verbose logging bit-mask lpfc_lun_queue_depth - Max number of FCP commands we can queue to a specific LUN lpfc_hba_queue_depth - Max number of FCP commands we can queue to a lpfc HBA lpfc_scan_down - Start scanning for devices from highest ALPA to lowest lpfc_nodev_tmo - Seconds driver will hold I/O waiting for a device to come back lpfc_topology - Select Fibre Channel topology lpfc_link_speed - Select link speed lpfc_fcp_class - Select Fibre Channel class of service for FCP sequences lpfc_use_adisc - Use ADISC on rediscovery to authenticate FCP devices lpfc_ack0 - Enable ACK0 support lpfc_cr_delay - A count of milliseconds after which an interrupt response is generated lpfc_cr_count - A count of I/O completions after which an interrupt response is generated lpfc_multi_ring_support - Determines number of primary SLI rings to spread IOCB entries across lpfc_fdmi_on - Enable FDMI support lpfc_discovery_threads - Maximum number of ELS commands during discovery lpfc_max_luns - Maximum allowed LUN lpfc_poll_tmo - Milliseconds driver will wait between polling FCP ring HP Smart Array cciss.ko LSI Logic MPT Fusion mptbase.ko mptctl.ko mptfc.ko mptlan.ko mptsas.ko mptscsih.ko mptspi.ko mpt_msi_enable - MSI Support Enable mptfc_dev_loss_tmo - Initial time the driver programs the transport to wait for an rport to return following a device loss event. mpt_pt_clear - Clear persistency table mpt_saf_te - Force enabling SEP Processor QLogic Fibre Channel Driver qla2xxx.ko ql2xlogintimeout - Login timeout value in seconds. qlport_down_retry - Maximum number of command retries to a port that returns a PORT-DOWN status ql2xplogiabsentdevice - Option to enable PLOGI to devices that are not present after a Fabric scan. ql2xloginretrycount - Specify an alternate value for the NVRAM login retry count. ql2xallocfwdump - Option to enable allocation of memory for a firmware dump during HBA initialization. Default is 1 - allocate memory. extended_error_logging - Option to enable extended error logging. ql2xfdmienable - Enables FDMI registrations. NCR, Symbios and LSI 8xx and 1010 sym53c8xx cmd_per_lun - The maximum number of tags to use by default tag_ctrl - More detailed control over tags per LUN burst - Maximum burst. 0 to disable, 255 to read from registers led - Set to 1 to enable LED support diff - 0 for no differential mode, 1 for BIOS, 2 for always, 3 for not GPIO3 irqm - 0 for open drain, 1 to leave alone, 2 for totem pole buschk - 0 to not check, 1 for detach on error, 2 for warn on error hostid - The SCSI ID to use for the host adapters verb - 0 for minimal verbosity, 1 for normal, 2 for excessive debug - Set bits to enable debugging settle - Settle delay in seconds. Default 3 nvram - Option currently not used excl - List ioport addresses here to prevent controllers from being attached safe - Set other settings to a "safe mode"
null
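Module parameters such as those in the table above are normally supplied either on the module load command line or through an options line in /etc/modprobe.conf (or a file under /etc/modprobe.d/ on later releases). The parameter values below are illustrative only:
modprobe lpfc lpfc_lun_queue_depth=30
options lpfc lpfc_lun_queue_depth=30 lpfc_nodev_tmo=60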
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-modules-scsi
Preface
Preface Red Hat Trusted Application Pipeline (RHTAP) is not really a single product. Instead, it is a set of products that combine to form a highly automated, customizable, and secure platform for building applications. By default, RHTAP includes the following products: Advanced Cluster Security (ACS) : to scan your artifacts for vulnerabilities. Developer Hub : a self-service portal, to consolidate management of applications across their lifecycle. Enterprise Contract : to validate your artifacts against customizable policies. OpenShift GitOps : to manage Kubernetes deployments and their infrastructure. OpenShift Pipelines : to enable automation and provide visibility for continuous integration and continuous delivery (CI/CD) of software. Quay.io : a container registry, to store your artifacts. Trusted Artifact Signer : to sign and validate the artifacts that RHTAP produces. Trusted Profile Analyzer : to deliver actionable information about your security posture. You can see exactly which versions of these products RHTAP supports in the compatibility and support matrix of our Release notes . Note Red Hat Trusted Application Pipeline supports many alternatives to this default combination of products. Later in the installation process, this documentation explains how to customize your deployment to meet your needs. Because a fully-operational instance of RHTAP involves all of the products listed above, installing RHTAP takes some effort. However, we have automated the vast majority of this process with an installer tool packaged as a container image. Be aware that the RHTAP installer is not a manager: it does not support upgrades. The installer generates your first deployment of RHTAP. But after installation, you must manage each product within RHTAP separately. And while the installer can be run multiple times, doing so after manually changing the configuration of a product may have unpredictable results. Additionally, the products that the installer deploys are production ready, but they are sized for a proof of concept or a very small team. For larger teams, manual reconfiguration of the products is most likely necessary and should be done by following procedures documented for each individual product. Lastly, please be aware that the RHTAP subscription only includes Red Hat Developer Hub, Red Hat Trusted Artifact Signer, Red Hat Trusted Profile Analyzer, and Red Hat Enterprise Contract. The RHTAP installer deploys all the other products listed above, too. But to use them, you must purchase a subscription for OpenShift Plus. Installation steps To install RHTAP using the installer, you must complete the following procedures. Configuring GitHub for RHTAP (Optional) Customizing your installation Installing RHTAP in your cluster (Optional) Completing integrations after installation The following pages of this document explain each of those installation steps in detail.
null
https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/installing_red_hat_trusted_application_pipeline/pr01
Chapter 4. Securing the JBoss EAP management console with an OpenID provider
Chapter 4. Securing the JBoss EAP management console with an OpenID provider You can secure the JBoss EAP management console with an external identity provider, such as Red Hat build of Keycloak, using OIDC. By using an external identity provider, you can delegate authentication to the identity provider. To secure the JBoss EAP management console using OIDC, follow these procedures: Configuring Red Hat build of Keycloak to secure JBoss EAP management console Securing the JBoss EAP management console using OpenID Connect 4.1. JBoss EAP management console security with OIDC You can secure the JBoss EAP management console with OpenID Connect (OIDC) by configuring an OIDC provider, such as Red Hat build of Keycloak, and the elytron-oidc-client subsystem. Important Securing the management console of JBoss EAP running as a managed domain with OIDC is not supported. JBoss EAP management console security with OIDC works as follows: When you configure a secure-server resource in the elytron-oidc-client subsystem, the JBoss EAP management console redirects to the OIDC provider login page for login. JBoss EAP then uses the secure-deployment resource configuration to secure the management interface with bearer token authentication. Note OIDC relies on accessing a web application in a browser. Therefore, the JBoss EAP management CLI can't be secured with OIDC. RBAC support You can configure and assign roles in the OIDC provider to implement role-based access control (RBAC) to the JBoss EAP management console. JBoss EAP includes or excludes the users roles for RBAC as defined in the JBoss EAP RBAC configuration. For more information about RBAC, see Role-Based Access Control in the JBoss EAP 7.4 Security Architecture guide. Additional resources Configuring Red Hat build of Keycloak to secure JBoss EAP management console Securing the JBoss EAP management console using OpenID Connect 4.2. Configuring Red Hat build of Keycloak to secure JBoss EAP management console Configure the required users, roles, and clients in the OpenID Connect (OIDC) provider to secure the JBoss EAP management console. Two clients are required to secure the management console with OIDC. The clients must be configured as follows: A client configured for standard flow. A client configured as bearer-only client. The following procedure outlines the minimum steps required to get started with securing the JBoss EAP management console using OIDC for testing purposes. For detailed configurations, see the Red Hat build of Keycloak documentation . Prerequisites You have administrator access to Red Hat build of Keycloak. Red Hat build of Keycloak is running. Procedure Create a realm in Red Hat build of Keycloak using the Red Hat build of Keycloak admin console; for example, example_jboss_infra . You will use this realm to create the required users, roles, and clients. For more information, see Creating a realm . Create a user. For example, user1 . For more information, see Creating users . Create a password for the user. For example, passwordUser1 . For more information, see Setting a password for a user . Create a role. For example, Administrator . To enable role-based access control (RBAC) in JBoss EAP, the name should be one of the standard RBAC roles like Administrator . For more information about RBAC in JBoss EAP, see Role-Based Access Control in the JBoss EAP 7.4 Security Architecture guide. For more information about creating roles in Red Hat build of Keycloak, see Creating a realm role . Assign roles to users. 
For more information, see Assigning role mappings . Create an OpenID Connect client, for example, jboss-console . Ensure that the following capability configuration values are checked: Standard flow Direct access grants Set the following attributes at the minimum on the Login settings page: Set Valid Redirect URIs to the management console URI. For example, http://localhost:9990 . Set Web Origins to the management console URI. For example, http://localhost:9990 . Create another OpenID Connect client, for example, jboss-management , as a bearer-only client. In capability configuration, uncheck the following options: Standard flow Direct access grants You do not need to specify any fields on the Login settings page. You can now secure the JBoss EAP management console by using the clients you defined. For more information, see Securing the JBoss EAP management console using OpenID Connect . Additional resources JBoss EAP management console security with OIDC 4.3. Securing the JBoss EAP management console using OpenID Connect When you secure the JBoss EAP management console using OpenID Connect (OIDC), JBoss EAP redirects to the OIDC provider for users to log in to the management console. Prerequisites You have configured the required clients in the OIDC provider. For more information, see Configuring Red Hat build of Keycloak to secure JBoss EAP management console . Procedure Configure the OIDC provider in the elytron-oidc-client subsystem. Syntax Example Create a secure-deployment resource called wildfly-management to protect the management interface. Syntax Example OPTIONAL: You can enable role-based access control (RBAC) using the following commands. Create a secure-server resource called wildfly-console that references the jboss-console client. Syntax Example Important The JBoss EAP management console requires that the secure-server resource be specifically named wildfly-console . Verification Access the management console. By default, the management console is available at http://localhost:9990 . You are redirected to the OIDC provider. Log in with the credentials of the user you created in the OIDC provider. The JBoss EAP management console is now secured with OIDC. Additional resources JBoss EAP management console security with OIDC elytron-oidc-client subsystem attributes
[ "/subsystem=elytron-oidc-client/provider=keycloak:add(provider-url= <OIDC_provider_URL> )", "/subsystem=elytron-oidc-client/provider=keycloak:add(provider-url=http://localhost:8180/realms/example_jboss_infra)", "/subsystem=elytron-oidc-client/secure-deployment=wildfly-management:add(provider= <OIDC_provider_name> ,client-id= <OIDC_client_name> ,principal-attribute= <attribute_to_use_as_principal> ,bearer-only=true,ssl-required= <internal_or_external> )", "/subsystem=elytron-oidc-client/secure-deployment=wildfly-management:add(provider=keycloak,client-id=jboss-management,principal-attribute=preferred_username,bearer-only=true,ssl-required=EXTERNAL)", "/core-service=management/access=authorization:write-attribute(name=provider,value=rbac) /core-service=management/access=authorization:write-attribute(name=use-identity-roles,value=true)", "/subsystem=elytron-oidc-client/secure-server=wildfly-console:add(provider= <OIDC_provider_name> ,client-id= <OIDC_client_name> ,public-client=true)", "/subsystem=elytron-oidc-client/secure-server=wildfly-console:add(provider=keycloak,client-id=jboss-console,public-client=true)" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_single_sign-on_with_jboss_eap/securing-the-jboss-eap-management-console-with-an-openid-provider_default
Chapter 84. LDAP
Chapter 84. LDAP Since Camel 1.5 Only producer is supported The LDAP component allows you to perform searches in LDAP servers using filters as the message payload. This component uses standard JNDI ( javax.naming package) to access the server. 84.1. Dependencies When using ldap with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ldap-starter</artifactId> </dependency> 84.2. URI format The ldapServerBean in the URI refers to a DirContext bean in the registry. The LDAP component only supports producer endpoints, which means that an ldap URI cannot appear in the from at the start of a route. 84.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 84.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 84.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 84.4. Component Options The LDAP component supports 2 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 84.5. 
Endpoint Options The LDAP endpoint is configured using URI syntax: with the following path and query parameters: 84.5.1. Path Parameters (1 parameters) Name Description Default Type dirContextName (producer) Required Name of either a javax.naming.directory.DirContext, or java.util.Hashtable, or Map bean to lookup in the registry. If the bean is either a Hashtable or Map then a new javax.naming.directory.DirContext instance is created for each use. If the bean is a javax.naming.directory.DirContext then the bean is used as given. The latter may not be possible in all situations where the javax.naming.directory.DirContext must not be shared, and in those situations it can be better to use java.util.Hashtable or Map instead. String 84.5.2. Query Parameters (5 parameters) Name Description Default Type base (producer) The base DN for searches. ou=system String pageSize (producer) When specified the ldap module uses paging to retrieve all results (most LDAP Servers throw an exception when trying to retrieve more than 1000 entries in one query). To be able to use this a LdapContext (subclass of DirContext) has to be passed in as ldapServerBean (otherwise an exception is thrown). Integer returnedAttributes (producer) Comma-separated list of attributes that should be set in each entry of the result. String scope (producer) Specifies how deeply to search the tree of entries, starting at the base DN. Enum values: object onelevel subtree subtree String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 84.6. Result The result is returned to Out body as a List<javax.naming.directory.SearchResult> object. 84.7. DirContext The URI, ldap:ldapserver , references a Spring bean with the ID, ldapserver . The ldapserver bean may be defined as follows: <bean id="ldapserver" class="javax.naming.directory.InitialDirContext" scope="prototype"> <constructor-arg> <props> <prop key="java.naming.factory.initial">com.sun.jndi.ldap.LdapCtxFactory</prop> <prop key="java.naming.provider.url">ldap://localhost:10389</prop> <prop key="java.naming.security.authentication">none</prop> </props> </constructor-arg> </bean> The preceding example declares a regular Sun based LDAP DirContext that connects anonymously to a locally hosted LDAP server. Note DirContext objects are not required to support concurrency by contract. It is therefore important that the directory context is declared with the setting, scope="prototype" , in the bean definition or that the context supports concurrency. In the Spring framework, prototype scoped objects are instantiated each time they are looked up. 84.8. Security concerns related to LDAP injection Note The camel-ldap component uses the message body as filter the search results. Therefore, the message body should be protected from LDAP injection. To assist with this, you can use org.apache.camel.component.ldap.LdapHelper utility class that has method(s) to escape string values to be LDAP injection safe. See LDAP Injection for more information. 
84.9. Samples Following on from the Spring configuration above, the code sample below sends an LDAP request to filter search a group for a member. The Common Name is then extracted from the response. ProducerTemplate template = exchange.getContext().createProducerTemplate(); Collection<SearchResult> results = template.requestBody( "ldap:ldapserver?base=ou=mygroup,ou=groups,ou=system", "(member=uid=huntc,ou=users,ou=system)", Collection.class); if (results.size() > 0) { // Extract what we need from the device's profile Iterator resultIter = results.iterator(); SearchResult searchResult = (SearchResult) resultIter.next(); Attributes attributes = searchResult.getAttributes(); Attribute deviceCNAttr = attributes.get("cn"); String deviceCN = (String) deviceCNAttr.get(); // ... } If no specific filter is required - for example, you just need to look up a single entry - specify a wildcard filter expression. For example, if the LDAP entry has a Common Name, use a filter expression like: 84.9.1. Binding using credentials A Camel end user donated this sample code he used to bind to the ldap server using credentials. Properties props = new Properties(); props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); props.setProperty(Context.PROVIDER_URL, "ldap://localhost:389"); props.setProperty(Context.URL_PKG_PREFIXES, "com.sun.jndi.url"); props.setProperty(Context.REFERRAL, "ignore"); props.setProperty(Context.SECURITY_AUTHENTICATION, "simple"); props.setProperty(Context.SECURITY_PRINCIPAL, "cn=Manager"); props.setProperty(Context.SECURITY_CREDENTIALS, "secret"); DefaultRegistry reg = new DefaultRegistry(); reg.bind("myldap", new InitialLdapContext(props, null)); CamelContext context = new DefaultCamelContext(reg); context.addRoutes( new RouteBuilder() { @Override public void configure() throws Exception { from("direct:start").to("ldap:myldap?base=ou=test"); } } ); context.start(); ProducerTemplate template = context.createProducerTemplate(); Endpoint endpoint = context.getEndpoint("direct:start"); Exchange exchange = endpoint.createExchange(); exchange.getIn().setBody("(uid=test)"); Exchange out = template.send(endpoint, exchange); Collection<SearchResult> data = out.getMessage().getBody(Collection.class); assert data != null; assert !data.isEmpty(); System.out.println(out.getMessage().getBody()); context.stop(); 84.10. Configuring SSL All that is required is to create a custom socket factory and reference it in the InitialDirContext bean - see below sample. 
SSL Configuration <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <sslContextParameters xmlns="http://camel.apache.org/schema/spring" id="sslContextParameters" > <keyManagers keyPassword="{{keystore.pwd}}"> <keyStore resource="{{keystore.url}}" password="{{keystore.pwd}}"/> </keyManagers> </sslContextParameters> <bean id="customSocketFactory" class="com.example.ldap.CustomSocketFactory"> <constructor-arg index="0" ref="sslContextParameters"/> </bean> <bean id="ldapserver" class="javax.naming.directory.InitialDirContext" scope="prototype"> <constructor-arg> <props> <prop key="java.naming.factory.initial">com.sun.jndi.ldap.LdapCtxFactory</prop> <prop key="java.naming.provider.url">ldaps://127.0.0.1:10636</prop> <prop key="java.naming.security.protocol">ssl</prop> <prop key="java.naming.security.authentication">none</prop> <prop key="java.naming.ldap.factory.socket">com.example.ldap.CustomSocketFactory</prop> </props> </constructor-arg> </bean> </beans> Custom Socket Factory package com.example.ldap; import java.io.IOException; import java.net.InetAddress; import java.net.Socket; import java.security.KeyStore; import javax.net.SocketFactory; import javax.net.ssl.SSLContext; import javax.net.ssl.SSLSocketFactory; import javax.net.ssl.TrustManagerFactory; import org.apache.camel.support.jsse.SSLContextParameters; /** * The CustomSocketFactory. Loads the KeyStore and creates an instance of SSLSocketFactory */ public class CustomSocketFactory extends SSLSocketFactory { private static SSLSocketFactory socketFactory; /** * Called by the getDefault() method. 
*/ public CustomSocketFactory() { } /** * Called by Spring Boot DI to initialize an instance of SocketFactory */ public CustomSocketFactory(SSLContextParameters sslContextParameters) { try { KeyStore keyStore = sslContextParameters.getKeyManagers().getKeyStore().createKeyStore(); TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509"); tmf.init(keyStore); SSLContext ctx = SSLContext.getInstance("TLS"); ctx.init(null, tmf.getTrustManagers(), null); socketFactory = ctx.getSocketFactory(); } catch (Exception ex) { ex.printStackTrace(System.err); } } /** * Getter for the SocketFactory */ public static SocketFactory getDefault() { return new CustomSocketFactory(); } @Override public String[] getDefaultCipherSuites() { return socketFactory.getDefaultCipherSuites(); } @Override public String[] getSupportedCipherSuites() { return socketFactory.getSupportedCipherSuites(); } @Override public Socket createSocket(Socket socket, String string, int i, boolean bln) throws IOException { return socketFactory.createSocket(socket, string, i, bln); } @Override public Socket createSocket(String string, int i) throws IOException { return socketFactory.createSocket(string, i); } @Override public Socket createSocket(String string, int i, InetAddress ia, int i1) throws IOException { return socketFactory.createSocket(string, i, ia, i1); } @Override public Socket createSocket(InetAddress ia, int i) throws IOException { return socketFactory.createSocket(ia, i); } @Override public Socket createSocket(InetAddress ia, int i, InetAddress ia1, int i1) throws IOException { return socketFactory.createSocket(ia, i, ia1, i1); } } 84.11. Spring Boot Auto-Configuration The component supports 3 options, which are listed below. Name Description Default Type camel.component.ldap.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.ldap.enabled Whether to enable auto configuration of the ldap component. This is enabled by default. Boolean camel.component.ldap.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
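As an aside (not part of the Camel documentation above), the auto-configuration options in the table can also be set programmatically from Java instead of application.properties. The following is a minimal sketch assuming a Spring Boot application that uses camel-ldap-starter and already registers an InitialDirContext bean named ldapserver; the route name and base DN are illustrative placeholders.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.ldap.LdapComponent;
import org.springframework.stereotype.Component;

@Component
public class LdapSearchRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Programmatic equivalent of camel.component.ldap.lazy-start-producer=true
        getContext().getComponent("ldap", LdapComponent.class).setLazyStartProducer(true);

        // "ldapserver" refers to the DirContext bean registered elsewhere in the application;
        // the message body carries the LDAP filter, as in the earlier samples.
        from("direct:search")
            .to("ldap:ldapserver?base=ou=people,dc=example,dc=com");
    }
}

Sending a filter string such as (uid=test) to direct:search then returns a Collection<SearchResult>, exactly as in the samples above.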
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-ldap-starter</artifactId> </dependency>", "ldap:ldapServerBean[?options]", "ldap:dirContextName", "<bean id=\"ldapserver\" class=\"javax.naming.directory.InitialDirContext\" scope=\"prototype\"> <constructor-arg> <props> <prop key=\"java.naming.factory.initial\">com.sun.jndi.ldap.LdapCtxFactory</prop> <prop key=\"java.naming.provider.url\">ldap://localhost:10389</prop> <prop key=\"java.naming.security.authentication\">none</prop> </props> </constructor-arg> </bean>", "ProducerTemplate template = exchange.getContext().createProducerTemplate(); Collection<SearchResult> results = template.requestBody( \"ldap:ldapserver?base=ou=mygroup,ou=groups,ou=system\", \"(member=uid=huntc,ou=users,ou=system)\", Collection.class); if (results.size() > 0) { // Extract what we need from the device's profile Iterator resultIter = results.iterator(); SearchResult searchResult = (SearchResult) resultIter.next(); Attributes attributes = searchResult.getAttributes(); Attribute deviceCNAttr = attributes.get(\"cn\"); String deviceCN = (String) deviceCNAttr.get(); // }", "(cn=*)", "Properties props = new Properties(); props.setProperty(Context.INITIAL_CONTEXT_FACTORY, \"com.sun.jndi.ldap.LdapCtxFactory\"); props.setProperty(Context.PROVIDER_URL, \"ldap://localhost:389\"); props.setProperty(Context.URL_PKG_PREFIXES, \"com.sun.jndi.url\"); props.setProperty(Context.REFERRAL, \"ignore\"); props.setProperty(Context.SECURITY_AUTHENTICATION, \"simple\"); props.setProperty(Context.SECURITY_PRINCIPAL, \"cn=Manager\"); props.setProperty(Context.SECURITY_CREDENTIALS, \"secret\"); DefaultRegistry reg = new DefaultRegistry(); reg.bind(\"myldap\", new InitialLdapContext(props, null)); CamelContext context = new DefaultCamelContext(reg); context.addRoutes( new RouteBuilder() { @Override public void configure() throws Exception { from(\"direct:start\").to(\"ldap:myldap?base=ou=test\"); } } ); context.start(); ProducerTemplate template = context.createProducerTemplate(); Endpoint endpoint = context.getEndpoint(\"direct:start\"); Exchange exchange = endpoint.createExchange(); exchange.getIn().setBody(\"(uid=test)\"); Exchange out = template.send(endpoint, exchange); Collection<SearchResult> data = out.getMessage().getBody(Collection.class); assert data != null; assert !data.isEmpty(); System.out.println(out.getMessage().getBody()); context.stop();", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:context=\"http://www.springframework.org/schema/context\" xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <sslContextParameters xmlns=\"http://camel.apache.org/schema/spring\" id=\"sslContextParameters\" > <keyManagers keyPassword=\"{{keystore.pwd}}\"> <keyStore resource=\"{{keystore.url}}\" password=\"{{keystore.pwd}}\"/> </keyManagers> </sslContextParameters> <bean id=\"customSocketFactory\" class=\"com.example.ldap.CustomSocketFactory\"> <constructor-arg index=\"0\" ref=\"sslContextParameters\"/> </bean> <bean id=\"ldapserver\" class=\"javax.naming.directory.InitialDirContext\" scope=\"prototype\"> <constructor-arg> <props> <prop 
key=\"java.naming.factory.initial\">com.sun.jndi.ldap.LdapCtxFactory</prop> <prop key=\"java.naming.provider.url\">ldaps://127.0.0.1:10636</prop> <prop key=\"java.naming.security.protocol\">ssl</prop> <prop key=\"java.naming.security.authentication\">none</prop> <prop key=\"java.naming.ldap.factory.socket\">com.example.ldap.CustomSocketFactory</prop> </props> </constructor-arg> </bean> </beans>", "package com.example.ldap; import java.io.IOException; import java.net.InetAddress; import java.net.Socket; import java.security.KeyStore; import javax.net.SocketFactory; import javax.net.ssl.SSLContext; import javax.net.ssl.SSLSocketFactory; import javax.net.ssl.TrustManagerFactory; import org.apache.camel.support.jsse.SSLContextParameters; /** * The CustomSocketFactory. Loads the KeyStore and creates an instance of SSLSocketFactory */ public class CustomSocketFactory extends SSLSocketFactory { private static SSLSocketFactory socketFactory; /** * Called by the getDefault() method. */ public CustomSocketFactory() { } /** * Called by Spring Boot DI to initialize an instance of SocketFactory */ public CustomSocketFactory(SSLContextParameters sslContextParameters) { try { KeyStore keyStore = sslContextParameters.getKeyManagers().getKeyStore().createKeyStore(); TrustManagerFactory tmf = TrustManagerFactory.getInstance(\"SunX509\"); tmf.init(keyStore); SSLContext ctx = SSLContext.getInstance(\"TLS\"); ctx.init(null, tmf.getTrustManagers(), null); socketFactory = ctx.getSocketFactory(); } catch (Exception ex) { ex.printStackTrace(System.err); } } /** * Getter for the SocketFactory */ public static SocketFactory getDefault() { return new CustomSocketFactory(); } @Override public String[] getDefaultCipherSuites() { return socketFactory.getDefaultCipherSuites(); } @Override public String[] getSupportedCipherSuites() { return socketFactory.getSupportedCipherSuites(); } @Override public Socket createSocket(Socket socket, String string, int i, boolean bln) throws IOException { return socketFactory.createSocket(socket, string, i, bln); } @Override public Socket createSocket(String string, int i) throws IOException { return socketFactory.createSocket(string, i); } @Override public Socket createSocket(String string, int i, InetAddress ia, int i1) throws IOException { return socketFactory.createSocket(string, i, ia, i1); } @Override public Socket createSocket(InetAddress ia, int i) throws IOException { return socketFactory.createSocket(ia, i); } @Override public Socket createSocket(InetAddress ia, int i, InetAddress ia1, int i1) throws IOException { return socketFactory.createSocket(ia, i, ia1, i1); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-ldap-component-starter
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_eclipse_temurin_11.0.19/making-open-source-more-inclusive
Chapter 5. Container Images Based on Red Hat Software Collections 3.6
Chapter 5. Container Images Based on Red Hat Software Collections 3.6 Component Description Supported architectures Application Images rhscl/nodejs-14-rhel7 Node.js 14 platform for building and running applications x86_64, s390x, ppc64le rhscl/perl-530-rhel7 Perl 5.30 platform for building and running applications x86_64, s390x, ppc64le rhscl/php-73-rhel7 PHP 7.3 platform for building and running applications x86_64, s390x, ppc64le rhscl/ruby-25-rhel7 Ruby 2.5 platform for building and running applications (EOL) x86_64 Daemon Images rhscl/httpd-24-rhel7 Apache HTTP 2.4 Server x86_64, s390x, ppc64le rhscl/nginx-118-rhel7 nginx 1.18 server and a reverse proxy server (EOL) x86_64, s390x, ppc64le Legend: x86_64 - AMD64 and Intel 64 architectures s390x - 64-bit IBM Z ppc64le - IBM POWER, little endian All images are based on components from Red Hat Software Collections. The images are available for Red Hat Enterprise Linux 7 through the Red Hat Container Registry. For detailed information about components provided by Red Hat Software Collections 3.6, see the Red Hat Software Collections 3.6 Release Notes . For more information about the Red Hat Developer Toolset 10 components, see the Red Hat Developer Toolset 10 User Guide . For information regarding container images based on Red Hat Software Collections 2, see Using Red Hat Software Collections 2 Container Images . EOL images are no longer supported.
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/rhscl_3.6_images
1.5. Listing Installed Software Collections
1.5. Listing Installed Software Collections To get a list of Software Collections that are installed on the system, run the following command: scl --list To get a list of installed packages contained within a specified Software Collection, run the following command: scl --list software_collection_1
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-listing_installed_software_collections
Chapter 7. Assigning password administrator permissions
Chapter 7. Assigning password administrator permissions The Directory Manager can assign the password administrator role to a user or a group of users. Because password administrators need access control instructions (ACIs) with the appropriate permissions, Red Hat recommends that you configure a group to allow a single ACI set to manage all password administrators. Using the password administrator role is beneficial in the following scenarios: setting up an attribute that forces the user to change their password at the next login changing a user's password to a different storage scheme defined in the password policy Important A password administrator can perform any user password operations. When using a password administrator account or the Directory Manager (root DN) to set a password, password policies are bypassed and not verified. Do not use these accounts for regular user password management. Red Hat recommends performing ordinary password updates under an existing role in the database with permissions to update only the userPassword attribute. Note You can add a new passwordAdminSkipInfoUpdate: on/off setting under the cn=config entry to provide fine-grained control over password updates performed by password administrators. When you enable this setting, password updates do not update certain attributes, for example, passwordHistory , passwordExpirationTime , passwordRetryCount , pwdReset , and passwordExpWarned . 7.1. Assigning password administrator permissions in a global policy In a global policy, you can assign the password administrator role to a user or a group of users. Red Hat recommends that you configure a group to allow a single access control instruction (ACI) set to manage all password administrators. Prerequisites You have created a group named password_admins that includes all of the users to whom you want to assign the password administrator role. Procedure Create the ACI that defines the permissions for a password administrator role: Assign the password administrator role to the group: # dsconf -D " cn=Directory Manager " ldap://server.example.com pwpolicy set --pwdadmin " cn=password_admins,ou=groups,dc=example,dc=com " 7.2. Assigning password administrator permissions in a local policy In a local policy, you can assign the password administrator role to a user or a group of users. Red Hat recommends that you configure a group to allow a single access control instruction (ACI) set to manage all password administrators. Prerequisites You have created a group named password_admins that includes all of the users to whom you want to assign the password administrator role. Procedure Create the ACI that defines the permissions for a password administrator role: Assign the password administrator role to the group: # dsconf -D " cn=Directory Manager " ldap://server.example.com localpwp set ou=people,dc=example,dc=com --pwdadmin " cn=password_admins,ou=groups,dc=example,dc=com " 7.3. Additional resources Backing up Directory Server
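As an illustration only (this is not part of the documented procedure), the ACI shown in the ldapmodify example listed with the commands for this chapter can also be added over LDAP from Java with JNDI. The sketch assumes the same suffix, group, and Directory Manager bind DN used above; the host name and password are placeholders.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.ModificationItem;
import javax.naming.ldap.InitialLdapContext;

public class AddPasswordAdminAci {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://server.example.com:389"); // placeholder host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager");
        env.put(Context.SECURITY_CREDENTIALS, "password"); // placeholder password

        DirContext ctx = new InitialLdapContext(env, null);
        try {
            // The same ACI value that the ldapmodify example adds to ou=people,dc=example,dc=com.
            String aci = "(targetattr=\"userPassword || nsAccountLock || userCertificate || nsSshPublicKey\")"
                    + "(targetfilter=\"(objectClass=nsAccount)\")"
                    + "(version 3.0; acl \"Enable user password reset\"; allow (write, read)"
                    + "(groupdn=\"ldap:///cn=password_admins,ou=groups,dc=example,dc=com\");)";
            ModificationItem[] mods = {
                new ModificationItem(DirContext.ADD_ATTRIBUTE, new BasicAttribute("aci", aci))
            };
            ctx.modifyAttributes("ou=people,dc=example,dc=com", mods);
        } finally {
            ctx.close();
        }
    }
}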
[ "ldapmodify -D \" cn=Directory Manager \" -W -p 389 -h server.example.com -x << EOF dn: ou=people,dc=example,dc=com changetype: modify add: aci aci: (targetattr=\"userPassword || nsAccountLock || userCertificate || nsSshPublicKey\")(targetfilter=\"(objectClass=nsAccount)\")(version 3.0; acl \"Enable user password reset\"; allow (write, read)(groupdn=\" ldap:///cn=password_admins,ou=groups,dc=example,dc=com \");) EOF", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com pwpolicy set --pwdadmin \" cn=password_admins,ou=groups,dc=example,dc=com \"", "ldapmodify -D \" cn=Directory Manager \" -W -p 389 -h server.example.com -x << EOF dn: ou=people,dc=example,dc=com changetype: modify add: aci aci: (targetattr=\"userPassword || nsAccountLock || userCertificate || nsSshPublicKey\")(targetfilter=\"(objectClass=nsAccount)\")(version 3.0; acl \"Enable user password reset\"; allow (write, read)(groupdn=\" ldap:///cn=password_admins,ou=groups,dc=example,dc=com \");) EOF", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com localpwp set ou=people,dc=example,dc=com --pwdadmin \" cn=password_admins,ou=groups,dc=example,dc=com \"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/user_management_and_authentication/assembly_assigning-password-administrator-permissions
B.4. Specifying Directory Entries Using LDIF
B.4. Specifying Directory Entries Using LDIF Many types of entries can be stored in the directory. This section concentrates on three of the most common types of entries used in a directory: domain, organizational unit, and organizational person entries. The object classes defined for an entry are what indicate whether the entry represents a domain or domain component, an organizational unit, an organizational person, or some other type of entry. For a complete list of the object classes that can be used by default in the directory and a list of the most commonly used attributes, see the Red Hat Directory Server 11 Configuration, Command, and File Reference . B.4.1. Specifying Domain Entries Directories often have at least one domain entry. Typically this is the first, or topmost, entry in the directory. The domain entry often corresponds to the DNS host and domain name for your directory. For example, if the Directory Server host is called ldap.example.com , then the domain entry for the directory is probably named dc=ldap,dc=example,dc=com or simply dc=example,dc=com . The LDIF entry used to define a domain appears as follows: The following is a sample domain entry in LDIF format: Each element of the LDIF-formatted domain entry is defined in Table B.2, "LDIF Elements in Domain Entries" . Table B.2. LDIF Elements in Domain Entries LDIF Element Description dn: distinguished_name Required. Specifies the distinguished name for the entry. objectClass: top Required. Specifies the top object class. objectClass: domain Specifies the domain object class. This line defines the entry as a domain or domain component. See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes that can be used with this object class. dc: domain_component Attribute that specifies the domain's name. The server is typically configured during the initial setup to have a suffix or naming context in the form dc= hostname, dc= domain, dc= toplevel . For example, dc=ldap,dc=example,dc=com . The domain entry should use the leftmost dc value, such as dc: ldap . If the suffix were dc=example,dc=com , the dc value is dc: example . Do not create the entry for dn: dc=com unless the server has been configured to use that suffix. list_of_attributes Specifies the list of optional attributes to maintain for the entry. See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes that can be used with this object class. B.4.2. Specifying Organizational Unit Entries Organizational unit entries are often used to represent major branch points, or subdirectories, in the directory tree. They correspond to major, reasonably static entities within the enterprise, such as a subtree that contains people or a subtree that contains groups. The organizational unit attribute that is contained in the entry may also represent a major organization within the company, such as marketing or engineering. However, this style is discouraged. Red Hat strongly encourages using a flat directory tree. There is usually more than one organizational unit, or branch point, within a directory tree. The LDIF that defines an organizational unit entry must appear as follows: The following is a sample organizational unit entry in LDIF format: Table B.3, "LDIF Elements in Organizational Unit Entries" defines each element of the LDIF-formatted organizational unit entry. Table B.3.
LDIF Elements in Organizational Unit Entries LDIF Element Description dn: distinguished_name Specifies the distinguished name for the entry. A DN is required. If there is a comma in the DN, the comma must be escaped with a backslash (\), such as dn: ou=people,dc=example,dc=com . objectClass: top Required. Specifies the top object class. objectClass: organizationalUnit Specifies the organizationalUnit object class. This line defines the entry as an organizational unit . See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes available for this object class. ou: organizational_unit_name Attribute that specifies the organizational unit's name. list_of_attributes Specifies the list of optional attributes to maintain for the entry. See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes available for this object class. B.4.3. Specifying Organizational Person Entries The majority of the entries in the directory represent organizational people. In LDIF, the definition of an organizational person is as follows: The following is an example organizational person entry in LDIF format: Table B.4, "LDIF Elements in Person Entries" defines each aspect of the LDIF person entry. Table B.4. LDIF Elements in Person Entries LDIF Element Description dn: distinguished_name Required. Specifies the distinguished name for the entry. For example, dn: uid=bjensen,ou=people,dc=example,dc=com . If there is a comma in the DN, the comma must be escaped with a backslash (\). objectClass: top Required. Specifies the top object class. objectClass: person Specifies the person object class. This object class specification should be included because many LDAP clients require it during search operations for a person or an organizational person. objectClass: organizationalPerson Specifies the organizationalPerson object class. This object class specification should be included because some LDAP clients require it during search operations for an organizational person. objectClass: inetOrgPerson Specifies the inetOrgPerson object class. The inetOrgPerson object class is recommended for the creation of an organizational person entry because this object class includes the widest range of attributes. The uid attribute is required by this object class, and entries that contain this object class are named based on the value of the uid attribute. See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes available for this object class. cn: common_name Specifies the person's common name, which is the full name commonly used by the person. For example, cn: Bill Anderson . At least one common name is required. sn: surname Specifies the person's surname, or last name. For example, sn: Anderson . A surname is required. list_of_attributes Specifies the list of optional attributes to maintain for the entry. See the Red Hat Directory Server 11 Configuration, Command, and File Reference for a list of the attributes available for this object class.
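To illustrate how the elements in these tables map to an actual directory operation (this Java sketch is not part of the original guide), the following JNDI code creates an entry equivalent to the sample organizational person LDIF; the connection details and password are placeholders.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.BasicAttribute;
import javax.naming.directory.BasicAttributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class CreatePersonEntry {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389"); // placeholder host
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=Directory Manager");
        env.put(Context.SECURITY_CREDENTIALS, "password"); // placeholder password

        DirContext ctx = new InitialDirContext(env);
        try {
            // Object classes required for an organizational person entry.
            Attribute objectClass = new BasicAttribute("objectclass");
            objectClass.add("top");
            objectClass.add("person");
            objectClass.add("organizationalPerson");
            objectClass.add("inetOrgPerson");

            Attributes attrs = new BasicAttributes(true); // ignore attribute name case
            attrs.put(objectClass);
            attrs.put("cn", "Babs Jensen"); // required common name
            attrs.put("sn", "Jensen");      // required surname
            attrs.put("givenname", "Babs");
            attrs.put("uid", "bjensen");    // inetOrgPerson entries are named by uid
            attrs.put("ou", "people");

            // The DN follows the same uid-based naming as the sample LDIF entry.
            ctx.createSubcontext("uid=bjensen,ou=people,dc=example,dc=com", attrs);
        } finally {
            ctx.close();
        }
    }
}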
[ "dn: distinguished_name objectClass: top objectClass: domain dc: domain_component_name list_of_optional_attributes", "dn: dc=example,dc=com objectclass: top objectclass: domain dc: example description: Fictional example company", "dn: distinguished_name objectClass: top objectClass: organizationalUnit ou: organizational_unit_name list_of_optional_attributes", "dn: ou=people,dc=example,dc=com objectclass: top objectclass: organizationalUnit ou: people description: Fictional example organizational unit", "dn: distinguished_name objectClass: top objectClass: person objectClass: organizationalPerson objectClass: inetOrgPerson cn: common_name sn: surname list_of_optional_attributes", "dn: uid=bjensen,ou=people,dc=example,dc=com objectclass: top objectclass: person objectclass: organizationalPerson objectclass: inetOrgPerson cn: Babs Jensen sn: Jensen givenname: Babs uid: bjensen ou: people description: Fictional example person telephoneNumber: 555-5557 userPassword: {SSHA}dkfljlk34r2kljdsfk9" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/LDAP_Data_Interchange_Format-Specifying_Directory_Entries_Using_LDIF
5.332. thunderbird
5.332. thunderbird 5.332.1. RHSA-2012:1351 - Critical: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2012-3982 , CVE-2012-3988 , CVE-2012-3990 , CVE-2012-3995 , CVE-2012-4179 , CVE-2012-4180 , CVE-2012-4181 , CVE-2012-4182 , CVE-2012-4183 , CVE-2012-4185 , CVE-2012-4186 , CVE-2012-4187 , CVE-2012-4188 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-3986 , CVE-2012-3991 Two flaws in Thunderbird could allow malicious content to bypass intended restrictions, possibly leading to information disclosure, or Thunderbird executing arbitrary code. Note that the information disclosure issue could possibly be combined with other flaws to achieve arbitrary code execution. CVE-2012-1956 , CVE-2012-3992 , CVE-2012-3994 Multiple flaws were found in the location object implementation in Thunderbird. Malicious content could be used to perform cross-site scripting attacks, script injection, or spoofing attacks. CVE-2012-3993 , CVE-2012-4184 Two flaws were found in the way Chrome Object Wrappers were implemented. Malicious content could be used to perform cross-site scripting attacks or cause Thunderbird to execute arbitrary code. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Christian Holler, Jesse Ruderman, Soroush Dalili, miaubiz, Abhishek Arya, Atte Kettunen, Johnny Stenback, Alice White, moz_bug_r_a4, and Mariusz Mlynski as the original reporters of these issues. Note: None of the issues in this advisory can be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 10.0.8 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 5.332.2. RHSA-2012:1362 - Critical: thunderbird security update An updated thunderbird package that fixes one security issue is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fix CVE-2012-4193 A flaw was found in the way Thunderbird handled security wrappers. Malicious content could cause Thunderbird to execute arbitrary code with the privileges of the user running Thunderbird. Red Hat would like to thank the Mozilla project for reporting this issue. Upstream acknowledges moz_bug_r_a4 as the original reporter. 
Note This issue cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. It could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which corrects this issue. After installing the update, Thunderbird must be restarted for the changes to take effect. 5.332.3. RHSA-2012:1413 - Important: thunderbird security update An updated thunderbird package that fixes multiple security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fix CVE-2012-4194 , CVE-2012-4195 , CVE-2012-4196 Multiple flaws were found in the location object implementation in Thunderbird. Malicious content could be used to perform cross-site scripting attacks, bypass the same-origin policy, or cause Thunderbird to execute arbitrary code. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Mariusz Mlynski, moz_bug_r_a4, and Antoine Delignat-Lavaud as the original reporters of these issues. Note None of the issues in this advisory can be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 10.0.10 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 5.332.4. RHSA-2012:1089 - Critical: thunderbird security update An updated thunderbird package that fixes multiple security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2012-1948 , CVE-2012-1951 , CVE-2012-1952 , CVE-2012-1953 , CVE-2012-1954 , CVE-2012-1958 , CVE-2012-1962 , CVE-2012-1967 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-1959 Malicious content could bypass same-compartment security wrappers (SCSW) and execute arbitrary code with chrome privileges. CVE-2012-1955 A flaw in the way Thunderbird called history.forward and history.back could allow an attacker to conceal a malicious URL, possibly tricking a user into believing they are viewing trusted content. CVE-2012-1957 A flaw in a parser utility class used by Thunderbird to parse feeds (such as RSS) could allow an attacker to execute arbitrary JavaScript with the privileges of the user running Thunderbird. This issue could have affected other Thunderbird components or add-ons that assume the class returns sanitized input. 
CVE-2012-1961 A flaw in the way Thunderbird handled X-Frame-Options headers could allow malicious content to perform a clickjacking attack. CVE-2012-1963 A flaw in the way Content Security Policy (CSP) reports were generated by Thunderbird could allow malicious content to steal a victim's OAuth 2.0 access tokens and OpenID credentials. CVE-2012-1964 A flaw in the way Thunderbird handled certificate warnings could allow a man-in-the-middle attacker to create a crafted warning, possibly tricking a user into accepting an arbitrary certificate as trusted. The nss update RHBA-2012:0337 for Red Hat Enterprise Linux 5 and 6 introduced a mitigation for the CVE-2011-3389 flaw. For compatibility reasons, it remains disabled by default in the nss packages. This update makes Thunderbird enable the mitigation by default. It can be disabled by setting the NSS_SSL_CBC_RANDOM_IV environment variable to 0 before launching Thunderbird. (BZ# 838879 ) Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Benoit Jacob, Jesse Ruderman, Christian Holler, Bill McCloskey, Abhishek Arya, Arthur Gerkis, Bill Keese, moz_bug_r_a4, Bobby Holley, Mariusz Mlynski, Mario Heiderich, Frederic Buclin, Karthikeyan Bhargavan, and Matt McCutchen as the original reporters of these issues. Note: None of the issues in this advisory can be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 10.0.6 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 5.332.5. RHSA-2013:0145 - Critical: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2013-0744 , CVE-2013-0746 , CVE-2013-0750 , CVE-2013-0753 , CVE-2013-0754 , CVE-2013-0762 , CVE-2013-0766 , CVE-2013-0767 , CVE-2013-0769 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-0758 A flaw was found in the way Chrome Object Wrappers were implemented. Malicious content could be used to cause Thunderbird to execute arbitrary code via plug-ins installed in Thunderbird. CVE-2013-0759 A flaw in the way Thunderbird displayed URL values could allow malicious content or a user to perform a phishing attack. CVE-2013-0748 An information disclosure flaw was found in the way certain JavaScript functions were implemented in Thunderbird. An attacker could use this flaw to bypass Address Space Layout Randomization (ASLR) and other security restrictions. Red Hat would like to thank the Mozilla project for reporting these issues. 
Upstream acknowledges Atte Kettunen, Boris Zbarsky, pa_kt, regenrecht, Abhishek Arya, Christoph Diehl, Christian Holler, Mats Palmgren, Chiaki Ishikawa, Mariusz Mlynski, Masato Kinugawa, and Jesse Ruderman as the original reporters of these issues. Note All issues except CVE-2013-0744, CVE-2013-0753, and CVE-2013-0754 cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 10.0.12 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 5.332.6. RHSA-2012:1483 - Critical: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2012-4214 , CVE-2012-4215 , CVE-2012-4216 , CVE-2012-5829 , CVE-2012-5830 , CVE-2012-5833 , CVE-2012-5835 , CVE-2012-5839 , CVE-2012-5840 , CVE-2012-5842 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-4202 A buffer overflow flaw was found in the way Thunderbird handled GIF (Graphics Interchange Format) images. Content containing a malicious GIF image could cause Thunderbird to crash or, possibly, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-4207 A flaw was found in the way Thunderbird decoded the HZ-GB-2312 character encoding. Malicious content could cause Thunderbird to run JavaScript code with the permissions of different content. CVE-2012-4209 A flaw was found in the location object implementation in Thunderbird. Malicious content could possibly use this flaw to allow restricted content to be loaded by plug-ins. CVE-2012-5841 A flaw was found in the way cross-origin wrappers were implemented. Malicious content could use this flaw to perform cross-site scripting attacks. CVE-2012-4201 A flaw was found in the evalInSandbox implementation in Thunderbird. Malicious content could use this flaw to perform cross-site scripting attacks. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Abhishek Arya, miaubiz, Jesse Ruderman, Andrew McCreight, Bob Clary, Kyle Huey, Atte Kettunen, Masato Kinugawa, Mariusz Mlynski, Bobby Holley, and moz_bug_r_a4 as the original reporters of these issues. Note All issues except CVE-2012-4202 cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 10.0.11 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 5.332.7. 
RHSA-2012:1211 - Critical: thunderbird security update An updated thunderbird package that fixes multiple security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2012-1970 , CVE-2012-1972 , CVE-2012-1973 , CVE-2012-1974 , CVE-2012-1975 , CVE-2012-1976 , CVE-2012-3956 , CVE-2012-3957 , CVE-2012-3958 , CVE-2012-3959 , CVE-2012-3960 , CVE-2012-3961 , CVE-2012-3962 , CVE-2012-3963 , CVE-2012-3964 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-3969 , CVE-2012-3970 Content containing a malicious Scalable Vector Graphics (SVG) image file could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-3967 , CVE-2012-3968 Two flaws were found in the way Thunderbird rendered certain images using WebGL. Malicious content could cause Thunderbird to crash or, under certain conditions, possibly execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-3966 A flaw was found in the way Thunderbird decoded embedded bitmap images in Icon Format (ICO) files. Content containing a malicious ICO file could cause Thunderbird to crash or, under certain conditions, possibly execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-3980 A flaw was found in the way the "eval" command was handled by the Thunderbird Error Console. Running "eval" in the Error Console while viewing malicious content could possibly cause Thunderbird to execute arbitrary code with the privileges of the user running Thunderbird. CVE-2012-3972 An out-of-bounds memory read flaw was found in the way Thunderbird used the format-number feature of XSLT (Extensible Stylesheet Language Transformations). Malicious content could possibly cause an information leak, or cause Thunderbird to crash. CVE-2012-3978 A flaw was found in the location object implementation in Thunderbird. Malicious content could use this flaw to possibly allow restricted content to be loaded. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Gary Kwong, Christian Holler, Jesse Ruderman, John Schoenick, Vladimir Vukicevic, Daniel Holbert, Abhishek Arya, Frederic Hoguin, miaubiz, Arthur Gerkis, Nicolas Gregoire, moz_bug_r_a4, and Colby Russell as the original reporters of these issues. Note: All issues except CVE-2012-3969 and CVE-2012-3970 cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 10.0.7 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect. 5.332.8. 
RHSA-2013:0272 - Critical: thunderbird security update An updated thunderbird package that fixes several security issues is now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Mozilla Thunderbird is a standalone mail and newsgroup client. Security Fixes CVE-2013-0775 , CVE-2013-0780 , CVE-2013-0782 , CVE-2013-0783 Several flaws were found in the processing of malformed content. Malicious content could cause Thunderbird to crash or, potentially, execute arbitrary code with the privileges of the user running Thunderbird. CVE-2013-0776 It was found that, after canceling a proxy server's authentication prompt, the address bar continued to show the requested site's address. An attacker could use this flaw to conduct phishing attacks by tricking a user into believing they are viewing trusted content. Red Hat would like to thank the Mozilla project for reporting these issues. Upstream acknowledges Nils, Abhishek Arya, Olli Pettay, Christoph Diehl, Gary Kwong, Jesse Ruderman, Andrew McCreight, Joe Drew, Wayne Mery, and Michal Zalewski as the original reporters of these issues. Note All issues cannot be exploited by a specially-crafted HTML mail message as JavaScript is disabled by default for mail messages. They could be exploited another way in Thunderbird, for example, when viewing the full remote content of an RSS feed. Important This erratum upgrades Thunderbird to version 17.0.3 ESR. Thunderbird 17 is not completely backwards-compatible with all Mozilla add-ons and Thunderbird plug-ins that worked with Thunderbird 10.0. Thunderbird 17 checks compatibility on first-launch, and, depending on the individual configuration and the installed add-ons and plug-ins, may disable said Add-ons and plug-ins, or attempt to check for updates and upgrade them. Add-ons and plug-ins may have to be manually updated. All Thunderbird users should upgrade to this updated package, which contains Thunderbird version 17.0.3 ESR, which corrects these issues. After installing the update, Thunderbird must be restarted for the changes to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/thunderbird
5.5. Configure CXF for a Web Service Data Source: Transport Settings
5.5. Configure CXF for a Web Service Data Source: Transport Settings CXF configuration can also control low-level aspects of the HTTP transport. Prerequisites The web service data source must be configured and the ConfigFile and EndPointName properties must be configured for CXF. Procedure 5.4. Configure CXF for a Web Service Data Source: Transport Settings Open the CXF configuration file for the web service data source and add your desired transport properties. The following is an example of a CXF configuration file for a web service data source that disables hostname verification: Warning disableCNcheck=true must NOT be used in production.
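The same transport setting can also be applied programmatically through the CXF client API rather than in the Spring configuration file; the sketch below is illustrative (it assumes you already have a JAX-WS proxy object for the service) and, exactly like the XML example, disabling the CN check must not be used in production.

import org.apache.cxf.configuration.jsse.TLSClientParameters;
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;

public final class DisableCnCheck {

    private DisableCnCheck() {
    }

    // Apply the transport-level TLS settings to an existing JAX-WS proxy (testing only).
    public static void configure(Object port) {
        Client client = ClientProxy.getClient(port);
        HTTPConduit conduit = (HTTPConduit) client.getConduit();

        TLSClientParameters tls = new TLSClientParameters();
        tls.setDisableCNCheck(true); // equivalent of disableCNcheck="true" in the XML - NOT for production
        conduit.setTlsClientParameters(tls);
    }
}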
[ "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:jaxws=\"http://cxf.apache.org/jaxws\" xmlns:http-conf=\"http://cxf.apache.org/transports/http/configuration\" xsi:schemaLocation=\"http://cxf.apache.org/transports/http/configuration http://cxf.apache.org/schemas/configuration/http-conf.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd\"> <http-conf:conduit name=\"{http://teiid.org}teiid.http-conduit\"> <http-conf:tlsClientParameters disableCNcheck=\"true\" /> </http-conf:conduit> </beans>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/configure_cxf_for_a_web_service_data_source_transport_settings
1.3. Virtualization Performance Features and Improvements
1.3. Virtualization Performance Features and Improvements Virtualization Performance Improvements in Red Hat Enterprise Linux 7 The following features improve virtualization performance in Red Hat Enterprise Linux 7: Automatic NUMA Balancing Automatic NUMA balancing improves the performance of applications running on NUMA hardware systems, without any manual tuning required for Red Hat Enterprise Linux 7 guests. Automatic NUMA balancing moves tasks, which can be threads or processes, closer to the memory they are accessing. This enables good performance with zero configuration. However, in some circumstances, providing more accurate guest configuration or setting up guest to host affinities for CPUs and memory may provide better results. For more information on automatic NUMA balancing, see Section 9.2, "Automatic NUMA Balancing" . VirtIO models Any virtual hardware that has the virtio model does not have the overhead of emulating the hardware with all its particularities. VirtIO devices have low overhead thanks to the fact that they are designed specifically for use in Virtualization environments. However not all guest operating systems support such models. Multi-queue virtio-net A networking approach that enables packet sending/receiving processing to scale with the number of available vCPUs of the guest. For more information on multi-queue virtio-net, see Section 5.4.2, "Multi-Queue virtio-net" . Bridge Zero Copy Transmit Zero copy transmit mode reduces the host CPU overhead in transmitting large packets between a guest network and an external network by up to 15%, without affecting throughput. Bridge zero copy transmit is fully supported on Red Hat Enterprise Linux 7 virtual machines, but disabled by default. For more information on zero copy transmit, see Section 5.4.1, "Bridge Zero Copy Transmit" . APIC Virtualization (APICv) Newer Intel processors offer hardware virtualization of the Advanced Programmable Interrupt Controller (APICv). APICv improves virtualized AMD64 and Intel 64 guest performance by allowing the guest to directly access the APIC, dramatically cutting down interrupt latencies and the number of virtual machine exits caused by the APIC. This feature is used by default in newer Intel processors and improves I/O performance. EOI Acceleration End-of-interrupt acceleration for high bandwidth I/O on older chipsets without virtual APIC capabilities. Multi-queue virtio-scsi Improved storage performance and scalability provided by multi-queue support in the virtio-scsi driver. This enables each virtual CPU to have a separate queue and interrupt to use without affecting other vCPUs. For more information on multi-queue virtio-scsi, see Section 7.4.2, "Multi-Queue virtio-scsi" . Paravirtualized Ticketlocks Paravirtualized ticketlocks (pvticketlocks) improve the performance of Red Hat Enterprise Linux 7 guest virtual machines running on Red Hat Enterprise Linux 7 hosts with oversubscribed CPUs. Paravirtualized Page Faults Paravirtualized page faults are injected into a guest when it attempts to access a page swapped out by the host. This improves KVM guest performance when host memory is overcommitted and guest memory is swapped out. Paravirtualized Time vsyscall Optimization The gettimeofday and clock_gettime system calls execute in the user space through the vsyscall mechanism. Previously, issuing these system calls required the system to switch into kernel mode, and then back into the user space. This greatly improves performance for some applications. 
Virtualization Performance Features in Red Hat Enterprise Linux CPU/Kernel NUMA - Non-Uniform Memory Access. See Chapter 9, NUMA for details on NUMA. CFS - Completely Fair Scheduler. A modern class-focused scheduler. RCU - Read Copy Update. Better handling of shared thread data. Up to 160 virtual CPUs (vCPUs). Memory huge pages and other optimizations for memory-intensive environments. See Chapter 8, Memory for details. Networking vhost-net - A fast, kernel-based VirtIO solution. SR-IOV - For near-native networking performance levels. Block I/O AIO - Support for a thread to overlap other I/O operations. MSI - PCI bus device interrupt generation. Disk I/O throttling - Controls on guest disk I/O requests to prevent over-utilizing host resources. See Section 7.4.1, "Disk I/O Throttling" for details. Note For more details on virtualization support, limits, and features, see the Red Hat Enterprise Linux 7 Virtualization Getting Started Guide and the following URLs: https://access.redhat.com/certified-hypervisors https://access.redhat.com/articles/rhel-kvm-limits
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-introduction-7_improvements
25.4. Fibre Channel
25.4. Fibre Channel This section discusses the Fibre Channel API, native Red Hat Enterprise Linux 7 Fibre Channel drivers, and the Fibre Channel capabilities of these drivers. 25.4.1. Fibre Channel API Following is a list of /sys/class/ directories that contain files used to provide the userspace API. In each item, host numbers are designated by H , bus numbers are B , targets are T , logical unit numbers (LUNs) are L , and remote port numbers are R . Important If your system is using multipath software, Red Hat recommends that you consult your hardware vendor before changing any of the values described in this section. Transport: /sys/class/fc_transport/target H : B : T / port_id - 24-bit port ID/address node_name - 64-bit node name port_name - 64-bit port name Remote Port: /sys/class/fc_remote_ports/rport- H : B - R / port_id node_name port_name dev_loss_tmo : controls when the scsi device gets removed from the system. After dev_loss_tmo triggers, the scsi device is removed. In multipath.conf , you can set dev_loss_tmo to infinity , which sets its value to 2,147,483,647 seconds, or 68 years, and is the maximum dev_loss_tmo value. In Red Hat Enterprise Linux 7, if you do not set the fast_io_fail_tmo option, dev_loss_tmo is capped to 600 seconds. By default, fast_io_fail_tmo is set to 5 seconds in Red Hat Enterprise Linux 7 if the multipathd service is running; otherwise, it is set to off . fast_io_fail_tmo : specifies the number of seconds to wait before it marks a link as "bad". Once a link is marked bad, existing running I/O or any new I/O on its corresponding path fails. If I/O is in a blocked queue, it will not be failed until dev_loss_tmo expires and the queue is unblocked. If fast_io_fail_tmo is set to any value except off , dev_loss_tmo is uncapped. If fast_io_fail_tmo is set to off , no I/O fails until the device is removed from the system. If fast_io_fail_tmo is set to a number, I/O fails immediately when the fast_io_fail_tmo timeout triggers. Host: /sys/class/fc_host/host H / port_id issue_lip : instructs the driver to rediscover remote ports. 25.4.2. Native Fibre Channel Drivers and Capabilities Red Hat Enterprise Linux 7 ships with the following native Fibre Channel drivers: lpfc qla2xxx zfcp bfa Important The qla2xxx driver runs in initiator mode by default. To use qla2xxx with Linux-IO, enable Fibre Channel target mode with the corresponding qlini_mode module parameter. First, make sure that the firmware package for your qla device, such as ql2200-firmware or similar, is installed. To enable target mode, add the following parameter to the /usr/lib/modprobe.d/qla2xxx.conf qla2xxx module configuration file: Then, use the dracut -f command to rebuild the initial ramdisk ( initrd ), and reboot the system for the changes to take effect. Table 25.1, "Fibre Channel API Capabilities" describes the different Fibre Channel API capabilities of each native Red Hat Enterprise Linux 7 driver. X denotes support for the capability. Table 25.1. Fibre Channel API Capabilities lpfc qla2xxx zfcp bfa Transport port_id X X X X Transport node_name X X X X Transport port_name X X X X Remote Port dev_loss_tmo X X X X Remote Port fast_io_fail_tmo X X [a] X [b] X Host port_id X X X X Host issue_lip X X X [a] Supported as of Red Hat Enterprise Linux 5.4 [b] Supported as of Red Hat Enterprise Linux 6.0
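As a side note that is not part of the original guide, the sysfs attributes listed above are plain text files, so they can be read from any language for monitoring purposes. The following minimal Java sketch prints a few attributes of one remote port; the rport name is a placeholder and the paths only exist on hosts with Fibre Channel hardware.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FcRemotePortInfo {
    public static void main(String[] args) throws IOException {
        // Placeholder remote port name in the rport-H:B-R form described above.
        String rport = args.length > 0 ? args[0] : "rport-0:0-1";
        Path base = Path.of("/sys/class/fc_remote_ports", rport);

        for (String attr : new String[] {"port_id", "port_name", "node_name",
                                         "dev_loss_tmo", "fast_io_fail_tmo"}) {
            Path file = base.resolve(attr);
            if (Files.isReadable(file)) {
                System.out.println(attr + " = " + Files.readString(file).trim());
            } else {
                System.out.println(attr + " is not readable or does not exist for " + rport);
            }
        }
    }
}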
[ "options qla2xxx qlini_mode=disabled" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-fibrechanel
System-Level Authentication Guide
System-Level Authentication Guide Red Hat Enterprise Linux 7 Using applications and services to configure authentication on local systems Florian Delehaye Red Hat Customer Content Services [email protected] Marc Muehlfeld Red Hat Customer Content Services Filip Hanzelka Red Hat Customer Content Services Lucie Manaskova Red Hat Customer Content Services Aneta Steflova Petrova Red Hat Customer Content Services Tomas Capek Red Hat Customer Content Services Ella Deon Ballard Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/index
34.2.3. Viewing Pending Jobs
34.2.3. Viewing Pending Jobs To view pending at and batch jobs, use the atq command. The atq command displays a list of pending jobs, one job per line. Each line shows the job number, date, hour, job class, and username. Users can only view their own jobs. If the root user executes the atq command, all jobs for all users are displayed.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/at_and_batch-viewing_pending_jobs
Linux Domain Identity, Authentication, and Policy Guide
Linux Domain Identity, Authentication, and Policy Guide Red Hat Enterprise Linux 7 Using Red Hat Identity Management in Linux environments Florian Delehaye Red Hat Customer Content Services [email protected] Marc Muehlfeld Red Hat Customer Content Services Filip Hanzelka Red Hat Customer Content Services Lucie Manaskova Red Hat Customer Content Services Aneta Steflova Petrova Red Hat Customer Content Services Tomas Capek Red Hat Customer Content Services Ella Deon Ballard Red Hat Customer Content Services
[ "lookup_family_order = ipv4_only", "hostname server.example.com", "ip addr show 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:1a:4a:10:4e:33 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1 /24 brd 192.0.2.255 scope global dynamic eth0 valid_lft 106694sec preferred_lft 106694sec inet6 2001:DB8::1111 /32 scope global dynamic valid_lft 2591521sec preferred_lft 604321sec inet6 fe80::56ee:75ff:fe2b:def6/64 scope link valid_lft forever preferred_lft forever", "dig +short server.example.com A 192.0.2.1", "dig +short server.example.com AAAA 2001:DB8::1111", "dig +short -x 192.0.2.1 server.example.com", "dig +short -x 2001:DB8::1111 server.example.com", "dig +dnssec @ IP_address_of_the_DNS_forwarder . SOA", ";; ->>HEADER<<- opcode: QUERY, status: NOERROR , id: 48655 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; ANSWER SECTION: . 31679 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2015100701 1800 900 604800 86400 . 31679 IN RRSIG SOA 8 0 86400 20151017170000 20151007160000 62530 . GNVz7SQs [...]", "127.0.0.1 localhost.localdomain localhost ::1 localhost6.localdomain6 localhost6 192.0.2.1 server.example.com server 2001:DB8::1111 server.example.com server", "systemctl status firewalld.service", "systemctl start firewalld.service systemctl enable firewalld.service", "firewall-cmd --permanent --add-port={80/tcp,443/tcp, list_of_ports }", "firewall-cmd --permanent --add-service={freeipa-ldap, list_of_services }", "firewall-cmd --reload", "firewall-cmd --runtime-to-permanent --add-port={80/tcp,443/tcp,389/tcp,636/tcp,88/tcp,88/udp,464/tcp,464/udp,53/tcp,53/udp,123/udp}", "yum install ipa-server", "yum install ipa-server ipa-server-dns", "dig @ IP_address +norecurse +short ipa.example.com. NS", "acl authorized { 192.0.2.0/24 ; 198.51.100.0/24 ; }; options { allow-query { any; }; allow-recursion { authorized ; }; };", "ipa-server-install --auto-reverse --allow-zone-overlap", "ipa-server-install", "Do you want to configure integrated DNS (BIND)? [no]: yes", "Server host name [server.example.com]: Please confirm the domain name [example.com]: Please provide a realm name [EXAMPLE.COM]:", "Directory Manager password: IPA admin password:", "Do you want to configure DNS forwarders? [yes]:", "Do you want to search for missing reverse zones? [yes]:", "Do you want to create reverse zone for IP 192.0.2.1 [yes]: Please specify the reverse zone name [2.0.192.in-addr.arpa.]: Using reverse zone(s) 2.0.192.in-addr.arpa.", "Continue to configure the system with these values? [no]: yes", "kinit admin", "ipa user-find admin -------------- 1 user matched -------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 939000000 GID: 939000000 Account disabled: False Password: True Kerberos keys available: True ---------------------------- Number of entries returned 1 ----------------------------", "ipa-server-install", "Do you want to configure integrated DNS (BIND)? [no]:", "Server host name [server.example.com]: Please confirm the domain name [example.com]: Please provide a realm name [EXAMPLE.COM]:", "Directory Manager password: IPA admin password:", "Continue to configure the system with these values? 
[no]: yes", "Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server", "kinit admin", "ipa user-find admin -------------- 1 user matched -------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 939000000 GID: 939000000 Account disabled: False Password: True Kerberos keys available: True ---------------------------- Number of entries returned 1 ----------------------------", "Configuring certificate server (pki-tomcatd): Estimated time 3 minutes 30 seconds [1/8]: creating certificate server user [2/8]: configuring certificate server instance The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-server-install as: /sbin/ipa-server-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate", "ipa-server-install --external-cert-file= /tmp/servercert20110601.pem --external-cert-file= /tmp/cacert.pem", "ipa : CRITICAL failed to configure ca instance Command '/usr/sbin/pkispawn -s CA -f /tmp/ configuration_file ' returned non-zero exit status 1 Configuration of CA failed", "ipa-server-install --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --ca-cert-file ca.crt", "ipa-server-install --realm EXAMPLE.COM --ds-password DM_password --admin-password admin_password --unattended", "Restarting the KDC Please add records in this file to your DNS system: /tmp/ipa.system.records.UFRBto.db Restarting the web server", "kinit admin", "ipa user-find admin -------------- 1 user matched -------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 939000000 GID: 939000000 Account disabled: False Password: True Kerberos keys available: True ---------------------------- Number of entries returned 1 ----------------------------", "ipa server-del server.example.com", "ipa-server-install --uninstall", "ipactl stop", "yum install ipa-client", "Client hostname: client.example.com Realm: EXAMPLE.COM DNS Domain: example.com IPA Server: server.example.com BaseDN: dc=example,dc=com Continue to configure the system with these values? 
[no]: yes", "User authorized to enroll computers: admin Password for [email protected]", "Client configuration complete.", "kinit admin", "ipa host-add client.example.com --random -------------------------------------------------- Added host \"client.example.com\" -------------------------------------------------- Host name: client.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.example.com", "ipa-client-install --password 'W5YpARl=7M.n' --domain example.com --server server.example.com --unattended", "kinit admin", "ipa host-add client.example.com --password= secret", "%packages @ X Window System @ Desktop @ Sound and Video ipa-client", "%post --log=/root/ks-post.log Generate SSH keys to ensure that ipa-client-install uploads them to the IdM server /usr/sbin/sshd-keygen Run the client install script /usr/sbin/ipa-client-install --hostname= client.example.com --domain= EXAMPLE.COM --enable-dns-updates --mkhomedir -w secret --realm= EXAMPLE.COM --server= server.example.com", "env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null getcert list env DBUS_SYSTEM_BUS_ADDRESS=unix:path=/dev/null ipa-client-install", "BASE dc=example,dc=com URI ldap://ldap.example.com #URI ldaps://server.example.com # modified by IPA #BASE dc=ipa,dc=example,dc=com # modified by IPA", "[user@client ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)", "ipa-client-install --uninstall", "ipa-client-install --force-join", "User authorized to enroll computers: admin Password for [email protected]", "ipa-client-install --keytab /tmp/krb5.keytab", "ipa service-find client.example.com", "ipa hostgroup-find client.example.com", "ipa-rmkeytab -k /path/to/keytab -r EXAMPLE.COM", "ipa host-del client.example.com", "ipa service-add service_name/new_host_name", "kinit admin", "ipa-replica-install --principal admin --admin-password admin_password", "ipa-replica-install --principal admin --admin-password admin_password", "kinit admin", "ipa hostgroup-add-member ipaservers --hosts client.example.com Host-group: ipaservers Description: IPA server hosts Member hosts: server.example.com, client.example.com ------------------------- Number of members added 1 -------------------------", "ipa-replica-install", "kinit admin", "ipa host-add client.example.com --random -------------------------------------------------- Added host \"client.example.com\" -------------------------------------------------- Host name: client.example.com Random password: W5YpARl=7M.n Password: True Keytab: False Managed by: server.example.com", "ipa hostgroup-add-member ipaservers --hosts client.example.com Host-group: ipaservers Description: IPA server hosts Member hosts: server.example.com, client.example.com ------------------------- Number of members added 1 -------------------------", "ipa-replica-install --password ' W5YpARl=7M.n '", "ipa-replica-install --setup-dns --forwarder 192.0.2.1", "DOMAIN= example.com NAMESERVER= replica", "for i in _ldap._tcp _kerberos._tcp _kerberos._udp _kerberos-master._tcp _kerberos-master._udp _ntp._udp ; do dig @USD{NAMESERVER} USD{i}.USD{DOMAIN} srv +nocmd +noquestion +nocomments +nostats +noaa +noadditional +noauthority done | egrep \"^_\" _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server1.example.com. _ldap._tcp.example.com. 86400 IN SRV 0 100 389 server2.example.com. _kerberos._tcp.example.com. 
86400 IN SRV 0 100 88 server1.example.com.", "ipa-replica-install --setup-ca", "ipa-replica-install --dirsrv-cert-file /tmp/server.crt --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret", "[admin@server1 ~]USD ipa user-add test_user --first= Test --last= User", "[admin@server2 ~]USD ipa user-show test_user", "ipactl start", "ipactl stop", "ipactl restart", "[local_user@server ~]USD kinit Password for [email protected]:", "[local_user@server ~]USD kinit admin Password for [email protected]:", "klist Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 11/10/2015 08:35:45 11/10/2015 18:35:45 krbtgt/[email protected]", "ipa user-add user_name", "ipa help topics automember Auto Membership Rule. automount Automount caacl Manage CA ACL rules.", "ipa help automember Auto Membership Rule. Bring clarity to the membership of hosts and users by configuring inclusive or exclusive regex patterns, you can automatically assign a new entries into a group or hostgroup based upon attribute information. EXAMPLES: Add the initial group or hostgroup: ipa hostgroup-add --desc=\"Web Servers\" webservers ipa group-add --desc=\"Developers\" devel", "ipa help commands automember-add Add an automember rule. automember-add-condition Add conditions to an automember rule.", "ipa automember-add --help Usage: ipa [global-options] automember-add AUTOMEMBER-RULE [options] Add an automember rule. Options: -h, --help show this help message and exit --desc=STR A description of this auto member rule", "ipaUserSearchFields: uid,givenname,sn,telephonenumber,ou,title", "ipa permission-add --permissions=read --permissions=write --permissions=delete", "ipa permission-add --permissions={read,write,delete}", "ipa certprofile-show certificate_profile --out= exported\\*profile.cfg", "ipa user-find --------------- 4 users matched ---------------", "ipa group-find keyword ---------------- 2 groups matched ----------------", "ipa group-find --user= user_name", "ipa group-find --no-user= user_name", "ipa host-show server.example.com Host name: server.example.com Principal name: host/[email protected]", "ipa config-mod --searchrecordslimit=500 --searchtimelimit=5", "ipa user-find --sizelimit=200 --timelimit=120", "https://server.example.com", "[admin@server ~]USD ipa idoverrideuser-add 'Default Trust View' [email protected]", "ipa-client-install --configure-firefox", "scp /etc/krb5.conf root@ externalmachine.example.com :/etc/krb5_ipa.conf", "export KRB5_CONFIG=/etc/krb5_ipa.conf", "ipa topologysuffix-find --------------------------- 2 topology suffixes matched --------------------------- Suffix name: ca Managed LDAP suffix DN: o=ipaca Suffix name: domain Managed LDAP suffix DN: dc=example,dc=com ---------------------------- Number of entries returned 2 ----------------------------", "ipa topologysegment-find Suffix name: domain ----------------- 1 segment matched ----------------- Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 1 ----------------------------", "ipa topologysegment-show Suffix name: domain Segment name: server1.example.com-to-server2.example.com Segment name: server1.example.com-to-server2.example.com Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa help topology", "ipa topologysuffix-show 
--help", "ipa topologysegment-add Suffix name: domain Left node: server1.example.com Right node: server2.example.com Segment name [server1.example.com-to-server2.example.com]: new_segment --------------------------- Added segment \"new_segment\" --------------------------- Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-show Suffix name: domain Segment name: new_segment Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both", "ipa topologysegment-find Suffix name: domain ------------------ 8 segments matched ------------------ Segment name: new_segment Left node: server1.example.com Right node: server2.example.com Connectivity: both ---------------------------- Number of entries returned 8 ----------------------------", "ipa topologysegment-del Suffix name: domain Segment name: new_segment ----------------------------- Deleted segment \"new_segment\" -----------------------------", "ipa topologysegment-find Suffix name: domain ------------------ 7 segments matched ------------------ Segment name: server2.example.com-to-server3.example.com Left node: server2.example.com Right node: server3.example.com Connectivity: both ---------------------------- Number of entries returned 7 ----------------------------", "ipa server-del Server name: server1.example.com Removing server1.example.com from replication topology, please wait ipa: ERROR: Server removal aborted: Removal of 'server1.example.com' leads to disconnected topology in suffix 'domain': Topology does not allow server server2.example.com to replicate with servers: server3.example.com server4.example.com", "[user@server2 ~]USD ipa server-del Server name: server1.example.com Removing server1.example.com from replication topology, please wait ---------------------------------------------------------- Deleted IPA server \"server1.example.com\" ----------------------------------------------------------", "ipa server-install --uninstall", "ipa config-show IPA masters: server1.example.com, server2.example.com, server3.example.com IPA CA servers: server1.example.com, server2.example.com IPA NTP servers: server1.example.com, server2.example.com, server3.example.com IPA CA renewal master: server1.example.com", "ipa server-show Server name: server.example.com Enabled server roles: CA server, DNS server, NTP server, KRA server", "ipa server-find --servrole \"CA server\" --------------------- 2 IPA servers matched --------------------- Server name: server1.example.com Server name: server2.example.com ---------------------------- Number of entries returned 2 ----------------------------", "ipa config-mod --ca-renewal-master-server new_ca_renewal_master.example.com IPA masters: old_ca_renewal_master.example.com, new_ca_renewal_master.example.com IPA CA servers: old_ca_renewal_master.example.com, new_ca_renewal_master.example.com IPA NTP servers: old_ca_renewal_master.example.com, new_ca_renewal_master.example.com IPA CA renewal master: new_ca_renewal_master.example.com", "ipa-crlgen-manage status CRL generation: enabled", "ipa-crlgen-manage disable", "ipa-crlgen-manage enable", "ipa server-state replica.idm.example.com --state=hidden", "ipa server-state replica.idm.example.com --state=enabled", "kinit admin", "ipa domainlevel-get ----------------------- Current domain level: 0 -----------------------", "kinit admin", "ipa domainlevel-set 1 ----------------------- Current domain level: 1 -----------------------", "yum 
update ipa-*", "NSSProtocol TLSv1.0,TLSv1.1,TLSv1.2", "systemctl restart httpd.service", "getcert list -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" | grep post-save post-save command: /usr/lib64/ipa/certmonger/renew_ca_cert \"subsystemCert cert-pki-ca\"", "yum update ipa-*", "scp /usr/share/ipa/copy-schema-to-ca.py root@rhel6:/root/", "python copy-schema-to-ca.py ipa : INFO Installed /etc/dirsrv/slapd-PKI-IPA//schema/60kerberos.ldif [... output truncated ...] ipa : INFO Schema updated successfully", "ipa-replica-prepare rhel7.example.com --ip-address 192.0.2.1 Directory Manager (existing master) password: Preparing replica for rhel7.example.com from rhel6.example.com [... output truncated ...] The ipa-replica-prepare command was successful", "scp /var/lib/ipa/replica-info-replica.example.com.gpg root@rhel7:/var/lib/ipa/", "+ecdhe_rsa_aes_128_sha,+ecdhe_rsa_aes_256_sha", "ipa-replica-install /var/lib/ipa/replica-info-rhel7.example.com.gpg --setup-ca --ip-address 192.0.2.1 --setup-dns --forwarder 192.0.2.20 Directory Manager (existing master) password: Checking DNS forwarders, please wait Run connection check to master [... output truncated ...] Client configuration complete.", "ipactl status Directory Service: RUNNING [... output truncated ...] ipa: INFO: The ipactl command was successful", "[root@rhel7 ~]USD kinit admin [root@rhel7 ~]USD ipa-csreplica-manage list rhel6.example.com: master rhel7.example.com: master", "ipa-csreplica-manage list --verbose rhel7.example.com rhel7.example.com last init status: None last init ended: 1970-01-01 00:00:00+00:00 last update status: Error (0) Replica acquired successfully: Incremental update succeeded last update ended: 2017-02-13 13:55:13+00:00", "getcert stop-tracking -d /var/lib/pki-ca/alias -n \"auditSigningCert cert-pki-ca\" Request \"20201127184547\" removed. getcert stop-tracking -d /var/lib/pki-ca/alias -n \"ocspSigningCert cert-pki-ca\" Request \"20201127184548\" removed. getcert stop-tracking -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" Request \"20201127184549\" removed. getcert stop-tracking -d /etc/httpd/alias -n ipaCert Request \"20201127184550\" removed.", "cp /usr/share/ipa/ca_renewal /var/lib/certmonger/cas/ chmod 0600 /var/lib/certmonger/cas/ca_renewal", "restorecon /var/lib/certmonger/cas/ca_renewal", "service certmonger restart", "getcert list-cas CA 'dogtag-ipa- retrieve -agent-submit': is-default: no ca-type: EXTERNAL helper-location: /usr/libexec/certmonger/dogtag-ipa-retrieve-agent-submit", "grep internal= /var/lib/pki-ca/conf/password.conf", "getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n \"auditSigningCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad \"auditSigningCert cert-pki-ca\"' -T \"auditSigningCert cert-pki-ca\" -P database_pin New tracking request \"20201127184743\" added. getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n \"ocspSigningCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad \"ocspSigningCert cert-pki-ca\"' -T \"ocspSigningCert cert-pki-ca\" -P database_pin New tracking request \"20201127184744\" added. 
getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /var/lib/pki-ca/alias -n \"subsystemCert cert-pki-ca\" -B /usr/lib64/ipa/certmonger/stop_pkicad -C '/usr/lib64/ipa/certmonger/restart_pkicad \"subsystemCert cert-pki-ca\"' -T \"subsystemCert cert-pki-ca\" -P database_pin New tracking request \"20201127184745\" added. getcert start-tracking -c dogtag-ipa-retrieve-agent-submit -d /etc/httpd/alias -n ipaCert -C /usr/lib64/ipa/certmonger/restart_httpd -T ipaCert -p /etc/httpd/alias/pwdfile.txt New tracking request \"20201127184746\" added.", "service pki-cad stop", "ca.crl.MasterCRL.enableCRLCache= false ca.crl.MasterCRL.enableCRLUpdates= false", "service pki-cad start", "RewriteRule ^/ipa/crl/MasterCRL.bin https://rhel6.example.com/ca/ee/ca/getCRL?op=getCRL&crlIssuingPoint=MasterCRL [L,R=301,NC]", "service httpd restart", "ipactl stop Stopping CA Service Stopping pki-ca: [ OK ] Stopping HTTP Service Stopping httpd: [ OK ] Stopping MEMCACHE Service Stopping ipa_memcached: [ OK ] Stopping DNS Service Stopping named: . [ OK ] Stopping KPASSWD Service Stopping Kerberos 5 Admin Server: [ OK ] Stopping KDC Service Stopping Kerberos 5 KDC: [ OK ] Stopping Directory Service Shutting down dirsrv: EXAMPLE-COM... [ OK ] PKI-IPA... [ OK ]", "mkdir -p /home/idm/backup/", "chown root:root /home/idm/backup/ chmod 700 /home/idm/backup/", "mv /var/lib/ipa/backup/* /home/idm/backup/", "rm -rf /var/lib/ipa/backup/", "ln -s /home/idm/backup/ /var/lib/ipa/backup/", "mkdir -p /home/idm/backup/", "chown root:root /home/idm/backup/ chmod 700 /home/idm/backup/", "mv /var/lib/ipa/backup/* /home/idm/backup/", "mount -o bind /home/idm/backup/ /var/lib/ipa/backup/", "/home/idm/backup/ /var/lib/ipa/backup/ none bind 0 0", "TMPDIR= /path/to/backup ipa-backup", "cat >keygen <<EOF > %echo Generating a standard key > Key-Type: RSA > Key-Length:2048 > Name-Real: IPA Backup > Name-Comment: IPA Backup > Name-Email: [email protected] > Expire-Date: 0 > %pubring /root/backup.pub > %secring /root/backup.sec > %commit > %echo done > EOF", "gpg --batch --gen-key keygen gpg --no-default-keyring --secret-keyring /root/backup.sec --keyring /root/backup.pub --list-secret-keys", "ipa-backup --gpg --gpg-keyring=/root/backup", "/usr/share/ipa/html /root/.pki /etc/pki-ca /etc/pki/pki-tomcat /etc/sysconfig/pki /etc/httpd/alias /var/lib/pki /var/lib/pki-ca /var/lib/ipa/sysrestore /var/lib/ipa-client/sysrestore /var/lib/ipa/dnssec /var/lib/sss/pubconf/krb5.include.d/ /var/lib/authconfig/last /var/lib/certmonger /var/lib/ipa /var/run/dirsrv /var/lock/dirsrv", "/etc/named.conf /etc/named.keytab /etc/resolv.conf /etc/sysconfig/pki-ca /etc/sysconfig/pki-tomcat /etc/sysconfig/dirsrv /etc/sysconfig/ntpd /etc/sysconfig/krb5kdc /etc/sysconfig/pki/ca/pki-ca /etc/sysconfig/ipa-dnskeysyncd /etc/sysconfig/ipa-ods-exporter /etc/sysconfig/named /etc/sysconfig/ods /etc/sysconfig/authconfig /etc/ipa/nssdb/pwdfile.txt /etc/pki/ca-trust/source/ipa.p11-kit /etc/pki/ca-trust/source/anchors/ipa-ca.crt /etc/nsswitch.conf /etc/krb5.keytab /etc/sssd/sssd.conf /etc/openldap/ldap.conf /etc/security/limits.conf /etc/httpd/conf/password.conf /etc/httpd/conf/ipa.keytab /etc/httpd/conf.d/ipa-pki-proxy.conf /etc/httpd/conf.d/ipa-rewrite.conf /etc/httpd/conf.d/nss.conf /etc/httpd/conf.d/ipa.conf /etc/ssh/sshd_config /etc/ssh/ssh_config /etc/krb5.conf /etc/ipa/ca.crt /etc/ipa/default.conf /etc/dirsrv/ds.keytab /etc/ntp.conf /etc/samba/smb.conf /etc/samba/samba.keytab /root/ca-agent.p12 /root/cacert.p12 /var/kerberos/krb5kdc/kdc.conf 
/etc/systemd/system/multi-user.target.wants/ipa.service /etc/systemd/system/multi-user.target.wants/sssd.service /etc/systemd/system/multi-user.target.wants/certmonger.service /etc/systemd/system/pki-tomcatd.target.wants/[email protected] /var/run/ipa/services.list /etc/opendnssec/conf.xml /etc/opendnssec/kasp.xml /etc/ipa/dnssec/softhsm2.conf /etc/ipa/dnssec/softhsm_pin_so /etc/ipa/dnssec/ipa-ods-exporter.keytab /etc/ipa/dnssec/ipa-dnskeysyncd.keytab /etc/idm/nssdb/cert8.db /etc/idm/nssdb/key3.db /etc/idm/nssdb/secmod.db /etc/ipa/nssdb/cert8.db /etc/ipa/nssdb/key3.db /etc/ipa/nssdb/secmod.db", "/var/log/pki-ca /var/log/pki/ /var/log/dirsrv/slapd-PKI-IPA /var/log/httpd /var/log/ipaserver-install.log /var/log/kadmind.log /var/log/pki-ca-install.log /var/log/messages /var/log/ipaclient-install.log /var/log/secure /var/log/ipaserver-uninstall.log /var/log/pki-ca-uninstall.log /var/log/ipaclient-uninstall.log /var/named/data/named.run", "ipa-restore /path/to/backup", "ipa-restore --instance=IPA-REALM /path/to/backup", "systemctl stop sssd", "find /var/lib/sss/ ! -type d | xargs rm -f", "systemctl start sssd", "ipa-restore --gpg-keyring=/root/backup /path/to/backup", "[jsmith@server ~]USD ipa selfservice-add \"Users can manage their own name details\" --permissions=write --attrs=givenname --attrs=displayname --attrs=title --attrs=initials ----------------------------------------------------------- Added selfservice \"Users can manage their own name details\" ----------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials", "[jsmith@server ~]USD ipa selfservice-mod \"Users can manage their own name details\" --attrs=givenname --attrs=displayname --attrs=title --attrs=initials --attrs=surname -------------------------------------------------------------- Modified selfservice \"Users can manage their own name details\" -------------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials", "ipa delegation-add \"basic manager attrs\" --attrs=manager --attrs=title --attrs=employeetype --attrs=employeenumber --group=engineering_managers --membergroup=engineering -------------------------------------- Added delegation \"basic manager attrs\" -------------------------------------- Delegation name: basic manager attrs Permissions: write Attributes: manager, title, employeetype, employeenumber Member user group: engineering User group: engineering_managers", "[jsmith@server ~]USD ipa delegation-mod \"basic manager attrs\" --attrs=manager --attrs=title --attrs=employeetype --attrs=employeenumber --attrs=displayname ----------------------------------------- Modified delegation \"basic manager attrs\" ----------------------------------------- Delegation name: basic manager attrs Permissions: write Attributes: manager, title, employeetype, employeenumber, displayname Member user group: engineering User group: engineering_managers", "kinit admin ipa role-add --desc=\"User Administrator\" useradmin ------------------------ Added role \"useradmin\" ------------------------ Role name: useradmin Description: User Administrator", "ipa role-add-privilege --privileges=\"User Administrators\" useradmin Role name: useradmin Description: User Administrator Privileges: user administrators ---------------------------- Number of privileges added 1 ----------------------------", 
"ipa role-add-member --groups=useradmins useradmin Role name: useradmin Description: User Administrator Member groups: useradmins Privileges: user administrators ------------------------- Number of members added 1 -------------------------", "cn=automount,dc=example,dc=com", "(!(objectclass=posixgroup))", "uid=*,cn=users,cn=accounts,dc=com", "ipa permission-add \"dns admin permission\"", "--bindtype=all", "--permissions=read --permissions=write --permissions={read,write}", "--attrs=description --attrs=automountKey --attrs={description,automountKey}", "ipa permission-add \"manage service\" --permissions=all --type=service --attrs=krbprincipalkey --attrs=krbprincipalname --attrs=managedby", "ipa permission-add \"manage automount locations\" --subtree=\"ldap://ldap.example.com:389/cn=automount,dc=example,dc=com\" --permissions=write --attrs=automountmapname --attrs=automountkey --attrs=automountInformation", "ipa permission-add \"manage Windows groups\" --filter=\"(!(objectclass=posixgroup))\" --permissions=write --attrs=description", "ipa permission-add ManageHost --permissions=\"write\" --subtree=cn=computers,cn=accounts,dc=testrelm,dc=com --attr=nshostlocation --memberof=admins", "ipa permission-mod 'System: Modify Users' --type=group ipa: ERROR: invalid 'ipapermlocation': not modifiable on managed permissions", "ipa permission-mod 'System: Modify Users' --excludedattrs=gecos ------------------------------------------ Modified permission \"System: Modify Users\"", "[jsmith@server ~]USD ipa privilege-add \"managing filesystems\" --desc=\"for filesystems\"", "[jsmith@server ~]USD ipa privilege-add-permission \"managing filesystems\" --permissions=\"managing automount\" --permissions=\"managing ftp services\"", "authconfig --enablemkhomedir --update", "ipa automountlocation-add userdirs Location: userdirs", "ipa automountkey-add userdirs auto.direct --key=/share --info=\"-ro,soft, server.example.com:/home/share\" Key: /share Mount information: -ro,soft, server.example.com:/home/share", "ipa user-add First name: first_name Last name: last_name User login [default_login]: custom_login", "ipa stageuser-add stage_user_login --first= first_name --last= last_name --email= email_address", "'(?!^[0-9]+USD)^[a-zA-Z0-9_.][a-zA-Z0-9_.-]*[a-zA-Z0-9_.USD-]?USD'", "ipa config-mod --maxusername=64 Maximum username length: 64", "ipa user-find --------------- 23 users matched --------------- User login: admin Last name: Administrator Home directory: /home/admin Login shell: /bin/bash UID: 1453200000 GID: 1453200000 Account disabled: False Password: True Kerberos keys available: True User login: user", "ipa user-find --title= user_title --------------- 2 users matched --------------- User login: user Job Title: Title User login: user2 Job Title: Title", "ipa user-find user --------------- 3 users matched --------------- User login: user User login: user2 User login: user3", "ipa user-show user_login User login: user_login First name: first_name Last name: last_name", "ipa stageuser-activate user_login ------------------------- Stage user user_login activated -------------------------", "ipa user-del user_login -------------------- Deleted user \"user3\" --------------------", "ipa user-del --preserve user_login -------------------- Deleted user \"user_login\" --------------------", "ipa stageuser-del user_login -------------------------- Deleted stage user \"user_login\" --------------------------", "ipa user-del --continue user1 user2 user3", "ipa user-undel user_login ------------------------------ Undeleted 
user account \"user_login\" ------------------------------", "ipa user-stage user_login ------------------------------ Staged user account \"user_login\" ------------------------------", "ipa user-mod user_login --title= new_title", "ipa user-mod user --addattr=mobile= new_mobile_number -------------------- Modified user \"user\" -------------------- User login: user Mobile Telephone Number: mobile_number, new_mobile_number", "ipa user-mod user --addattr=mobile= mobile_number_1 --addattr=mobile= mobile_number_2", "ipa user-mod user --email= [email protected] ipa user-mod user --addattr=mail= [email protected]", "ipa user-find User login: user First name: User Last name: User Home directory: /home/user Login shell: /bin/sh UID: 1453200009 GID: 1453200009 Account disabled: True Password: False Kerberos keys available: False", "ipa user-disable user_login ---------------------------- Disabled user account \"user_login\" ----------------------------", "ipa user-enable user_login ---------------------------- Enabled user account \"user_login\" ----------------------------", "kinit admin", "ipa role-add --desc \"Responsible for provisioning stage users\" \"System Provisioning\" -------------------------------- Added role \"System Provisioning\" -------------------------------- Role name: System Provisioning Description: Responsible for provisioning stage users", "ipa role-add-privilege \"System Provisioning\" --privileges=\"Stage User Provisioning\" Role name: System Provisioning Description: Responsible for provisioning stage users Privileges: Stage User Provisioning ---------------------------- Number of privileges added 1 ----------------------------", "ipa user-add stage_user_admin --password First name: first_name Last name: last_name Password: Enter password again to verify:", "ipa role-add-member \"System Provisioning\" --users=stage_user_admin Role name: System Provisioning Description: Responsible for provisioning stage users Member users: stage_user_admin Privileges: Stage User Provisioning ------------------------- Number of members added 1 -------------------------", "ipa role-show \"System Provisioning\" -------------- 1 role matched -------------- Role name: System provisioning Description: Responsible for provisioning stage users Member users: stage_user_admin Privileges: Stage User Provisioning ---------------------------- Number of entries returned 1 ----------------------------", "kinit stage_user_admin Password for [email protected]: Password expired. You must change it now. 
Enter new password: Enter it again:", "klist Ticket cache: KEYRING:persistent:0:krb_ccache_xIlCQDW Default principal: [email protected] Valid starting Expires Service principal 02/25/2016 11:42:20 02/26/2016 11:42:20 krbtgt/EXAMPLE.COM", "ipa stageuser-add stage_user First name: first_name Last name: last_name ipa: ERROR: stage_user: stage user not found", "ipa stageuser-show stage_user ipa: ERROR: stage_user: stage user not found", "kinit admin Password for [email protected]: ipa stageuser-show stage_user User login: stage_user First name: Stage Last name: User", "ipa user-add provisionator --first=provisioning --last=account --password", "ipa role-add --desc \"Responsible for provisioning stage users\" \"System Provisioning\"", "ipa role-add-privilege \"System Provisioning\" --privileges=\"Stage User Provisioning\"", "ipa role-add-member --users=provisionator \"System Provisioning\"", "ipa user-add activator --first=activation --last=account --password", "ipa role-add-member --users=activator \"User Administrator\"", "ipa group-add service-accounts", "ipa pwpolicy-add service-accounts --maxlife=10000 --minlife=0 --history=0 --minclasses=4 --minlength=20 --priority=1 --maxfail=0 --failinterval=1 --lockouttime=0", "ipa group-add-member service-accounts --users={provisionator,activator}", "kpasswd provisionator kpasswd activator", "ipa-getkeytab -s example.com -p \"activator\" -k /etc/krb5.ipa-activation.keytab", "#!/bin/bash kinit -k -i activator ipa stageuser-find --all --raw | grep \" uid:\" | cut -d \":\" -f 2 | while read uid; do ipa stageuser-activate USD{uid}; done", "chmod 755 /usr/local/sbin/ipa-activate-all chown root:root /usr/local/sbin/ipa-activate-all", "[Unit] Description=Scan IdM every minute for any stage users that must be activated [Service] Environment=KRB5_CLIENT_KTNAME=/etc/krb5.ipa-activation.keytab Environment=KRB5CCNAME=FILE:/tmp/krb5cc_ipa-activate-all ExecStart=/usr/local/sbin/ipa-activate-all", "[Unit] Description=Scan IdM every minute for any stage users that must be activated [Timer] OnBootSec=15min OnUnitActiveSec=1min [Install] WantedBy=multi-user.target", "systemctl enable ipa-activate-all.timer", "dn: uid= user_login ,cn=staged users,cn=accounts,cn=provisioning,dc= example ,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: user_login sn: surname givenName: first_name cn: full_name", "dn: uid= user_login ,cn=staged users,cn=accounts,cn=provisioning,dc= example ,dc=com changetype: add objectClass: top objectClass: person objectClass: inetorgperson objectClass: organizationalperson objectClass: posixaccount uid: user_login uidNumber: UID_number gidNumber: GID_number sn: surname givenName: first_name cn: full_name homeDirectory: /home/ user_login", "ldapsearch -LLL -x -D \"uid= user_allowed_to_read ,cn=users,cn=accounts,dc=example, dc=com\" -w \" password \" -H ldap:// server.example.com -b \"cn=users, cn=accounts, dc=example, dc=com\" uid= user_login", "dn: distinguished_name changetype: modify replace: attribute_to_modify attribute_to_modify: new_value", "dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: TRUE", "dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: FALSE", "dn: distinguished_name changetype: modrdn newrdn: uid= user_login deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=example", "dn: cn= group_distinguished_name ,cn=groups,cn=accounts,dc=example,dc=com changetype: add objectClass: top objectClass: ipaobject objectClass: ipausergroup 
objectClass: groupofnames objectClass: nestedgroup objectClass: posixgroup cn: group_name gidNumber: GID_number", "ldapsearch -YGSSAPI -H ldap:// server.example.com -b \"cn=groups,cn=accounts,dc=example,dc=com\" \"cn= group_name \"", "dn: group_distinguished_name changetype: delete", "dn: group_distinguished_name changetype: modify add: member member: uid= user_login ,cn=users,cn=accounts,dc=example,dc=com", "dn: distinguished_name changetype: modify delete: member member: uid= user_login ,cn=users,cn=accounts,dc=example,dc=com", "ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: admin@EXAMPLE SASL SSF: 56 SASL data security layer installed. dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=example changetype: add objectClass: top objectClass: inetorgperson cn: Stage sn: User adding new entry \"uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=example\"", "ipa stageuser-show stageuser --all --raw dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=example uid: stageuser sn: User cn: Stage has_password: FALSE has_keytab: FALSE nsaccountlock: TRUE objectClass: top objectClass: inetorgperson objectClass: organizationalPerson objectClass: person", "ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: admin@EXAMPLE SASL SSF: 56 SASL data security layer installed. dn: uid=user1,cn=users,cn=accounts,dc=example changetype: modrdn newrdn: uid=user1 deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=example modifying rdn of entry \"uid=user1,cn=users,cn=accounts,dc=example\"", "ipa user-find --preserved=true --------------- 1 user matched --------------- User login: user1 First name: first_name Last name: last_name ---------------------------- Number of entries returned 1 ----------------------------", "ipa host-add client1.example.com", "ipa host-add --force --ip-address=192.168.166.31 client1.example.com", "ipa host-add --force client1.example.com", "ipa host-del --updatedns client1.example.com", "[jsmith@ipaserver ~]USD kinit admin [jsmith@ipaserver ~]USD ipa host-disable server.example.com", "ipa-getkeytab -s server.example.com -p host/client.example.com -k /etc/krb5.keytab -D \"cn=directory manager\" -w password", "host.example.com,1.2.3.4 ssh-rsa AAA...ZZZ==", "\"ssh-rsa ABCD1234...== ipaclient.example.com\"", "ssh-rsa AAA...ZZZ== host.example.com,1.2.3.4", "server.example.com,1.2.3.4 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEApvjBvSFSkTU0WQW4eOweeo0DZZ08F9Ud21xlLy6FOhzwpXFGIyxvXZ52+siHBHbbqGL5+14N7UvElruyslIHx9LYUR/pPKSMXCGyboLy5aTNl5OQ5EHwrhVnFDIKXkvp45945R7SKYCUtRumm0Iw6wq0XD4o+ILeVbV3wmcB1bXs36ZvC/M6riefn9PcJmh6vNCvIsbMY6S+FhkWUTTiOXJjUDYRLlwM273FfWhzHK+SSQXeBp/zIn1gFvJhSZMRi9HZpDoqxLbBB9QIdIw6U4MIjNmKsSI/ASpkFm2GuQ7ZK9KuMItY2AoCuIRmRAdF8iYNHBTXNfFurGogXwRDjQ==", "[jsmith@server ~]USD ssh-keygen -t rsa -C \"server.example.com,1.2.3.4\" Generating public/private rsa key pair. Enter file in which to save the key (/home/jsmith/.ssh/id_rsa): /home/jsmith/.ssh/host_keys Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/jsmith/.ssh/host_keys. Your public key has been saved in /home/jsmith/.ssh/host_keys.pub. The key fingerprint is: SHA256:GAUIDVVEgly7rs1lTWP6oguHz8BKvyZkpqCqVSsmi7c server.example.com The key's randomart image is: +--[ RSA 2048]----+ | .. | | .+| | o .* | | o . .. *| | S + . o+| | E . .. .| | . = . o | | o . 
..o| | .....| +-----------------+", "[jsmith@server ~]USD cat /home/jsmith/.ssh/host_keys.pub ssh-rsa AAAAB3NzaC1yc2E...tJG1PK2Mq++wQ== server.example.com,1.2.3.4", "[jsmith@server ~]USD ipa host-mod --sshpubkey=\"ssh-rsa RjlzYQo==\" --updatedns host1.example.com", "--sshpubkey=\"RjlzYQo==\" --sshpubkey=\"ZEt0TAo==\"", "[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa host-mod --sshpubkey= --updatedns host1.example.com", "cn=server,ou=ethers,dc=example,dc=com", "[jsmith@server ~]USD kinit admin [jsmith@server ~]USD ipa host-mod --macaddress=12:34:56:78:9A:BC server.example.com", "ethers: ldap", "getent ethers server.example.com", "ipa group-show group_A Member users: user_1 Member groups: group_B Indirect Member users: user_2", "ipa group-find --private ---------------- 2 groups matched ---------------- Group name: user1 Description: User private group for user1 GID: 830400006 Group name: user2 Description: User private group for user2 GID: 830400004 ---------------------------- Number of entries returned 2 ----------------------------", "kinit admin", "ipa group-add group_name ----------------------- Added group \"group_name\" ------------------------", "kinit admin", "ipa group-del group_name -------------------------- Deleted group \"group_name\" --------------------------", "sss_cache -n host_group_name", "ipa group-add-member group_name --users= user1 --users= user2 --groups= group1", "ipa group-add-member group_name --external=' AD_DOMAIN \\ ad_user ' ipa group-add-member group_name --external=' ad_user @ AD_DOMAIN ' ipa group-add-member group_name --external=' ad_user @ AD_DOMAIN.EXAMPLE.COM '", "ipa group-remove-member group_name --users= user1 --users= user2 --groups= group1", "kinit admin", "ipa-managed-entries --list", "ipa-managed-entries -e \"UPG Definition\" disable Disabling Plugin", "systemctl restart dirsrv.target", "ipa config-mod --usersearch=\"uid,givenname,sn,telephonenumber,ou,title\" ipa config-mod --groupsearch=\"cn,description\"", "ipa automember-add Automember Rule: user_group Grouping Type: group -------------------------------- Added automember rule \"user_group\" -------------------------------- Automember Rule: user_group", "ipa automember-add-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ---------------------------------- Added condition(s) to \"user_group\" ---------------------------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of conditions added 1 ----------------------------", "ipa automember-add Automember Rule: all_hosts Grouping Type: hostgroup ------------------------------------- Added automember rule \"all_hosts\" ------------------------------------- Automember Rule: all_hosts", "ipa automember-add-condition Automember Rule: all_hosts Attribute Key: fqdn Grouping Type: hostgroup [Inclusive Regex]: .* [Exclusive Regex]: --------------------------------- Added condition(s) to \"all_hosts\" --------------------------------- Automember Rule: all_hosts Inclusive Regex: fqdn=.* ---------------------------- Number of conditions added 1 ----------------------------", "ipa automember-add Automember Rule: ad_users Grouping Type: group ------------------------------------- Added automember rule \"ad_users\" ------------------------------------- Automember Rule: ad_users", "ipa automember-add-condition Automember Rule: ad_users Attribute Key: objectclass Grouping Type: group [Inclusive Regex]: ntUser [Exclusive Regex]: 
------------------------------------- Added condition(s) to \"ad_users\" ------------------------------------- Automember Rule: ad_users Inclusive Regex: objectclass=ntUser ---------------------------- Number of conditions added 1 ----------------------------", "ipa automember-rebuild --type=group -------------------------------------------------------- Automember rebuild task finished. Processed (9) entries. --------------------------------------------------------", "ipa automember-rebuild --users= user1 --users= user2 -------------------------------------------------------- Automember rebuild task finished. Processed (2) entries. --------------------------------------------------------", "ipa automember-default-group-set Default (fallback) Group: default_user_group Grouping Type: group --------------------------------------------------- Set default (fallback) group for automember \"default_user_group\" --------------------------------------------------- Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com", "ipa automember-default-group-show Grouping Type: group Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com", "ipa-replica-manage dnarange-show masterA.example.com: 1001-1500 masterB.example.com: 1501-2000 masterC.example.com: No range set ipa-replica-manage dnarange-show masterA.example.com masterA.example.com: 1001-1500", "ipa-replica-manage dnanextrange-show masterA.example.com: 1001-1500 masterB.example.com: No on-deck range set masterC.example.com: No on-deck range set ipa-replica-manage dnanextrange-show masterA.example.com masterA.example.com: 1001-1500", "ipa-replica-manage dnarange-set masterA.example.com 1250-1499", "ipa-replica-manage dnanextrange-set masterB.example.com 1001-5000", "sss_cache -u user", "[bjensen@server ~]USD ipa config-mod --userobjectclasses= {top,person,organizationalperson,inetorgperson,inetuser,posixaccount,krbprincipalaux,krbticketpolicyaux,ipaobject,ipasshuser, employeeinfo }", "set -o braceexpand", "[bjensen@server ~]USD ipa config-mod --groupobjectclasses= {top,groupofnames,nestedgroup,ipausergroup,ipaobject,ipasshuser, employeegroup }", "[bjensen@server ~]USD kinit admin [bjensen@server ~]USD ipa config-show --all dn: cn=ipaConfig,cn=etc,dc=example,dc=com Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers Default e-mail domain: example.com Search time limit: 2 Search size limit: 100 User search fields: uid,givenname,sn,telephonenumber,ou,title Group search fields: cn,description Enable migration mode: FALSE Certificate Subject base: O=EXAMPLE.COM Default group objectclasses: top, groupofnames, nestedgroup, ipausergroup, ipaobject Default user objectclasses: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser Password Expiration Notification (days): 4 Password plugin features: AllowNThash SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023 Default PAC types: MS-PAC, nfs:NONE cn: ipaConfig objectclass: nsContainer, top, ipaGuiConfig, ipaConfigObject", "# ipa-getkeytab -s ipaserver.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts", "ipa service-add serviceName/hostname", "ipa service-add HTTP/server.example.com ------------------------------------------------------- Added service \"HTTP/[email 
protected]\" ------------------------------------------------------- Principal: HTTP/[email protected] Managed by: ipaserver.example.com", "ipa-getkeytab -s server.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts", "ipa-getkeytab -s kdc.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts", "kinit admin", "ipa dnsrecord-add idm.example.com cluster --a-rec={192.0.2.40,192.0.2.41} Record name: cluster A record: 192.0.2.40, 192.0.2.41", "ipa host-add cluster.idm.example.com ------------------------------------ Added host \"cluster.idm.example.com\" ------------------------------------ Host name: cluster.idm.example.com Principal name: host/[email protected] Password: False Keytab: False Managed by: cluster.idm.example.com", "ipa service-add HTTP/cluster.idm.example.com ------------------------------------------------------------ Added service \"HTTP/[email protected]\" ------------------------------------------------------------ Principal: HTTP/[email protected] Managed by: cluster.idm.example.com", "ipa service-allow-retrieve-keytab HTTP/cluster.idm.example.com --hosts={node01.idm.example.com,node02.idm.example.com} Principal: HTTP/[email protected] Managed by: cluster.idm.example.com Hosts allowed to retrieve keytab: node01.idm.example.com, node02.idm.example.com ------------------------- Number of members added 2 -------------------------", "ipa service-allow-create-keytab HTTP/cluster.idm.example.com --hosts=node01.idm.example.com Principal: HTTP/[email protected] Managed by: cluster.idm.example.com Hosts allowed to retrieve keytab: node01.idm.example.com, node02.idm.example.com Hosts allowed to create keytab: node01.idm.example.com ------------------------- Number of members added 1 -------------------------", "kinit -kt /etc/krb5.keytab", "ipa-getkeytab -s ipaserver.idm.example.com -p HTTP/cluster.idm.example.com -k /tmp/client.keytab", "ipa-getkeytab -r -s ipaserver.idm.example.com -p HTTP/cluster.idm.example.com -k /tmp/client.keytab", "[jsmith@ipaserver ~]USD kinit admin [jsmith@ipaserver ~]USD ipa service-disable HTTP/server.example.com", "ipa-getkeytab -s ipaserver.example.com -p HTTP/server.example.com -k /etc/httpd/conf/krb5.keytab -e aes256-cts", "ipa service-add-host principal --hosts= hostname", "ipa service-add HTTP/web.example.com ipa service-add-host HTTP/web.example.com --hosts=client1.example.com", "kinit -kt /etc/krb5.keytab host/client1.example.com ipa-getkeytab -s server.example.com -k /tmp/test.keytab -p HTTP/web.example.com Keytab successfully retrieved and stored in: /tmp/test.keytab", "kinit -kt /etc/krb5.keytab host/client1.example.com openssl req -newkey rsa:2048 -subj '/CN=web.example.com/O=EXAMPLE.COM' -keyout /etc/pki/tls/web.key -out /tmp/web.csr -nodes Generating a 2048 bit RSA private key .............................................................+++ ............................................................................................+++ Writing new private key to '/etc/pki/tls/private/web.key'", "ipa cert-request --principal=HTTP/web.example.com web.csr Certificate: MIICETCCAXqgA...[snip] Subject: CN=web.example.com,O=EXAMPLE.COM Issuer: CN=EXAMPLE.COM Certificate Authority Not Before: Tue Feb 08 18:51:51 2011 UTC Not After: Mon Feb 08 18:51:51 2016 UTC Serial number: 1005", "kinit admin", "ipa host-add-managedby client2.example.com --hosts=client1.example.com", "kinit -kt /etc/krb5.keytab host/client1.example.com", "ipa-getkeytab -s server.example.com -k /tmp/client2.keytab -p 
host/client2.example.com Keytab successfully retrieved and stored in: /tmp/client2.keytab", "kinit -kt /etc/krb5.keytab host/[email protected]", "kinit -kt /etc/httpd/conf/krb5.keytab HTTP/[email protected]", "ipa help idviews", "ipa idview-add --help", "kinit admin", "ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1", "ipa idoverrideuser-add example_for_host1 user --sshpubkey=\" ssh-rsa AAAAB3NzaC1yrRqFE...gWRL71/miPIZ [email protected] \" ----------------------------- Added User ID override \"user\" ----------------------------- Anchor to override: user SSH public key: ssh-rsa AAAB3NzaC1yrRqFE...gWRL71/miPIZ [email protected]", "ipa idoverrideuser-add-cert example_for_host1 user --certificate=\"MIIEATCC...\"", "ipa idview-apply example_for_host1 --hosts=host1.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------", "ipa service-mod service/[email protected] --ok-as-delegate= 1", "ipa service-mod test/[email protected] --requires-pre-auth= 0", "kvno test/[email protected] klist -f Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 02/19/2014 09:59:02 02/20/2014 08:21:33 test/ipa/[email protected] Flags: FAT O", "kadmin.local kadmin.local: getprinc test/ipa.example.com Principal: test/[email protected] Expiration date: [never] Attributes: REQUIRES_PRE_AUTH OK_AS_DELEGATE OK_TO_AUTH_AS_DELEGATE Policy: [none]", "ipa user-add-principal user useralias -------------------------------- Added new aliases to user \"user\" -------------------------------- User login: user Principal alias: [email protected], [email protected]", "kinit -C useralias Password for [email protected]:", "ipa user-remove-principal user useralias -------------------------------- Removed aliases from user \"user\" -------------------------------- User login: user Principal alias: [email protected]", "ipa user-show user User login: user Principal name: [email protected] ipa user-remove-principal user user ipa: ERROR: invalid 'krbprincipalname': at least one value equal to the canonical principal name must be present", "ipa: ERROR: The realm for the principal does not match the realm for this IPA server", "ipa user-add-principal user user\\\\@example.com -------------------------------- Added new aliases to user \"user\" -------------------------------- User login: user Principal alias: [email protected], user\\@[email protected]", "kinit -E [email protected] Password for user\\@[email protected]:", "ipa user-remove-principal user user\\\\@example.com -------------------------------- Removed aliases from user \"user\" -------------------------------- User login: user Principal alias: [email protected]", "( host.example.com ,, nisdomain.example.com ) (-, user , nisdomain.example.com )", "dn: ipaUniqueID=d4453480-cc53-11dd-ad8b-0800200c9a66,cn=ng,cn=alt, cn: netgroup1 memberHost: fqdn=host1.example.com,cn=computers,cn=accounts, memberHost: cn=VirtGuests,cn=hostgroups,cn=accounts, memberUser: cn=demo,cn=users,cn=accounts, memberUser: cn=Engineering,cn=groups,cn=accounts, nisDomainName: nisdomain.example.com", "ipa netgroup-show netgroup1 Netgroup name: netgroup1 Description: my netgroup NIS domain name: nisdomain.example.com Member Host: 
VirtGuests Member Host: host1.example.com Member User: demo Member User: Engineering", "ipa-nis-manage enable ipa-compat-manage enable", "ldapmodify -x -D 'cn=directory manager' -W dn: cn=NIS Server,cn=plugins,cn=config changetype: modify add: nsslapd-pluginarg0 nsslapd-pluginarg0: 514", "systemctl enable rpcbind.service systemctl start rpcbind.service", "systemctl restart dirsrv.target", "ipa netgroup-add --desc=\"Netgroup description\" --nisdomain=\"example.com\" example-netgroup", "ipa netgroup-add-member --users= user_name --groups= group_name --hosts= host_name --hostgroups= host_group_name --netgroups= netgroup_name group_nameame", "ipa netgroup-add-member --users={user1;user2,user3} --groups={group1,group2} example-group", "ldapadd -h server.example.com -x -D \"cn=Directory Manager\" -W dn: nis-domain=example.com+nis-map=auto.example,cn=NIS Server,cn=plugins,cn=config objectClass: extensibleObject nis-domain: example.com nis-map: auto.example nis-filter: (objectclass=automount) nis-key-format: %{automountKey} nis-value-format: %{automountInformation} nis-base: automountmapname=auto.example,cn=default,cn=automount,dc=example,dc=com", "ypcat -k -d example.com -h server.example.com auto.example", "yum install yp-tools -y", "#!/bin/sh USD1 is the NIS domain, USD2 is the NIS master server ypcat -d USD1 -h USD2 passwd > /dev/shm/nis-map.passwd 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.passwd) ; do IFS=' ' username=USD(echo USDline | cut -f1 -d:) # Not collecting encrypted password because we need cleartext password # to create kerberos key uid=USD(echo USDline | cut -f3 -d:) gid=USD(echo USDline | cut -f4 -d:) gecos=USD(echo USDline | cut -f5 -d:) homedir=USD(echo USDline | cut -f6 -d:) shell=USD(echo USDline | cut -f7 -d:) # Now create this entry echo passw0rd1 | ipa user-add USDusername --first=NIS --last=USER --password --gidnumber=USDgid --uid=USDuid --gecos=\"USDgecos\" --homedir=USDhomedir --shell=USDshell ipa user-show USDusername done", "kinit admin", "sh /root/nis-users.sh nisdomain nis-master.example.com", "#!/bin/sh USD1 is the NIS domain, USD2 is the NIS master server ypcat -d USD1 -h USD2 group > /dev/shm/nis-map.group 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.group); do IFS=' ' groupname=USD(echo USDline | cut -f1 -d:) # Not collecting encrypted password because we need cleartext password # to create kerberos key gid=USD(echo USDline | cut -f3 -d:) members=USD(echo USDline | cut -f4 -d:) # Now create this entry ipa group-add USDgroupname --desc=NIS_GROUP_USDgroupname --gid=USDgid if [ -n \"USDmembers\" ]; then ipa group-add-member USDgroupname --users={USDmembers} fi ipa group-show USDgroupname done", "kinit admin", "sh /root/nis-groups.sh nisdomain nis-master.example.com", "#!/bin/sh USD1 is the NIS domain, USD2 is the NIS master server ypcat -d USD1 -h USD2 hosts | egrep -v \"localhost|127.0.0.1\" > /dev/shm/nis-map.hosts 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.hosts); do IFS=' ' ipaddress=USD(echo USDline | awk '{print USD1}') hostname=USD(echo USDline | awk '{print USD2}') master=USD(ipa env xmlrpc_uri | tr -d '[:space:]' | cut -f3 -d: | cut -f3 -d/) domain=USD(ipa env domain | tr -d '[:space:]' | cut -f2 -d:) if [ USD(echo USDhostname | grep \"\\.\" |wc -l) -eq 0 ] ; then hostname=USD(echo USDhostname.USDdomain) fi zone=USD(echo USDhostname | cut -f2- -d.) 
if [ USD(ipa dnszone-show USDzone 2>/dev/null | wc -l) -eq 0 ] ; then ipa dnszone-add --name-server=USDmaster --admin-email=root.USDmaster fi ptrzone=USD(echo USDipaddress | awk -F. '{print USD3 \".\" USD2 \".\" USD1 \".in-addr.arpa.\"}') if [ USD(ipa dnszone-show USDptrzone 2>/dev/null | wc -l) -eq 0 ] ; then ipa dnszone-add USDptrzone --name-server=USDmaster --admin-email=root.USDmaster fi # Now create this entry ipa host-add USDhostname --ip-address=USDipaddress ipa host-show USDhostname done", "kinit admin", "sh /root/nis-hosts.sh nisdomain nis-master.example.com", "#!/bin/sh USD1 is the NIS domain, USD2 is the NIS master server ypcat -k -d USD1 -h USD2 netgroup > /dev/shm/nis-map.netgroup 2>&1 IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.netgroup); do IFS=' ' netgroupname=USD(echo USDline | awk '{print USD1}') triples=USD(echo USDline | sed \"s/^USDnetgroupname //\") echo \"ipa netgroup-add USDnetgroupname --desc=NIS_NG_USDnetgroupname\" if [ USD(echo USDline | grep \"(,\" | wc -l) -gt 0 ]; then echo \"ipa netgroup-mod USDnetgroupname --hostcat=all\" fi if [ USD(echo USDline | grep \",,\" | wc -l) -gt 0 ]; then echo \"ipa netgroup-mod USDnetgroupname --usercat=all\" fi for triple in USDtriples; do triple=USD(echo USDtriple | sed -e 's/-//g' -e 's/(//' -e 's/)//') if [ USD(echo USDtriple | grep \",.*,\" | wc -l) -gt 0 ]; then hostname=USD(echo USDtriple | cut -f1 -d,) username=USD(echo USDtriple | cut -f2 -d,) domain=USD(echo USDtriple | cut -f3 -d,) hosts=\"\"; users=\"\"; doms=\"\"; [ -n \"USDhostname\" ] && hosts=\"--hosts=USDhostname\" [ -n \"USDusername\" ] && users=\"--users=USDusername\" [ -n \"USDdomain\" ] && doms=\"--nisdomain=USDdomain\" echo \"ipa netgroup-add-member USDnetgroup USDhosts USDusers USDdoms\" else netgroup=USDtriple echo \"ipa netgroup-add USDnetgroup --desc=NIS_NG_USDnetgroup\" fi done done", "kinit admin", "sh /root/nis-netgroups.sh nisdomain nis-master.example.com", "#!/bin/sh USD1 is for the automount entry in ipa ipa automountlocation-add USD1 USD2 is the NIS domain, USD3 is the NIS master server, USD4 is the map name ypcat -k -d USD2 -h USD3 USD4 > /dev/shm/nis-map.USD4 2>&1 ipa automountmap-add USD1 USD4 basedn=USD(ipa env basedn | tr -d '[:space:]' | cut -f2 -d:) cat > /tmp/amap.ldif <<EOF dn: nis-domain=USD2+nis-map=USD4,cn=NIS Server,cn=plugins,cn=config objectClass: extensibleObject nis-domain: USD2 nis-map: USD4 nis-base: automountmapname=USD4,cn=USD1,cn=automount,USDbasedn nis-filter: (objectclass=*) nis-key-format: %{automountKey} nis-value-format: %{automountInformation} EOF ldapadd -x -h USD3 -D \"cn=Directory Manager\" -W -f /tmp/amap.ldif IFS=USD'\\n' for line in USD(cat /dev/shm/nis-map.USD4); do IFS=\" \" key=USD(echo \"USDline\" | awk '{print USD1}') info=USD(echo \"USDline\" | sed -e \"s#^USDkey[ \\t]*##\") ipa automountkey-add nis USD4 --key=\"USDkey\" --info=\"USDinfo\" done", "kinit admin", "sh /root/nis-automounts.sh location nisdomain nis-master.example.com map_name", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h ipaserver.example.com -x dn: cn=config changetype: modify replace: passwordStorageScheme passwordStorageScheme: crypt", "ipa user-mod user --password Password: Enter Password again to verify: -------------------- Modified user \"user\" --------------------", "ldapmodify -x -D \"cn=Directory Manager\" -W -h ldap.example.com -p 389 dn: cn=ipa_pwd_extop,cn=plugins,cn=config changetype: modify add: passSyncManagersDNs passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com", "ipa user-unlock user 
----------------------- Unlocked account \"user\" -----------------------", "ipa user-status user ----------------------- Account disabled: False ----------------------- Server: example.com Failed logins: 8 Last successful authentication: 20160229080309Z Last failed authentication: 20160229080317Z Time now: 2016-02-29T08:04:46Z ---------------------------- Number of entries returned 1 ----------------------------", "ipa config-show | grep \"Password plugin features\" Password plugin features: AllowNThash , KDC:Disable Last Success", "ipa config-mod --ipaconfigstring='AllowNThash'", "ipa config-mod --ipaconfigstring='AllowNThash' --ipaconfigstring='KDC:Disable Lockout'", "ipactl restart", "First Factor: Second Factor (optional):", "First factor: static_password Second factor: one-time_password", "First factor: static_password Second factor: one-time_password", "[Service] Environment=OPENSSL_FIPS_NON_APPROVED_MD5_ALLOW=1", "systemctl daemon-reload", "systemctl start radiusd", "ipa config-mod --user-auth-type=otp", "ipa config-mod --user-auth-type=otp --user-auth-type=disabled", "ipa user-mod user --user-auth-type=otp", "ipa config-mod --user-auth-type=otp --user-auth-type=password", "ipa otptoken-add ------------------ Added OTP token \"\" ------------------ Unique ID: 7060091b-4e40-47fd-8354-cb32fecd548a Type: TOTP", "ipa otptoken-add-yubikey --slot=2", "ipa otptoken-add --owner=user ------------------ Added OTP token \"\" ------------------ Unique ID: 5303baa8-08f9-464e-a74d-3b38de1c041d Type: TOTP", "ipa otptoken-add-yubikey --owner=user", "[otp] DEFAULT = { timeout = 120 }", "systemctl restart krb5kdc", "ipa user-mod --user-auth-type=password --user-auth-type=otp user_name", "ipa otptoken-add --desc=\" New Token \"", "ipa otptoken-find -------------------- 2 OTP tokens matched -------------------- Unique ID: 4ce8ec29-0bf7-4100-ab6d-5d26697f0d8f Type: TOTP Description: New Token Owner: user Unique ID: e1e9e1ef-172c-4fa9-b637-6b017ce79315 Type: TOTP Description: Old Token Owner: user ---------------------------- Number of entries returned 2 ----------------------------", "# ipa otptoken-del e1e9e1ef-172c-4fa9-b637-6b017ce79315 -------------------------------------------------------- Deleted OTP token \" e1e9e1ef-172c-4fa9-b637-6b017ce79315 \" --------------------------------------------------------", "ipa user-mod --user-auth-type=otp user_name", "ipa host-mod server.example.com --auth-ind=otp --------------------------------------------------------- Modified host \"server.example.com\" --------------------------------------------------------- Host name: server.example.com Authentication Indicators: otp", "pkinit_indicator = indicator", "systemctl restart krb5kdc", "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMM4xPu54Kf2dx7C4Ta2F7vnIzuL1i6P21TTKniSkjFuA+r qW06588e7v14Im4VejwnNk352gp49A62qSVOzp8IKA9xdtyRmHYCTUvmkcyspZvFRI713zfRKQVFyJOqHmW/m dCmak7QBxYou2ELSPhH3pe8MYTQIulKDSu5Zbsrqedg1VGkSJxf7mDnCSPNWWzAY9AFB9Lmd2m2xZmNgVAQEQ nZXNMaIlroLD/51rmMSkJGHGb1O68kEq9Z client.example.com", "ssh-keygen -t rsa -C [email protected] Generating public/private rsa key pair. Enter file in which to save the key (/home/user/.ssh/id_rsa): Created directory '/home/user/.ssh'. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. 
The key fingerprint is: SHA256:GAUIDVVEgly7rs1lTWP6oguHz8BKvyZkpqCqVSsmi7c [email protected] The key's randomart image is: +--[ RSA 2048]----+ | | | + . | | + = . | | = + | | . E S.. | | . . .o | | . . . oo. | | . o . +.+o | | o .o..o+o | +-----------------+", "ipa user-mod user --sshpubkey=\" ssh-rsa AAAAB3Nza...SNc5dv== client.example.com \"", "--sshpubkey=\"AAAAB3Nza...SNc5dv==\" --sshpubkey=\"RjlzYQo...ZEt0TAo=\"", "ipa user-mod user --sshpubkey=\"USD(cat ~/.ssh/id_rsa.pub)\" --sshpubkey=\"USD(cat ~/.ssh/id_rsa2.pub)\"", "ipa user-mod user --sshpubkey=", "ProxyCommand /usr/bin/sss_ssh_knownhostsproxy -p %p %h GlobalKnownHostsFile /var/lib/sss/pubconf/known_hosts", "AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys AuthorizedKeysCommandUser user", "certutil -L -d /etc/pki/nssdb/ -h all Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI my_certificate CT,C,C", "certutil -L -d /etc/pki/nssdb/ -n ' my_certificate ' -r | base64 -w 0 > user.crt", "ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'", "openssl x509 -noout -issuer -in idm_user.crt -nameopt RFC2253 issuer=CN=Certificate Authority,O=REALM.EXAMPLE.COM", "# openssl x509 -noout -issuer -in ad_user.crt -nameopt RFC2253 issuer=CN=AD-WIN2012R2-CA,DC=AD,DC=EXAMPLE,DC=COM", "ipa certmaprule-add simple_rule --matchrule '<ISSUER>CN= AD-WIN2012R2-CA,DC=AD,DC=EXAMPLE,DC=COM ' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})'", "(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})", "<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add rule_name --matchrule '<ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG' --maprule '(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})' ------------------------------------------------------- Added Certificate Identity Mapping Rule \"rule_name\" ------------------------------------------------------- Rule name: rule_name Mapping rule: (ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500}) Matching rule: <ISSUER>CN=Smart Card CA,O=EXAMPLE.ORG Enabled: TRUE", "systemctl restart sssd", "[root@server ~]# cat idm_user_certificate.pem -----BEGIN CERTIFICATE----- MIIFFTCCA/2gAwIBAgIBEjANBgkqhkiG9w0BAQsFADA6MRgwFgYDVQQKDA9JRE0u RVhBTVBMRS5DT00xHjAcBgNVBAMMFUNlcnRpZmljYXRlIEF1dGhvcml0eTAeFw0x ODA5MDIxODE1MzlaFw0yMDA5MDIxODE1MzlaMCwxGDAWBgNVBAoMD0lETS5FWEFN [...output truncated...]", "sss_cache -u user_name", "ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------", "kinit admin", "CERT=`cat idm_user_cert.pem | tail -n +2 | head -n -1 | tr -d '\\r\\n'\\` ipa user-add-certmapdata idm_user --certificate USDCERT", "ipa user-add-certmapdata idm_user --subject \" O=EXAMPLE.ORG,CN=test \" --issuer \" CN=Smart Card CA,O=EXAMPLE.ORG \" -------------------------------------------- Added certificate mappings to user \" idm_user \" -------------------------------------------- User login: idm_user Certificate mapping data: X509:<I>O=EXAMPLE.ORG,CN=Smart Card CA<S>CN=test,O=EXAMPLE.ORG", "sss_cache -u user_name", "ipa certmap-match idm_user_cert.pem -------------- 1 user matched -------------- Domain: IDM.EXAMPLE.COM User logins: idm_user ---------------------------- Number of entries returned 1 ----------------------------", 
"(userCertificate;binary={cert!bin})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", "ad.example.com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add ad_configured_for_mapping_rule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500})' --domain=ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"ad_configured_for_mapping_rule\" ------------------------------------------------------- Rule name: ad_configured_for_mapping_rule Mapping rule: (altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "ldapsearch -o ldif-wrap=no -LLL -h adserver.ad.example.com -p 389 -D cn=Administrator,cn=users,dc=ad,dc=example,dc=com -W -b cn=users,dc=ad,dc=example,dc=com \"(cn=ad_user)\" altSecurityIdentities Enter LDAP Password: dn: CN=ad_user,CN=Users,DC=ad,DC=example,DC=com altSecurityIdentities: X509:<I>DC=com,DC=example,DC=ad,CN=AD-ROOT-CA<S>DC=com,DC=example,DC=ad,CN=Users,CN=ad_user", "(userCertificate;binary={cert!bin})", "<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com", "systemctl restart sssd", "kinit admin", "ipa certmaprule-add simpleADrule --matchrule '<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' --maprule '(userCertificate;binary={cert!bin})' --domain ad.example.com ------------------------------------------------------- Added Certificate Identity Mapping Rule \"simpleADrule\" ------------------------------------------------------- Rule name: simpleADrule Mapping rule: (userCertificate;binary={cert!bin}) Matching rule: <ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com Domain name: ad.example.com Enabled: TRUE", "systemctl restart sssd", "sss_cache -u [email protected]", "ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------", "kinit admin", "CERT=`cat ad_user_cert.pem | tail -n +2 | head -n -1 | tr -d '\\r\\n'\\` ipa idoverrideuser-add-cert [email protected] --certificate USDCERT", "sss_cache -u [email protected]", "ipa certmap-match ad_user_cert.pem -------------- 1 user matched -------------- Domain: AD.EXAMPLE.COM User logins: [email protected] ---------------------------- Number of entries returned 1 ----------------------------", "ipa certmaprule-add ad_cert_for_ipa_and_ad_users \\ --maprule='(|(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' \\ --matchrule='<ISSUER>CN=AD-ROOT-CA,DC=ad,DC=example,DC=com' \\ 
--domain=ad.example.com", "ipa certmaprule-add ipa_cert_for_ad_users --maprule='(|(userCertificate;binary={cert!bin})(ipacertmapdata=X509:<I>{issuer_dn!nss_x500}<S>{subject_dn!nss_x500})(altSecurityIdentities=X509:<I>{issuer_dn!ad_x500}<S>{subject_dn!ad_x500}))' --matchrule='<ISSUER>CN=Certificate Authority,O=REALM.EXAMPLE.COM' --domain=idm.example.com --domain=ad.example.com", "ipa-advise config-client-for-smart-card-auth > client_smart_card_script.sh", "chmod +x client_smart_card_script.sh", "./client_smart_card_script.sh CA_cert.pem", "ipa-cacert-manage -n \"SmartCard CA\" -t CT,C,C install ca.pem ipa-certupdate", "systemctl restart httpd", "client login: idm_user PIN for PIV Card Holder pin (PIV_II) for user [email protected]:", "ssh -I /usr/lib64/opensc-pkcs11.so -l idm_user server.idm.example.com Enter PIN for 'PIV_II (PIV Card Holder pin)': Last login: Thu Apr 6 12:49:32 2017 from 10.36.116.42", "ssh -I /usr/lib64/opensc-pkcs11.so -l [email protected] server.idm.example.com Enter PIN for 'PIV_II (PIV Card Holder pin)': Last login: Thu Apr 6 12:49:32 2017 from 10.36.116.42", "id uid=1928200001(idm_user) gid=1928200001(idm_user) groups=1928200001(idm_user) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023", "id uid=1171201116([email protected]) gid=1171201116([email protected]) groups=1171201116([email protected]),1171200513(domain [email protected]) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023", "kinit admin Password for [email protected]:", "ipa certmapconfig-mod --promptusername=TRUE Prompt for the username: TRUE", "ipa-advise config-client-for-smart-card-auth > client_smart_card_script.sh", "chmod +x client_smart_card_script.sh", "./client_smart_card_script.sh CA_cert.pem", "ipa-cacert-manage -n \"SmartCard CA\" -t CT,C,C install ca.pem ipa-certupdate", "systemctl restart httpd", "kinit -X X509_user_identity='PKCS11:opensc-pkcs11.so' idm_user", "[libdefaults] [... file truncated ...] pkinit_eku_checking = kpServerAuth pkinit_kdc_hostname = adserver.ad.domain.com", "Windows Registry Editor Version 5.00 [HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Kdc] \"UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors\"=dword:00000001 [HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Control\\LSA\\Kerberos\\Parameters] \"UseCachedCRLOnlyAndIgnoreRevocationUnknownErrors\"=dword:00000001", "kinit -X X509_user_identity='PKCS11:opensc-pkcs11.so' [email protected]", "ipa-advise config-server-for-smart-card-auth > server_smart_card_script.sh", "chmod +x server_smart_card_script.sh", "ipa-cacert-manage -n \"SmartCard CA\" -t CT,C,C install ca.pem ipa-certupdate", "systemctl restart httpd systemctl restart krb5kdc", "NSSRenegotiation NSSRequireSafeNegotiation on", "#! /usr/bin/env python def application(environ, start_response): status = '200 OK' response_body = \"\"\" <!DOCTYPE html> <html> <head> <title>Login</title> </head> <body> <form action='/app' method='get'> Username: <input type='text' name='username'> <input type='submit' value='Login with certificate'> </form> </body> </html> \"\"\" response_headers = [ ('Content-Type', 'text/html'), ('Content-Length', str(len(response_body))) ] start_response(status, response_headers) return [response_body]", "#! /usr/bin/env python def application(environ, start_response): try: user = environ['REMOTE_USER'] except KeyError: status = '400 Bad Request' response_body = 'Login failed.\\n' else: status = '200 OK' response_body = 'Login succeeded. 
Username: {}\\n'.format(user) response_headers = [ ('Content-Type', 'text/plain'), ('Content-Length', str(len(response_body))) ] start_response(status, response_headers) return [response_body]", "<IfModule !lookup_identity_module> LoadModule lookup_identity_module modules/mod_lookup_identity.so </IfModule> WSGIScriptAlias /login /var/www/app/login.py WSGIScriptAlias /app /var/www/app/protected.py <Location \"/app\"> NSSVerifyClient require NSSUserName SSL_CLIENT_CERT LookupUserByCertificate On LookupUserByCertificateParamName \"username\" </Location>", "ipa host-mod host_name --auth-ind= indicator", "ipa service-mod service / host_name --auth-ind= indicator", "ipa host-mod host.idm.example.com --auth-ind=pkinit", "mkdir ~/certdb/", "certutil -N -d ~/certdb/", "certutil -R -d ~/certdb/ -a -g 4096 -s \" CN=server.example.com,O=EXAMPLE.COM \" -8 server.example.com > certificate_request.csr", "otherName= 1.3.6.1.4.1.311.20.2.3 ;UTF8: test2/[email protected] DNS.1 = server.example.com", "openssl req -new -newkey rsa: 2048 -keyout test2service.key -sha256 -nodes -out certificate_request.csr -config openssl.conf", "ipa cert-request certificate_request.csr --principal= host/server.example.com", "ipa cert-revoke 1032 --revocation-reason=1", "ipa cert-remove-hold 1032", "ipa user-add-cert user --certificate= MIQTPrajQAwg", "ipa user-add-cert user --certificate=\"USD(openssl x509 -outform der -in user_cert.pem | base64 -w 0)\"", "ipa cert-find ----------------------- 10 certificates matched ----------------------- Serial number (hex): 0x1 Serial number: 1 Status: VALID Subject: CN=Certificate Authority,O=EXAMPLE.COM ----------------------------- Number of entries returned 10 -----------------------------", "ipa cert-find --issuedon-from=2020-01-07 --issuedon-to=2020-02-07", "ipa cert-show 132 Serial number: 132 Certificate: MIIDtzCCAp+gAwIBAgIBATANBgkqhkiG9w0BAQsFADBBMR8wHQYDVQQKExZMQUIu LxIQjrEFtJmoBGb/TWRlwGEWy1ayr4iTEf1ayZ+RGNylLalEAtk9RLjEjg== Subject: CN=Certificate Authority,O=EXAMPLE.COM Issuer: CN=Certificate Authority,O=EXAMPLE.COM Not Before: Sun Jun 08 05:51:11 2014 UTC Not After: Thu Jun 08 05:51:11 2034 UTC Serial number (hex): 0x132 Serial number: 132", "ipa user-show user User login: user Certificate: MIICfzCCAWcCAQA", "ipa cert-show certificate_serial_number --out= path_to_file", "ipa certprofile Manage Certificate Profiles EXAMPLES: Import a profile that will not store issued certificates: ipa certprofile-import ShortLivedUserCert --file UserCert.profile --desc \"User Certificates\" --store=false Delete a certificate profile: ipa certprofile-del ShortLivedUserCert", "ipa certprofile-mod --help Usage: ipa [global-options] certprofile-mod ID [options] Modify Certificate Profile configuration. Options: -h, --help show this help message and exit --desc=STR Brief description of this profile --store=BOOL Whether to store certs issued using this profile", "ipa certprofile-import Profile ID: smime Profile description: S/MIME certificates Store issued certificates [True]: TRUE Filename of a raw profile. 
The XML format is not supported.: smime.cfg ------------------------ Imported profile \"smime\" ------------------------ Profile ID: smime Profile description: S/MIME certificates Store issued certificates: TRUE", "ipa certprofile-import --file= smime.cfg", "ipa certprofile-show caIPAserviceCert --out= file_name", "ipa certprofile-find ------------------ 3 profiles matched ------------------ Profile ID: caIPAserviceCert Profile description: Standard profile for network services Store issued certificates: TRUE Profile ID: IECUserRoles", "ipa certprofile-show profile_ID Profile ID: profile_ID Profile description: S/MIME certificates Store issued certificates: TRUE", "ipa certprofile-mod profile_ID --desc=\"New description\" --store=False ------------------------------------ Modified Certificate Profile \"profile_ID\" ------------------------------------ Profile ID: profile_ID Profile description: New description Store issued certificates: FALSE", "ipa certprofile-mod profile_ID --file= new_configuration.cfg", "ipa certprofile-del profile_ID ----------------------- Deleted profile \"profile_ID\" -----------------------", "ipa caacl Manage CA ACL rules. EXAMPLES: Create a CA ACL \"test\" that grants all users access to the \"UserCert\" profile: ipa caacl-add test --usercat=all ipa caacl-add-profile test --certprofiles UserCert Display the properties of a named CA ACL: ipa caacl-show test Create a CA ACL to let user \"alice\" use the \"DNP3\" profile on \"DNP3-CA\": ipa caacl-add alice_dnp3 ipa caacl-add-ca alice_dnp3 --cas DNP3-CA ipa caacl-add-profile alice_dnp3 --certprofiles DNP3 ipa caacl-add-user alice_dnp3 --user=alice", "ipa caacl-mod --help Usage: ipa [global-options] caacl-mod NAME [options] Modify a CA ACL. Options: -h, --help show this help message and exit --desc=STR Description --cacat=['all'] CA category the ACL applies to --profilecat=['all'] Profile category the ACL applies to", "ipa caacl-add ACL name: smime_acl ------------------------ Added CA ACL \"smime_acl\" ------------------------ ACL name: smime_acl Enabled: TRUE", "ipa caacl-add ca_acl_name --usercat=all", "ipa cert-request CSR-FILE --principal user --profile-id profile_id ipa: ERROR Insufficient access: Principal 'user' is not permitted to use CA '.' with profile 'profile_id' for certificate issuance.", "ipa caacl-find ----------------- 2 CA ACLs matched ----------------- ACL name: hosts_services_caIPAserviceCert Enabled: TRUE", "ipa caacl-show ca_acl_name ACL name: ca_acl_name Enabled: TRUE Host category: all", "ipa caacl-mod ca_acl_name --desc=\"New description\" --profilecat=all --------------------------- Modified CA ACL \"ca_acl_name\" --------------------------- ACL name: smime_acl Description: New description Enabled: TRUE Profile category: all", "ipa caacl-disable ca_acl_name --------------------------- Disabled CA ACL \"ca_acl_name\" ---------------------------", "ipa caacl-enable ca_acl_name --------------------------- Enabled CA ACL \"ca_acl_name\" ---------------------------", "ipa caacl-del ca_acl_name", "ipa caacl-add-user ca_acl_name --groups= group_name", "ipa caacl-add-user ca_acl_name --users= user_name ipa: ERROR: users cannot be added when user category='all'", "ipa cert-request CSR-FILE --principal user --profile-id profile_id ipa: ERROR Insufficient access: Principal 'user' is not permitted to use CA '.' 
with profile 'profile_id' for certificate issuance.", "ipa caacl-add-user --help", "ipa certprofile-import certificate_profile --file= certificate_profile.cfg --store=True", "ipa caacl-add users_certificate_profile --usercat=all", "ipa caacl-add-profile users_certificate_profile --certprofiles= certificate_profile", "openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout private.key -out cert.csr -subj '/CN= user '", "ipa cert-request cert.csr --principal= user --profile-id= certificate_profile", "ipa user-show user User login: user Certificate: MIICfzCCAWcCAQA", "ipa certprofile-import certificate_profile --file= certificate_profile.txt --store=True", "ipa vault-show my_vault Vault name: my_vault Type: standard Owner users: user Vault user: user", "ipa-kra-install", "ipa help vault", "ipa vault-add --help", "ipa vault-show user_vault --user user", "[admin@server ~]USD ipa vault-show user_vault ipa: ERROR: user_vault: vault not found", "kinit user", "ipa vault-add my_vault --type standard ---------------------- Added vault \"my_vault\" ---------------------- Vault name: my_vault Type: standard Owner users: user Vault user: user", "ipa vault-archive my_vault --in secret.txt ----------------------------------- Archived data into vault \"my_vault\" -----------------------------------", "kinit user", "ipa vault-retrieve my_vault --out secret_exported.txt -------------------------------------- Retrieved data from vault \"my_vault\" --------------------------------------", "kinit admin", "ipa vault-add http_password --type standard --------------------------- Added vault \"http_password\" --------------------------- Vault name: http_password Type: standard Owner users: admin Vault user: admin", "ipa vault-archive http_password --in password.txt ---------------------------------------- Archived data into vault \"http_password\" ----------------------------------------", "kinit admin", "openssl genrsa -out service-private.pem 2048 Generating RSA private key, 2048 bit long modulus .+++ ...........................................+++ e is 65537 (0x10001)", "openssl rsa -in service-private.pem -out service-public.pem -pubout writing RSA key", "ipa vault-add password_vault --service HTTP/server.example.com --type asymmetric --public-key-file service-public.pem ---------------------------- Added vault \"password_vault\" ---------------------------- Vault name: password_vault Type: asymmetric Public key: LS0tLS1C...S0tLS0tCg== Owner users: admin Vault service: HTTP/[email protected]", "ipa vault-retrieve http_password --out password.txt ----------------------------------------- Retrieved data from vault \"http_password\" -----------------------------------------", "ipa vault-archive password_vault --service HTTP/server.example.com --in password.txt ----------------------------------- Archived data into vault \"password_vault\" -----------------------------------", "kinit admin", "kinit HTTP/server.example.com -k -t /etc/httpd/conf/ipa.keytab", "ipa vault-retrieve password_vault --service HTTP/server.example.com --private-key-file service-private.pem --out password.txt ------------------------------------ Retrieved data from vault \"password_vault\" ------------------------------------", "ipa vault-archive http_password --in new_password.txt ---------------------------------------- Archived data into vault \"http_password\" ----------------------------------------", "ipa vault-retrieve http_password --out password.txt ----------------------------------------- Retrieved data from vault 
\"http_password\" -----------------------------------------", "ipa vault-archive password_vault --service HTTP/server.example.com --in password.txt ----------------------------------- Archived data into vault \"password_vault\" -----------------------------------", "kinit admin", "ipa vault-add shared_vault --shared --type standard --------------------------- Added vault \"shared_vault\" --------------------------- Vault name: shared_vault Type: standard Owner users: admin Shared vault: True", "ipa vault-archive shared_vault --shared --in secret.txt ----------------------------------- Archived data into vault \"shared_vault\" -----------------------------------", "ipa vault-add-member shared_vault --shared --users={user1,user2} Vault name: shared_vault Type: standard Owner users: admin Shared vault: True Member users: user1, user2 ------------------------- Number of members added 2 -------------------------", "kinit user1", "ipa vault-retrieve shared_vault --shared --out secret_exported.txt ----------------------------------------- Retrieved data from vault \"shared_vault\" -----------------------------------------", "ipa vault-mod --change-password Vault name: example_symmetric_vault Password: old_password New password: new_password Enter New password again to verify: new_password ----------------------- Modified vault \" example_symmetric_vault \" ----------------------- Vault name: example_symmetric_vault Type: symmetric Salt: dT+M+4ik/ltgnpstmCG1sw== Owner users: admin Vault user: admin", "ipa vault-mod example_asymmetric_vault --private-key-file= old_private_key.pem --public-key-file= new_public_key.pem ------------------------------- Modified vault \" example_assymmetric_vault \" ------------------------------- Vault name: example_assymmetric_vault Typ: asymmetric Public key: Owner users: admin Vault user: admin", "ipa ca-add vpn-ca --subject=\" CN=VPN,O=IDM.EXAMPLE.COM \" ------------------- Created CA \"vpn-ca\" ------------------- Name: vpn-ca Authority ID: ba83f324-5e50-4114-b109-acca05d6f1dc Subject DN: CN=VPN,O=IDM.EXAMPLE.COM Issuer DN: CN=Certificate Authority,O=IDM.EXAMPLE.COM", "certutil -d /etc/pki/pki-tomcat/alias/ -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI caSigningCert cert-pki-ca CTu,Cu,Cu Server-Cert cert-pki-ca u,u,u auditSigningCert cert-pki-ca u,u,Pu caSigningCert cert-pki-ca ba83f324-5e50-4114-b109-acca05d6f1dc u,u,u ocspSigningCert cert-pki-ca u,u,u subsystemCert cert-pki-ca u,u,u", "ipa ca-del vpn-ca ------------------- Deleted CA \"vpn-ca\" -------------------", "ipa-certupdate trying https://idmserver.idm.example.com/ipa/json Forwarding 'schema' to json server 'https://idmserver.idm.example.com/ipa/json' trying https://idmserver.idm.example.com/ipa/json Forwarding 'ca_is_enabled' to json server 'https://idmserver.idm.example.com/ipa/json' Forwarding 'ca_find/1' to json server 'https://idmserver.idm.example.com/ipa/json' Systemwide CA database updated. Systemwide CA database updated. 
The ipa-certupdate command was successful", "Certificate named \"NSS Certificate DB\" in token \"auditSigningCert cert-pki-ca\" in database \"/var/lib/pki-ca/alias\" renew success", "certmonger: Certificate named \"NSS Certificate DB\" in token \"auditSigningCert cert-pki-ca\" in database \"/var/lib/pki-ca/alias\" will not be valid after 20160204065136.", "certutil -L -d /etc/pki/pki-tomcat/alias", "ipa-cacert-manage renew --external-cert-file= /tmp/servercert20110601.pem --external-cert-file= /tmp/cacert.pem", "certutil -L -d /etc/pki/pki-tomcat/alias/", "ipa-cert-fix The following certificates will be renewed: Dogtag sslserver certificate: Subject: CN=ca1.example.com,O=EXAMPLE.COM 201905222205 Serial: 13 Expires: 2019-05-12 05:55:47 Enter \"yes\" to proceed:", "Enter \"yes\" to proceed: yes Proceeding. Renewed Dogtag sslserver certificate: Subject: CN=ca1.example.com,O=EXAMPLE.COM 201905222205 Serial: 268369925 Expires: 2021-08-14 02:19:33 Becoming renewal master. The ipa-cert-fix command was successful", "ipactl status Directory Service: RUNNING krb5kdc Service: RUNNING kadmin Service: RUNNING httpd Service: RUNNING ipa-custodia Service: RUNNING pki-tomcatd Service: RUNNING ipa-otpd Service: RUNNING ipa: INFO: The ipactl command was successful", "ipactl restart --force", "getcert list | egrep '^Request|status:|subject:' Request ID '20190522120745': status: MONITORING subject: CN=IPA RA,O=EXAMPLE.COM 201905222205 Request ID '20190522120834': status: MONITORING subject: CN=Certificate Authority,O=EXAMPLE.COM 201905222205", "Request ID '20190522120835': status: CA_UNREACHABLE subject: CN=ca2.example.com,O=EXAMPLE.COM 201905222205", "ipa-cert-fix Dogtag sslserver certificate: Subject: CN=ca2.example.com,O=EXAMPLE.COM Serial: 3 Expires: 2019-05-11 12:07:11 Enter \"yes\" to proceed: yes Proceeding. Renewed Dogtag sslserver certificate: Subject: CN=ca2.example.com,O=EXAMPLE.COM 201905222205 Serial: 15 Expires: 2019-08-14 04:25:05 The ipa-cert-fix command was successful", "ipa-cacert-manage install /etc/group/cert.pem", "NSSEnforceValidCerts off", "systemctl restart httpd.service", "ldapsearch -h server.example.com -p 389 -D \"cn=directory manager\" -w secret -LLL -b cn=config -s base \"(objectclass=*)\" nsslapd-validate-cert dn: cn=config nsslapd-validate-cert: warn", "ldapmodify -D \"cn=directory manager\" -w secret -p 389 -h server.example.com dn: cn=config changetype: modify replace: nsslapd-validate-cert nsslapd-validate-cert: warn", "systemctl restart dirsrv.target", "ipa-server-certinstall --http --dirsrv ssl.key ssl.crt", "systemctl restart httpd.service", "systemctl restart dirsrv@ REALM .service", "certutil -L -d /etc/httpd/alias", "certutil -L -d /etc/dirsrv/slapd- REALM /", "systemctl stop [email protected]", "ca.crl.MasterCRL.autoUpdateInterval=60", "systemctl start [email protected]", "[root@ipa-server ~] ipa-ca-install", "[root@ipa-server ~] ipa-ca-install --external-ca", "ipa-ca-install --external-cert-file=/root/ master .crt --external-cert-file=/root/ca.crt", "openssl req -new -newkey rsa:2048 -days 365 -nodes -keyout new.key -out new.csr -subj '/CN= idmserver.idm.example.com ,O= IDM.EXAMPLE.COM '", "ipa-server-certinstall -w --pin= password new.key new.crt", "ipa-server-certinstall -d --pin= password new.key new.cert", "ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] 
Server name: server2.example.com PKINIT status: disabled [...output truncated...]", "ipa-pkinit-manage status PKINIT is enabled The ipa-pkinit-manage command was successful", "ipa config-show Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers [...output truncated...] IPA masters capable of PKINIT: server1.example.com [...output truncated...]", "kinit admin Password for [email protected]: ipa pkinit-status --server=server.idm.example.com ---------------- 1 server matched ---------------- Server name: server.idm.example.com PKINIT status: enabled ---------------------------- Number of entries returned 1 ----------------------------", "ipa pkinit-status --server server.idm.example.com ----------------- 0 servers matched ----------------- ---------------------------- Number of entries returned 0 ----------------------------", "ipa-cacert-manage install -t CT,C,C ca.pem", "ipa-certupdate", "ipa-cacert-manage list CN=CA,O=Example Organization The ipa-cacert-manage command was successful", "ipa-server-certinstall --kdc kdc.pem kdc.key systemctl restart krb5kdc.service", "ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]", "ipa-pkinit-manage enable Configuring Kerberos KDC (krb5kdc) [1/1]: installing X509 Certificate for PKINIT Done configuring Kerberos KDC (krb5kdc). The ipa-pkinit-manage command was successful", "ipa pwpolicy-mod --minclasses= 1", "ipa pwpolicy-add Group: group_name Priority: priority_level", "ipa pwpolicy-find", "ipa pwpolicy-mod --minlength=10", "ipa pwpolicy-mod group_name --minlength=10", "ipa pwpolicy-show", "ipa pwpolicy-show group_name", "ipa user-mod user_name --password-expiration='2016-02-03 20:37:34Z'", "ldapmodify -D \"cn=Directory Manager\" -w secret -h server.example.com -p 389 -vv dn: uid= user_name ,cn=users,cn=accounts,dc= example ,dc= com changetype: modify replace: krbPasswordExpiration krbPasswordExpiration: 20160203203734Z", "kinit user_name -l 90000", "ipa krbtpolicy-mod --maxlife= 80000 Max life: 80000 Max renew: 604800", "ipa krbtpolicy-mod admin --maxlife= 160000 Max life: 80000 Max renew: 604800", "ldapsearch -x -b \"cn=computers,cn=accounts,dc=example,dc=com\" \"(&(krblastpwdchange>=20160101000000)(krblastpwdchange<=20161231235959))\" dn krbprincipalname", "ldapsearch -x -b \"cn=services,cn=accounts,dc=example,dc=com\" \"(&(krblastpwdchange>=20160101000000)(krblastpwdchange<=20161231235959))\" dn krbprincipalname", "ipa-getkeytab -p host/ [email protected] -s server.example.com -k /etc/krb5.keytab", "ipa-getkeytab -p HTTP/ [email protected] -s server.example.com -k /etc/httpd/conf/ipa.keytab", "klist -kt /etc/krb5.keytab Keytab: WRFILE:/etc/krb5.keytab KVNO Timestamp Principal ---- ----------------- -------------------------------------------------------- 1 06/09/16 05:58:47 host/[email protected](aes256-cts-hmac-sha1-96) 2 06/09/16 11:23:01 host/[email protected](aes256-cts-hmac-sha1-96) 1 03/09/16 13:57:16 krbtgt/[email protected](aes256-cts-hmac-sha1-96) 1 03/09/16 13:57:16 HTTP/[email protected](aes256-cts-hmac-sha1-96) 1 03/09/16 13:57:16 ldap/[email protected](aes256-cts-hmac-sha1-96)", "chown apache /etc/httpd/conf/ipa.keytab", "chmod 0600 /etc/httpd/conf/ipa.keytab", "ipa-rmkeytab --realm EXAMPLE.COM --keytab /etc/krb5.keytab", "ipa-rmkeytab --principal ldap/client.example.com --keytab /etc/krb5.keytab", "ipa sudorule-add-option sudo_rule_name Sudo Option: 
first_option ipa sudorule-add-option sudo_rule_name Sudo Option: second_option", "ipa sudorule-add-option sudo_rule_name Sudo Option: env_keep=\"COLORS DISPLAY EDITOR HOSTNAME HISTSIZE INPUTRC KDEDIR LESSSECURE LS_COLORS MAIL PATH PS1 PS2 XAUTHORITY\"", "sudoers: files sss", "vim /etc/nsswitch.conf sudoers: files sss", "vim /etc/sssd/sssd.conf [sssd] config_file_version = 2 services = nss, pam, sudo domains = IPADOMAIN", "systemctl enable rhel-domainname.service", "nisdomainname example.com", "echo \"NISDOMAIN= example.com \" >> /etc/sysconfig/network", "systemctl restart rhel-domainname.service", "[domain/ IPADOMAIN ] debug_level = 6 .", "ipa sudocmd-add /usr/bin/less --desc=\"For reading log files\" ---------------------------------- Added sudo command \"/usr/bin/less\" ---------------------------------- sudo Command: /usr/bin/less Description: For reading log files", "ipa sudocmdgroup-add files --desc=\"File editing commands\" ----------------------------------- Added sudo command group \"files\" ----------------------------------- sudo Command Group: files Description: File editing commands", "ipa sudocmdgroup-add-member files --sudocmds \"/usr/bin/vim\" sudo Command Group: files Description: File editing commands Member sudo commands: /usr/bin/vim ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add files-commands -------------------------------- Added Sudo Rule \"files-commands\" -------------------------------- Rule name: files-commands Enabled: TRUE", "ipa sudocmd-mod /usr/bin/less --desc=\"For reading log files\" ------------------------------------- Modified Sudo Command \"/usr/bin/less\" ------------------------------------- Sudo Command: /usr/bin/less Description: For reading log files Sudo Command Groups: files", "ipa sudorule-mod sudo_rule_name --desc=\" sudo_rule_description \"", "ipa sudorule-mod sudo_rule_name --order= 3", "ipa sudorule-mod sudo_rule --usercat=all", "ipa sudorule-add-option files-commands Sudo Option: !authenticate --------------------------------------------------------- Added option \"!authenticate\" to Sudo Rule \"files-commands\" ---------------------------------------------------------", "ipa sudorule-remove-option files-commands Sudo Option: authenticate ------------------------------------------------------------- Removed option \"authenticate\" from Sudo Rule \"files-commands\" -------------------------------------------------------------", "ipa sudorule-add-user files-commands --users=user --groups=user_group ------------------------- Number of members added 2 -------------------------", "ipa sudorule-remove-user files-commands [member user]: user [member group]: --------------------------- Number of members removed 1 ---------------------------", "ipa sudorule-add-host files-commands --hosts=example.com --hostgroups=host_group ------------------------- Number of members added 2 -------------------------", "ipa sudorule-remove-host files-commands [member host]: example.com [member host group]: --------------------------- Number of members removed 1 ---------------------------", "ipa sudorule-add-allow-command files-commands --sudocmds=/usr/bin/less --sudocmdgroups=files ------------------------- Number of members added 2 -------------------------", "ipa sudorule-remove-allow-command files-commands [member sudo command]: /usr/bin/less [member sudo command group]: --------------------------- Number of members removed 1 ---------------------------", "ipa sudorule-add-runasuser files-commands --users=user 
RunAs Users: user", "kinit admin Password for [email protected]:", "ipa sudorule-add new_sudo_rule --desc=\"Rule for user_group\" --------------------------------- Added Sudo Rule \"new_sudo_rule\" --------------------------------- Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE", "ipa sudorule-add-user new_sudo_rule --groups=user_group Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE User Groups: user_group ------------------------- Number of members added 1 -------------------------", "ipa sudorule-add-host new_sudo_rule --hostgroups=host_group Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE User Groups: user_group Host Groups: host_group ------------------------- Number of members added 1 -------------------------", "ipa sudorule-mod new_sudo_rule --cmdcat=all ------------------------------ Modified Sudo Rule \"new_sudo_rule\" ------------------------------ Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group", "ipa sudorule-add-option new_sudo_rule Sudo Option: !authenticate ----------------------------------------------------- Added option \"!authenticate\" to Sudo Rule \"new_sudo_rule\" ----------------------------------------------------- Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group Sudo Option: !authenticate", "ipa sudorule-show new_sudo_rule Rule name: new_sudo_rule Description: Rule for user_group Enabled: TRUE Command category: all User Groups: user_group Host Groups: host_group Sudo Option: !authenticate", "ipa sudocmd-show /usr/bin/less Sudo Command: /usr/bin/less Description: For reading log files. 
Sudo Command Groups: files", "ipa sudorule-disable sudo_rule_name ----------------------------------- Disabled Sudo Rule \"sudo_rule_name\" -----------------------------------", "ipa sudorule-enable sudo_rule_name ----------------------------------- Enabled Sudo Rule \"sudo_rule_name\" -----------------------------------", "ipa hbacrule-add Rule name: rule_name --------------------------- Added HBAC rule \"rule_name\" --------------------------- Rule name: rule_name Enabled: TRUE", "ipa hbacrule-add-user Rule name: rule_name [member user]: [member group]: group_name Rule name: rule_name Enabled: TRUE User Groups: group_name ------------------------- Number of members added 1 -------------------------", "ipa hbacrule-add-user rule_name --users= user1 --users= user2 --users= user3 Rule name: rule_name Enabled: TRUE Users: user1, user2, user3 ------------------------- Number of members added 3 -------------------------", "ipa hbacrule-mod rule_name --usercat=all ------------------------------ Modified HBAC rule \"rule_name\" ------------------------------ Rule name: rule_name User category: all Enabled: TRUE", "ipa hbacrule-add-host Rule name: rule_name [member host]: host.example.com [member host group]: Rule name: rule_name Enabled: TRUE Hosts: host.example.com ------------------------- Number of members added 1 -------------------------", "ipa hbacrule-add-host rule_name --hosts= host1 --hosts= host2 --hosts= host3 Rule name: rule_name Enabled: TRUE Hosts: host1, host2, host3 ------------------------- Number of members added 3 -------------------------", "ipa hbacrule-mod rule_name --hostcat=all ------------------------------ Modified HBAC rule \"rule_name\" ------------------------------ Rule name: rule_name Host category: all Enabled: TRUE", "ipa hbacrule-add-service Rule name: rule_name [member HBAC service]: ftp [member HBAC service group]: Rule name: rule_name Enabled: TRUE Services: ftp ------------------------- Number of members added 1 -------------------------", "ipa hbacrule-add-service rule_name --hbacsvcs= su --hbacsvcs= sudo Rule name: rule_name Enabled: TRUE Services: su, sudo ------------------------- Number of members added 2 -------------------------", "ipa hbacrule-mod rule_name --servicecat=all ------------------------------ Modified HBAC rule \"rule_name\" ------------------------------ Rule name: rule_name Service category: all Enabled: TRUE", "ipa hbactest User name: user1 Target host: example.com Service: sudo --------------------- Access granted: False --------------------- Not matched rules: rule1", "ipa hbactest --user= user1 --host= example.com --service= sudo --rules= rule1 --------------------- Access granted: False --------------------- Not matched rules: rule1", "ipa hbactest --user= user1 --host= example.com --service= sudo --rules= rule1 --rules= rule2 -------------------- Access granted: True -------------------- Matched rules: rule2 Not matched rules: rule1", "ipa hbacrule-disable allow_all ------------------------------ Disabled HBAC rule \"allow_all\" ------------------------------", "ipa hbacsvc-add tftp ------------------------- Added HBAC service \"tftp\" ------------------------- Service name: tftp", "ipa hbacsvcgroup-add Service group name: login -------------------------------- Added HBAC service group \"login\" -------------------------------- Service group name: login", "ipa hbacsvcgroup-add-member Service group name: login [member HBAC service]: sshd Service group name: login Member HBAC service: sshd ------------------------- Number of members 
added 1 -------------------------", "semanage user -l Labelling MLS/ MLS/ SELinux User Prefix MCS Level MCS Range SELinux Roles guest_u user s0 s0 guest_r root user s0 s0-s0:c0.c1023 staff_r sysadm_r system_r unconfined_r staff_u user s0 s0-s0:c0.c1023 staff_r sysadm_r system_r unconfined_r sysadm_u user s0 s0-s0:c0.c1023 sysadm_r system_u user s0 s0-s0:c0.c1023 system_r unconfined_r unconfined_u user s0 s0-s0:c0.c1023 system_r unconfined_r user_u user s0 s0 user_r xguest_u user s0 s0 xguest_r", "SELinux_user:MLS[:MCS]", "[user1]@server ~]USD ipa config-show SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023", "[user1@server ~]USD ipa config-mod --ipaselinuxusermaporder=\"unconfined_u:s0-s0:c0.c1023USDguest_u:s0USDxguest_u:s0USDuser_u:s0-s0:c0.c1023USDstaff_u:s0-s0:c0.c1023\"", "[user1@server ~]USD ipa config-mod --ipaselinuxusermapdefault=\"guest_u:s0\"", "[user1@server ~]USD ipa selinuxusermap-add --selinuxuser=\"xguest_u:s0\" selinux1 [user1@server ~]USD ipa selinuxusermap-add-user --users=user1 --users=user2 --users=user3 selinux1 [user1@server ~]USD ipa selinuxusermap-add-host --hosts=server.example.com --hosts=test.example.com selinux1", "[user1@server ~]USD ipa selinuxusermap-add --hbacrule=webserver --selinuxuser=\"xguest_u:s0\" selinux1", "[user1@server ~]USD ipa selinuxusermap-add-user --users=user1 selinux1", "[user1@server ~]USD ipa selinuxusermap-remove-user --users=user2 selinux1", "dn: idnsname=client1,idnsname=example.com.,cn=dns,dc=idm,dc=example,dc=com objectclass: top objectclass: idnsrecord idnsname: client1 Arecord: 192.0.2.1 Arecord: 192.0.2.2 Arecord: 192.0.2.3 AAAArecord: 2001:DB8::ABCD", "ipa dnszone-add newserver.example.com", "ipa dnszone-del server.example.com", "[user@server ~]USD ipa dnszone-mod --allow-transfer=\"192.0.2.1;198.51.100.1;203.0.113.1\" example.com", "dig @ipa-server zone_name AXFR", "host -t MX mail.example.com. mail.example.com mail is handled by 10 server.example.com. host -t MX demo.example.com. demo.example.com. has no MX record. host -t A mail.example.com. mail.example.com has no A record host -t A demo.example.com. random.example.com has address 192.168.1.1", "ipa dnsrecord-add zone_name record_name -- record_type_option=data", "ipa dnsrecord-add example.com www --a-rec 192.0.2.123", "ipa dnsrecord-add example.com \"*\" --a-rec 192.0.2.123", "ipa dnsrecord-mod example.com www --a-rec 192.0.2.123 --a-ip-address 192.0.2.1", "ipa dnsrecord-add example.com www --aaaa-rec 2001:db8::1231:5675", "ipa dnsrecord-add server.example.com _ldap._tcp --srv-rec=\"0 51 389 server1.example.com.\" ipa dnsrecord-add server.example.com _ldap._tcp --srv-rec=\"1 49 389 server2.example.com.\"", "ipa dnsrecord-add reverseNetworkIpAddress hostIpAddress --ptr-rec FQDN", "ipa dnsrecord-add 2.0.192.in-addr.arpa 4 --ptr-rec server4.example.com.", "ipa dnsrecord-add 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa. 1.1.1.0.0.0.0.0.0.0.0.0.0.0.0 --ptr-rec server2.example.com.", "ipa dnsrecord-del example.com www --a-rec 192.0.2.1", "[user@server ~]USD ipa dnszone-disable zone.example.com ----------------------------------------- Disabled DNS zone \"example.com\" -----------------------------------------", "[user@server ~]USD ipa dnszone-mod server.example.com --dynamic-update=TRUE", "ipa-client-install --enable-dns-updates", "vim /etc/sssd/sssd.conf", "[domain/ipa.example.com]", "dyndns_update = true", "dyndns_ttl = 2400", "ipa dnszone-mod idm.example.com. 
--dynamic-update=TRUE", "ipa dnszone-mod idm.example.com. --update-policy='grant IDM.EXAMPLE.COM krb5-self * A; grant IDM.EXAMPLE.COM krb5-self * AAAA; grant IDM.EXAMPLE.COM krb5-self * SSHFP;'", "ipa dnszone-mod idm.example.com. --allow-sync-ptr=True", "ipa dnszone-mod 2.0.192.in-addr.arpa. --dynamic-update=TRUE", "ipa dnsconfig-mod --allow-sync-ptr=true", "dyndb \"ipa\" \"/usr/lib64/bind/ldap.so\" { sync_ptr yes; };", "ipactl restart", "ipa dnszone-mod zone.example.com --update-policy \"grant EXAMPLE.COM krb5-self * A; grant EXAMPLE.COM krb5-self * AAAA; grant EXAMPLE.COM krb5-self * SSHFP;\"", "ipa dnsrecord-add idm.example.com. sub_zone1 --ns-rec= 192.0.2.1", "ipa dnsforwardzone-add sub_zone1 .idm.example.com. --forwarder 192.0.2.1", "[user@server ~]USD ipa dnsconfig-mod --forwarder=192.0.2.254 Global forwarders: 192.0.2.254", "ipa dnsforwardzone-add --help", "[user@server ~]USD ipa dnsforwardzone-add zone.test. --forwarder=172.16.0.1 --forwarder=172.16.0.2 --forward-policy=first Zone name: zone.test. Zone forwarders: 172.16.0.1, 172.16.0.2 Forward policy: first", "[user@server ~]USD ipa dnsforwardzone-mod zone.test. --forwarder=172.16.0.3 Zone name: zone.test. Zone forwarders: 172.16.0.3 Forward policy: first", "[user@server ~]USD ipa dnsforwardzone-mod zone.test. --forward-policy=only Zone name: zone.test. Zone forwarders: 172.16.0.3 Forward policy: only", "[user@server ~]USD ipa dnsforwardzone-show zone.test. Zone name: zone.test. Zone forwarders: 172.16.0.5 Forward policy: first", "[user@server ~]USD ipa dnsforwardzone-find zone.test. Zone name: zone.test. Zone forwarders: 172.16.0.3 Forward policy: first ---------------------------- Number of entries returned 1 ----------------------------", "[user@server ~]USD ipa dnsforwardzone-del zone.test. ---------------------------- Deleted forward DNS zone \"zone.test.\" ----------------------------", "[user@server ~]USD ipa dnsforwardzone-enable zone.test. ---------------------------- Enabled forward DNS zone \"zone.test.\" ----------------------------", "[user@server ~]USD ipa dnsforwardzone-disable zone.test. ---------------------------- Disabled forward DNS zone \"zone.test.\" ----------------------------", "[user@server ~]USD ipa dnsforwardzone-add-permission zone.test. --------------------------------------------------------- Added system permission \"Manage DNS zone zone.test.\" --------------------------------------------------------- Manage DNS zone zone.test.", "[user@server ~]USD ipa dnsforwardzone-remove-permission zone.test. --------------------------------------------------------- Removed system permission \"Manage DNS zone zone.test.\" --------------------------------------------------------- Manage DNS zone zone.test.", "[user@server]USD ipa dnszone-add 2.0.192.in-addr.arpa.", "[user@server ~]USD ipa dnszone-add --name-from-ip= 192.0.2.0/24", "[user@server ~]USD ipa dnszone-mod --allow-query=192.0.2.0/24;2001:DB8::/32;203.0.113.1 example.com", "dig -t SRV +short _kerberos._tcp.idm.example.com 0 100 88 idmserver-01.idm.example.com. 0 100 88 idmserver-02.idm.example.com.", "dig -t SRV +short _kerberos._tcp.idm.example.com _kerberos._tcp.germany._locations.idm.example.com. 0 100 88 idmserver-01.idm.example.com. 
50 100 88 idmserver-02.idm.example.com.", "ipa location-add germany ---------------------------- Added IPA location \"germany\" ---------------------------- Location name: germany", "systemctl restart named-pkcs11", "ipa location-find ----------------------- 2 IPA locations matched ----------------------- Location name: australia Location name: germany ----------------------------- Number of entries returned: 2 -----------------------------", "ipa server-mod idmserver-01.idm.example.com --location=germany ipa: WARNING: Service named-pkcs11.service requires restart on IPA server idmserver-01.idm.example.com to apply configuration changes. -------------------------------------------------- Modified IPA server \"idmserver-01.idm.example.com\" -------------------------------------------------- Servername: idmserver-01.idm.example.com Min domain level: 0 Max domain level: 1 Location: germany Enabled server roles: DNS server, NTP server", "systemctl restart named-pkcs11", "nameserver 10.10.0.1 nameserver 10.10.0.2", "nameserver 10.50.0.1 nameserver 10.50.0.3", "nameserver 10.30.0.1", "nameserver 10.30.0.1", "ipa dns-update-system-records --dry-run IPA DNS records: _kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. [... output truncated ...]", "ipa dns-update-system-records --dry-run --out dns_records_file.nsupdate IPA DNS records: _kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. _kerberos-master._udp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. [... output truncated ...]", "cat dns_records_file.nsupdate zone example.com. server 192.0.2.1 ; IPA DNS records update delete _kerberos-master._tcp.example.com. SRV update add _kerberos-master._tcp.example.com. 86400 IN SRV 0 100 88 ipa.example.com. [... output truncated ...]", "nsupdate -k tsig_key.file dns_records_file.nsupdate", "nsupdate -y algorithm:keyname:secret dns_records_file.nsupdate", "kinit principal_allowed_to_update_records @ REALM nsupdate -g dns_records_file.nsupdate", "search example.com ; the IdM server nameserver 192.0.2.1 ; backup DNS servers nameserver 198.51.100.1 nameserver 198.51.100.2", "dn: automountmapname=auto.master,cn=default,cn=automount,dc=example,dc=com objectClass: automountMap objectClass: top automountMapName: auto.master", "ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/nsswitch.conf Configured /etc/sysconfig/nfs Configured /etc/idmapd.conf Started rpcidmapd Started rpcgssd Restarting sssd, waiting for it to become available. 
Started autofs", "ipa-client-automount --server=ipaserver.example.com --location=boston", "autofs_provider = ipa ipa_automount_location = default", "automount: sss files", "ipa-client-automount --no-sssd", "# Other common LDAP naming # MAP_OBJECT_CLASS=\"automountMap\" ENTRY_OBJECT_CLASS=\"automount\" MAP_ATTRIBUTE=\"automountMapName\" ENTRY_ATTRIBUTE=\"automountKey\" VALUE_ATTRIBUTE=\"automountInformation\"", "LDAP_URI=\"ldap:///dc=example,dc=com\"", "LDAP_URI=\"ldap://ipa.example.com\" SEARCH_BASE=\"cn= location ,cn=automount,dc=example,dc=com\"", "<autofs_ldap_sasl_conf usetls=\"no\" tlsrequired=\"no\" authrequired=\"yes\" authtype=\"GSSAPI\" clientprinc=\"host/[email protected]\" />", "vim /etc/sssd/sssd.conf", "[sssd] services = nss,pam, autofs", "[nss] [pam] [sudo] [autofs] [ssh] [pac]", "[domain/EXAMPLE] ldap_search_base = \"dc=example,dc=com\" ldap_autofs_search_base = \"ou=automount,dc=example,dc=com\"", "systemctl restart sssd.service", "automount: sss files", "systemctl restart autofs.service", "ls /home/ userName", "automount -f -d", "NFS_CLIENT_VERSMAX=3", "ldapclient -v manual -a authenticationMethod=none -a defaultSearchBase=dc=example,dc=com -a defaultServerList=ipa.example.com -a serviceSearchDescriptor=passwd:cn=users,cn=accounts,dc=example,dc=com -a serviceSearchDescriptor=group:cn=groups,cn=compat,dc=example,dc=com -a serviceSearchDescriptor=auto_master:automountMapName=auto.master,cn= location ,cn=automount,dc=example,dc=com?one -a serviceSearchDescriptor=auto_home:automountMapName=auto_home,cn= location ,cn=automount,dc=example,dc=com?one -a objectClassMap=shadow:shadowAccount=posixAccount -a searchTimelimit=15 -a bindTimeLimit=5", "svcadm enable svc:/system/filesystem/autofs", "ldapclient -l auto_master dn: automountkey=/home,automountmapname=auto.master,cn= location ,cn=automount,dc=example,dc=com objectClass: automount objectClass: top automountKey: /home automountInformation: auto.home", "ls /home/ userName", "ldapmodify -x -D \"cn=directory manager\" -w password -h ipaserver.example.com -p 389 dn: cn= REALM_NAME ,cn=kerberos,dc=example,dc=com changetype: modify add: krbSupportedEncSaltTypes krbSupportedEncSaltTypes: des-cbc-crc:normal - add: krbSupportedEncSaltTypes krbSupportedEncSaltTypes: des-cbc-crc:special - add: krbDefaultEncSaltTypes krbDefaultEncSaltTypes: des-cbc-crc:special", "allow_weak_crypto = true", "kinit admin", "ipa service-add nfs/ nfs-server.example.com", "ipa-getkeytab -s ipaserver.example.com -p nfs/ nfs-server.example.com -k /etc/krb5.keytab", "ipa service-show nfs/nfs-server.example.com Principal name: nfs/[email protected] Principal alias: nfs/[email protected] Keytab: True Managed by: nfs-server.example.com", "yum install nfs-utils", "[root@nfs-server ~] ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? [no]: yes Configured /etc/sysconfig/nfs Configured /etc/idmapd.conf Started rpcidmapd Started rpcgssd Restarting sssd, waiting for it to become available. Started autofs", "systemctl enable nfs-idmapd", "/export *( rw ,sec=krb5:krb5i:krb5p) /home *( rw ,sec=krb5:krb5i:krb5p)", "exportfs -rav", "allow_weak_crypto = true", "yum install nfs-utils", "kinit admin", "[root@nfs-client ~] ipa-client-automount Searching for IPA server IPA server: DNS discovery Location: default Continue to configure the system with these values? 
[no]: yes Configured /etc/sysconfig/nfs Configured /etc/idmapd.conf Started rpcidmapd Started rpcgssd Restarting sssd, waiting for it to become available. Started autofs", "systemctl enable rpc-gssd.service systemctl enable rpcbind.service", "nfs-server.example.com:/export /mnt nfs4 sec=krb5p,rw nfs-server.example.com:/home /home nfs4 sec=krb5p,rw", "mkdir -p /mnt/ mkdir -p /home", "mount /mnt/ mount /home", "[domain/EXAMPLE.COM] krb5_renewable_lifetime = 50d krb5_renew_interval = 3600", "systemctl restart sssd", "ipa automountlocation-add location", "ipa automountlocation-add raleigh ---------------------------------- Added automount location \"raleigh\" ---------------------------------- Location: raleigh", "ipa automountlocation-tofiles raleigh /etc/auto.master: /- /etc/auto.direct --------------------------- /etc/auto.direct:", "--------------------------- /etc/auto.direct: /shared/man server.example.com:/shared/man", "ipa automountkey-add raleigh auto.direct --key=/share --info=\"ro,soft,ipaserver.example.com:/home/share\" Key: /share Mount information: ro,soft,ipaserver.example.com:/home/share", "ldapclient -a serviceSearchDescriptor=auto_direct:automountMapName=auto.direct,cn= location ,cn=automount,dc=example,dc=com?one", "--------------------------- /etc/auto.share: man ipa.example.com:/docs/man ---------------------------", "ipa automountmap-add-indirect location mapName --mount= directory [--parentmap= mapName ]", "ipa automountmap-add-indirect raleigh auto.share --mount=/share -------------------------------- Added automount map \"auto.share\" --------------------------------", "ipa automountkey-add raleigh auto.share --key=docs --info=\"ipa.example.com:/export/docs\" ------------------------- Added automount key \"docs\" ------------------------- Key: docs Mount information: ipa.example.com:/export/docs", "ipa automountlocation-tofiles raleigh /etc/auto.master: /- /etc/auto.direct /share /etc/auto.share --------------------------- /etc/auto.direct: --------------------------- /etc/auto.share: man ipa.example.com:/export/docs", "ldapclient -a serviceSearchDescriptor=auto_share:automountMapName=auto.share,cn= location ,cn=automount,dc=example,dc=com?one", "ipa automountlocation-import location map_file [--continuous]", "ipa automountlocation-import raleigh /etc/custom.map", "NSSProtocol TLSv1.2 NSSCipherSuite +ecdhe_ecdsa_aes_128_sha,+ecdhe_ecdsa_aes_256_sha,+ecdhe_rsa_aes_128_sha,+ecdhe_rsa_aes_256_sha,+rsa_aes_128_sha,+rsa_aes_256_sha", "sed -i 's/^NSSProtocol .*/NSSProtocol TLSv1.2/' /etc/httpd/conf.d/nss.conf sed -i 's/^NSSCipherSuite .*/NSSCipherSuite +ecdhe_ecdsa_aes_128_sha,+ecdhe_ecdsa_aes_256_sha,+ecdhe_rsa_aes_128_sha,+ecdhe_rsa_aes_256_sha,+rsa_aes_128_sha,+rsa_aes_256_sha/' /etc/httpd/conf.d/nss.conf", "systemctl restart httpd", "ldapmodify -h localhost -p 389 -D 'cn=directory manager' -W << EOF dn: cn=encryption,cn=config changeType: modify replace: sslVersionMin sslVersionMin: TLS1.2 EOF", "systemctl restart dirsrv@ EXAMPLE-COM .service", "systemctl stop dirsrv@ EXAMPLE-COM .service", "sslVersionMin: TLS1.2", "systemctl start dirsrv@ EXAMPLE-COM .service", "sslVersionRangeStream=\"tls1_2:tls1_2\" sslVersionRangeDatagram=\"tls1_2:tls1_2\"", "sed -i 's/tls1_[01]:tls1_2/tls1_2:tls1_2/g' /etc/pki/pki-tomcat/server.xml", "systemctl restart [email protected]", "ldapmodify -x -D \"cn=Directory Manager\" -W -h server.example.com -p 389 -ZZ Enter LDAP Password: dn: cn=config changetype: modify replace: nsslapd-allow-anonymous-access nsslapd-allow-anonymous-access: rootdse 
modifying entry \"cn=config\"", "systemctl restart dirsrv.target", "ldapsearch -D \"cn=directory manager\" -w secret -b \"cn=config,cn=ldbm database,cn=plugins,cn=config\" nsslapd-dbcachesize nsslapd-db-locks nsslapd-dbcachesize: 10000000 nsslapd-db-locks: 50000", "ldapsearch -D \"cn=directory manager\" -w secret -b \"cn=userRoot,cn=ldbm database,cn=plugins,cn=config\" nsslapd-cachememsize nsslapd-dncachememsize nsslapd-cachememsize: 10485760 nsslapd-dncachememsize: 10485760", "dn: cn=config,cn=ldbm database,cn=plugins,cn=config changetype: modify replace: nsslapd-dbcachesize nsslapd-dbcachesize: db_cache_size_in_bytes", "ldapmodify -D \"cn=directory manager\" -w secret -x dn: cn=config,cn=ldbm database,cn=plugins,cn=config changetype: modify replace: nsslapd-dbcachesize nsslapd-dbcachesize: 200000000", "modifying entry \"cn=config,cn=ldbm database,cn=plugins,cn=config\"", "dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config changetype: modify replace: nsslapd-cachememsize nsslapd-cachememsize: entry_cache_size_in_bytes", "grep '^dn: ' ldif_file | sed 's/^dn: //' | wc -l 92200", "grep '^dn: ' ldif_file | sed 's/^dn: //' | wc -c 9802460", "dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config changetype: modify Replace: nsslapd-dncachememsize Nsslapd-dncachememsize: dn_cache_size", "dn: cn=MemberOf Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: off", "dn: cn=Schema Compatibility,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: off", "dn: cn=Content Synchronization,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: off", "dn: cn=Retro Changelog Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: off", "ipactl stop", "dn: cn=config,cn=ldbm database,cn=plugins,cn=config nsslapd-db-locks: db_lock_number", "systemctl start dirsrv.target", "ldapadd -D \" binddn \" -y password_file -f ldif_file", "dn: cn=MemberOf Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: on", "systemctl restart dirsrv.target", "fixup-memberof.pl -D \"cn=directory manager\" -j password_file -Z server_id -b \" suffix \" -f \"(objectClass=*)\" -P LDAP", "dn: cn=Schema Compatibility,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: on", "dn: cn=Content Synchronization,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: on", "dn: cn=Retro Changelog Plugin,cn=plugins,cn=config changetype: modify replace: nsslapd-pluginEnabled nsslapd-pluginEnabled: on", "dn: cn=config,cn=ldbm database,cn=plugins,cn=config changetype: modify replace: nsslapd-dbcachesize nsslapd-dbcachesize: backup_db_cache_size dn: cn=userRoot,cn=ldbm database,cn=plugins,cn=config changetype: modify Replace: nsslapd-dncachememsize Nsslapd-dncachememsize: backup_dn_cache_size - replace: nsslapd-cachememsize nsslapd-cachememsize: backup_entry_cache_size", "systemctl stop dirsrv.target", "dn: cn=config,cn=ldbm database,cn=plugins,cn=config nsslapd-db-locks: backup_db_lock_number", "ipactl start", "https://ipaserver.example.com/ipa/migration", "[jsmith@server ~]USD kinit Password for [email protected]: Password expired. You must change it now. 
Enter new password: Enter it again:", "ldapmodify -x -D 'cn=directory manager' -w password -h ipaserver.example.com -p 389 dn: cn=config changetype: modify replace: nsslapd-sasl-max-buffer-size nsslapd-sasl-max-buffer-size: 4194304 modifying entry \"cn=config\"", "ulimit -u 4096", "ipa migrate-ds ldap://ldap.example.com:389", "ipa migrate-ds --user-container=ou=employees --group-container=\"ou=employee groups\" ldap://ldap.example.com:389", "ipa migrate-ds --user-objectclass=fullTimeEmployee ldap://ldap.example.com:389", "ipa migrate-ds --group-objectclass=groupOfNames --group-objectclass=groupOfUniqueNames ldap://ldap.example.com:389", "ipa migrate-ds --exclude-groups=\"Golfers Group\" --exclude-users=jsmith --exclude-users=bjensen ldap://ldap.example.com:389", "ipa migrate-ds --user-objectclass=fullTimeEmployee --exclude-users=jsmith --exclude-users=bjensen --exclude-users=mreynolds ldap://ldap.example.com:389", "ipa migrate-ds --user-ignore-attribute=userCertificate --user-ignore-objectclass=strongAuthenticationUser --group-ignore-objectclass=groupOfCertificates ldap://ldap.example.com:389", "ipa migrate-ds --schema=RFC2307 ldap://ldap.example.com:389", "ipa user-add TEST_USER", "ipa user-show --all TEST_USER", "ipa-compat-manage disable", "systemctl restart dirsrv.target", "ipa config-mod --enable-migration=TRUE", "ipa migrate-ds ldap://ldap.example.com:389", "ipa-compat-manage enable", "systemctl restart dirsrv.target", "ipa config-mod --enable-migration=FALSE", "ipa-client-install --enable-dns-update", "https:// ipaserver.example.com /ipa/migration", "[user@server ~]USD ldapsearch -LL -x -D 'cn=Directory Manager' -w secret -b 'cn=users,cn=accounts,dc=example,dc=com' '(&(!(krbprincipalkey=*))(userpassword=*))' uid", "ipa migrate-ds --ca-cert-file= /etc/ipa/remote.crt ldaps:// ldap.example.com :636", "KRB5_TRACE=/dev/stdout ipa cert-find", "systemctl restart httpd.service", "KRB5_TRACE=/dev/stdout kinit admin", "host client_fully_qualified_domain_name", "host server_fully_qualified_domain_name", "host server_IP_address", "server.example.com.", "systemctl status krb5kdc # systemctl status dirsrv.target", "ipactl status Directory Service: RUNNING krb5kdc Service: RUNNING kadmin Service: RUNNING named Service: RUNNING httpd Service: RUNNING ipa-custodia Service: RUNNING ntpd Service: RUNNING pki-tomcatd Service: RUNNING ipa-otpd Service: RUNNING ipa-dnskeysyncd Service: RUNNING ipa: INFO: The ipactl command was successful", "dig -t TXT _kerberos. ipa.example.com USD dig -t SRV _kerberos._udp. ipa.example.com USD dig -t SRV _kerberos._tcp. ipa.example.com", "; <<>> DiG 9.11.0-P2-RedHat-9.11.0-6.P2.fc25 <<>> -t SRV _kerberos._tcp.ipa.server.example ;; global options: +cmd ;; connection timed out; no servers could be reached", "systemctl status httpd.service # systemctl status dirsrv@ IPA-EXAMPLE-COM .service", "systemctl restart httpd", "klist -kt /etc/dirsrv/ds.keytab Keytab name: FILE:/etc/dirsrv/ds.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 01/10/2017 14:54:39 ldap/[email protected] 2 01/10/2017 14:54:39 ldap/[email protected] [... output truncated ...]", "kinit admin USD kvno ldap/ [email protected]", "getcert list Number of certificates and requests being tracked: 8. [... output truncated ...] 
Request ID '20170421124617': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/dirsrv/slapd-IPA-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/dirsrv/slapd-IPA-EXAMPLE-COM/pwdfile.txt' certificate: type=NSSDB,location='/etc/dirsrv/slapd-IPA-EXAMPLE-COM',nickname='Server-Cert',token='NSS Certificate DB' CA: IPA issuer: CN=Certificate Authority,O=IPA.EXAMPLE.COM subject: CN=ipa.example.com,O=IPA.EXAMPLE.COM expires: 2019-04-22 12:46:17 UTC [... output truncated ...] Request ID '20170421130535': status: MONITORING stuck: no key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt' certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB' CA: IPA issuer: CN=Certificate Authority,O=IPA.EXAMPLE.COM subject: CN=ipa.example.com,O=IPA.EXAMPLE.COM expires: 2019-04-22 13:05:35 UTC [... output truncated ...]", "dig _ldap._tcp.ipa.example.com. SRV ; <<>> DiG 9.9.4-RedHat-9.9.4-48.el7 <<>> _ldap._tcp.ipa.example.com. SRV ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17851 ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 1, ADDITIONAL: 5 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ;; QUESTION SECTION: ;_ldap._tcp.ipa.example.com. IN SRV ;; ANSWER SECTION: _ldap._tcp.ipa.example.com. 86400 IN SRV 0 100 389 ipaserver.ipa.example.com. ;; AUTHORITY SECTION: ipa.example.com. 86400 IN NS ipaserver.ipa.example.com. ;; ADDITIONAL SECTION: ipaserver.ipa.example.com. 86400 IN A 192.0.21 ipaserver.ipa.example.com 86400 IN AAAA 2001:db8::1", "host server.ipa.example.com server.ipa.example.com. 86400 IN A 192.0.21 server.ipa.example.com 86400 IN AAAA 2001:db8::1", "ipa dnszone-show zone_name USD ipa dnsrecord-show zone_name record_name_in_the_zone", "systemctl restart named-pkcs11", "ipa dns-update-system-records --dry-run", "dig +short server2.example.com A dig +short server2.example.com AAAA dig +short -x server2_IPv4_or_IPv6_address", "dig +short server1.example.com A dig +short server1.example.com AAAA dig +short -x server1_IPv4_or_IPv6_address", "kinit -kt /etc/dirsrv/ds.keytab ldap/ server1.example.com klist ldapsearch -Y GSSAPI -h server1.example.com -b \"\" -s base ldapsearch -Y GSSAPI -h server2_FQDN . -b \"\" -s base", "ipa : CRITICAL failed to configure ca instance Command '/usr/sbin/pkispawn -s CA -f /tmp/ configuration_file ' returned non-zero exit status 1 Configuration of CA failed", "env|grep proxy http_proxy=http://example.com:8080 ftp_proxy=http://example.com:8080 https_proxy=http://example.com:8080", "for i in ftp http https; do unset USD{i}_proxy; done", "pkidestroy -s CA -i pki-tomcat; rm -rf /var/log/pki/pki-tomcat /etc/sysconfig/pki-tomcat /etc/sysconfig/pki/tomcat/pki-tomcat /var/lib/pki/pki-tomcat /etc/pki/pki-tomcat /root/ipa.csr", "ipa-server-install --uninstall", "ipaserver named[6886]: failed to dynamically load driver 'ldap.so': libldap-2.4.so.2: cannot open shared object file: No such file or directory", "yum remove bind-chroot", "ipactl restart", "CRITICAL Failed to restart the directory server Command '/bin/systemctl restart [email protected]' returned non-zero exit status 1", "slapd_ldap_sasl_interactive_bind - Error: could not perform interactive bind for id [] mech [GSSAPI]: error -2 (Local error) (SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. 
Minor code may provide more information (Credentials cache file '/tmp/krb5cc_496' not found))", "set_krb5_creds - Could not get initial credentials for principal [ldap/ replica1.example.com] in keytab [WRFILE:/etc/dirsrv/ds.keytab]: -1765328324 (Generic error)", "Replication bind with GSSAPI auth resumed", "ipa: DEBUG: approved_usage = SSLServer intended_usage = SSLServer ipa: DEBUG: cert valid True for \"CN=replica.example.com,O=EXAMPLE.COM\" ipa: DEBUG: handshake complete, peer = 192.0.2.2:9444 Certificate operation cannot be completed: Unable to communicate with CMS (Not Found) ipa: DEBUG: Created connection context.ldap2_21534032 ipa: DEBUG: Destroyed connection context.ldap2_21534032 The DNS forward record replica.example.com. does not match the reverse address replica.example.org", "Certificate operation cannot be completed: EXCEPTION (Certificate serial number 0x2d not found)", "ipa-replica-manage list-ruv server1.example.com:389: 6 server2.example.com:389: 5 server3.example.com:389: 4 server4.example.com:389: 12", "ipa-replica-manage clean-ruv 6 ipa-replica-manage clean-ruv 5 ipa-replica-manage clean-ruv 4 ipa-replica-manage clean-ruv 12", "dn: cn=clean replica_ID , cn=cleanallruv, cn=tasks, cn=config objectclass: extensibleObject replica-base-dn: dc= example ,dc= com replica-id: replica_ID replica-force-cleaning: no cn: clean replica_ID", "ldapsearch -p 389 -h IdM_node -D \"cn=directory manager\" -W -b \"cn=config\" \"(objectclass=nsds5replica)\" nsDS5ReplicaId", "Jun 30 11:11:48 server1 krb5kdc[1279](info): AS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: NEEDED_PREAUTH: admin EXAMPLE COM for krbtgt/EXAMPLE COM EXAMPLE COM, Additional pre-authentication required Jun 30 11:11:48 server1 krb5kdc[1279](info): AS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: ISSUE: authtime 1309425108, etypes {rep=18 tkt=18 ses=18}, admin EXAMPLE COM for krbtgt/EXAMPLE COM EXAMPLE COM Jun 30 11:11:49 server1 krb5kdc[1279](info): TGS_REQ (4 etypes {18 17 16 23}) 192.0.2.1: UNKNOWN_SERVER: authtime 0, admin EXAMPLE COM for HTTP/[email protected], Server not found in Kerberos database", "debug_level = 9", "systemctl start sssd", "ipa: ERROR: Kerberos error: ('Unspecified GSS failure. Minor code may provide more information', 851968)/('Decrypt integrity check failed', -1765328353)", "Wed Jun 14 18:24:03 2017) [sssd[pam]] [child_handler_setup] (0x2000): Setting up signal handler up for pid [12370] (Wed Jun 14 18:24:03 2017) [sssd[pam]] [child_handler_setup] (0x2000): Signal handler set up for pid [12370] (Wed Jun 14 18:24:08 2017) [sssd[pam]] [pam_initgr_cache_remove] (0x2000): [idmeng] removed from PAM initgroup cache (Wed Jun 14 18:24:13 2017) [sssd[pam]] [p11_child_timeout] (0x0020): Timeout reached for p11_child. (Wed Jun 14 18:24:13 2017) [sssd[pam]] [pam_forwarder_cert_cb] (0x0040): get_cert request failed. 
(Wed Jun 14 18:24:13 2017) [sssd[pam]] [pam_reply] (0x0200): pam_reply called with result [4]: System error.", "certificate_verification = ocsp_default_responder= http://ocsp.proxy.url , ocsp_default_responder_signing_cert= nickname", "systemctl restart sssd.service", "ipa: ERROR: Insufficient access: Insufficient 'add' privilege to add the entry 'cn=testvault,cn=user,cn=users,cn=vaults,cn=kra,dc=example,dc=com'.", "kinit admin", "ipa vaultcontainer-add-owner --user= user --users= user Owner users: admin, user Vault user: user ------------------------ Number of owners added 1 ------------------------", "kinit user ipa vault-add testvault2 ------------------------ Added vault \"testvault2\" ------------------------", "/var/log/httpd/*log { missingok notifempty sharedscripts delaycompress postrotate /sbin/service httpd reload > /dev/null 2>/dev/null || true endscript }", "ipa-replica-prepare replica.example.com --ip-address 192.0.2.2 Directory Manager (existing master) password: Do you want to configure the reverse zone? [yes]: no Preparing replica for replica.example.com from server.example.com Creating SSL certificate for the Directory Server Creating SSL certificate for the dogtag Directory Server Saving dogtag Directory Server port Creating SSL certificate for the Web Server Exporting RA certificate Copying additional files Finalizing configuration Packaging replica information into /var/lib/ipa/replica-info-replica.example.com.gpg Adding DNS records for replica.example.com Waiting for replica.example.com. A or AAAA record to be resolvable This can be safely interrupted (Ctrl+C) The ipa-replica-prepare command was successful", "yum install ipa-server", "scp /var/lib/ipa/replica-info-replica.example.com.gpg root@ replica :/var/lib/ipa/", "ipa-replica-install /var/lib/ipa/replica-info-replica.example.com.gpg Directory Manager (existing master) password: Run connection check to master Check connection from replica to remote master 'server.example.com': Connection from replica to master is OK. Start listening on required ports for remote master check Get credentials to log in to remote master [email protected] password: Check SSH connection to remote master Connection from master to replica is OK. Configuring NTP daemon (ntpd) [1/4]: stopping ntpd [2/4]: writing configuration Restarting Directory server to apply updates [1/2]: stopping directory server [2/2]: starting directory server Done. 
Restarting the directory server Restarting the KDC Restarting the web server", "ipa-replica-install /var/lib/ipa/ replica-info-replica.example.com.gpg --setup-dns --forwarder 198.51.100.0", "ipa-replica-install /var/lib/ipa/ replica-info-replica.example.com.gpg --setup-ca", "ipa-replica-prepare replica.example.com --dirsrv-cert-file /tmp/server.key --dirsrv-pin secret --http-cert-file /tmp/server.crt --http-cert-file /tmp/server.key --http-pin secret --dirsrv-cert-file /tmp/server.crt", "ipa-replica-manage list server1.example.com : master server2.example.com: master server3.example.com: master server4.example.com: master", "ipa-replica-manage list server1.example.com server2.example.com: replica server3.example.com: replica", "ipa-replica-manage connect server1.example.com server2.example.com", "ipa-replica-manage disconnect server1.example.com server4.example.com", "ipa-replica-manage del server2.example.com", "ipa-replica-manage force-sync --from server1.example.com", "ipa-replica-manage re-initialize --from server1.example.com", "ipa-replica-manage list server1.example.com: master server2.example.com: master server3.example.com: master server4.example.com: master", "ipa-replica-manage del server3.example.com", "ipa-csreplica-manage del server3.example.com", "ipa-server-install --uninstall -U", "ipa config-show | grep \"CA renewal master\" IPA CA renewal master: server.example.com", "ldapsearch -H ldap://USDHOSTNAME -D 'cn=Directory Manager' -W -b 'cn=masters,cn=ipa,cn=etc,dc=example,dc=com' '(&(cn=CA)(ipaConfigString=caRenewalMaster))' dn CA, server.example.com, masters, ipa, etc, example.com dn: cn=CA,cn= server.example.com ,cn=masters,cn=ipa,cn=etc,dc=example,dc=com", "ipa config-mod --ca-renewal-master-server new_server.example.com", "ipa-csreplica-manage set-renewal-master" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html-single/linux_domain_identity_authentication_and_policy_guide/index
Chapter 79. Kubernetes Pods
Chapter 79. Kubernetes Pods Since Camel 2.17 Both producer and consumer are supported The Kubernetes Pods component is one of the Kubernetes Components which provides a producer to execute Kubernetes Pods operations and a consumer to consume events related to Pod Objects. 79.1. Dependencies When using kubernetes-pods with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 79.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 79.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 79.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 79.3. Component Options The Kubernetes Pods component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 79.4. Endpoint Options The Kubernetes Pods endpoint is configured using URI syntax: with the following path and query parameters: 79.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 79.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 79.5. Message Headers The Kubernetes Pods component supports 7 message headers, which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesPodsLabels (producer) Constant: KUBERNETES_PODS_LABELS The pod labels. Map CamelKubernetesPodName (producer) Constant: KUBERNETES_POD_NAME The pod name. String CamelKubernetesPodSpec (producer) Constant: KUBERNETES_POD_SPEC The spec for a pod. PodSpec CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 79.6. Supported producer operations listPods listPodsByLabels getPod createPod updatePod deletePod 79.7. Kubernetes Pods Producer Examples listPods: this operation lists the pods on a Kubernetes cluster. from("direct:list"). toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPods"). to("mock:result"); This operation returns a List of Pods from your cluster. listPodsByLabels: this operation lists the pods by labels on a Kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PODS_LABELS, labels); } }). toF("kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPodsByLabels"). to("mock:result"); This operation returns a List of Pods from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 79.8. Kubernetes Pods Consumer Example fromF("kubernetes-pods://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernetesProcessor()).to("mock:result"); public class KubernetesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Pod pod = exchange.getIn().getBody(Pod.class); log.info("Got event with pod name: " + pod.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events on the namespace default for the pod test. 79.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below.
Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. 
Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
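As an illustrative sketch (not taken from the reference tables above), these options are normally set in a Spring Boot application.properties file; the values and the client bean name used here are placeholders, and the # prefix assumes Camel's registry-lookup syntax for bean references:

camel.component.kubernetes-secrets.lazy-start-producer=true
camel.component.kubernetes-services.bridge-error-handler=true
camel.component.openshift-builds.kubernetes-client=#myKubernetesClient

Equivalent keys exist for every component listed above, so the same pattern applies to, for example, camel.component.openshift-deploymentconfigs.autowired-enabled=false.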
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-pods:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPods\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PODS_LABELS, labels); } }); toF(\"kubernetes-pods:///?kubernetesClient=#kubernetesClient&operation=listPodsByLabels\"). to(\"mock:result\");", "fromF(\"kubernetes-pods://%s?oauthToken=%s&namespace=default&resourceName=test\", host, authToken).process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Pod pod = exchange.getIn().getBody(Pod.class); log.info(\"Got event with configmap name: \" + pod.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-pods-component-starter
3.3. Basic SystemTap Handler Constructs
3.3. Basic SystemTap Handler Constructs SystemTap supports the use of several basic constructs in handlers. The syntax for most of these handler constructs is based largely on C and awk. This section describes several of the most useful SystemTap handler constructs, which should provide you with enough information to write simple yet useful SystemTap scripts. 3.3.1. Variables Variables can be used freely throughout a handler; simply choose a name, assign a value from a function or expression to it, and use it in an expression. SystemTap automatically identifies whether a variable should be typed as a string or integer, based on the type of the values assigned to it. For instance, if you set the variable var to gettimeofday_s() (as in var = gettimeofday_s() ), then var is typed as a number and can be printed in a printf() with the integer format specifier ( %d ). Note, however, that by default variables are only local to the probe they are used in. This means that variables are initialized, used, and disposed of at each probe handler invocation. To share a variable between probes, declare the variable name using global outside of the probes. Consider the following example: Example 3.8. timer-jiffies.stp global count_jiffies, count_ms probe timer.jiffies(100) { count_jiffies ++ } probe timer.ms(100) { count_ms ++ } probe timer.ms(12345) { hz=(1000*count_jiffies) / count_ms printf ("jiffies:ms ratio %d:%d => CONFIG_HZ=%d\n", count_jiffies, count_ms, hz) exit () } Example 3.8, "timer-jiffies.stp" computes the CONFIG_HZ setting of the kernel using timers that count jiffies and milliseconds, and then derives the setting from their ratio. The global statement allows the variables count_jiffies and count_ms (set in their own respective probes) to be shared with probe timer.ms(12345) . Note The ++ notation in Example 3.8, "timer-jiffies.stp" ( count_jiffies ++ and count_ms ++ ) is used to increment the value of a variable by 1. In the following probe, count_jiffies is incremented by 1 every 100 jiffies: probe timer.jiffies(100) { count_jiffies ++ } In this instance, SystemTap understands that count_jiffies is an integer. Because no initial value was assigned to count_jiffies , its initial value is zero by default. 3.3.2. Conditional Statements In some cases, the output of a SystemTap script may be too large. To address this, refine the script's logic to limit the output to something more relevant or useful to your probe. You can do this by using conditionals in handlers. SystemTap accepts the following types of conditional statements: If/Else Statements Format: if ( condition ) statement1 else statement2 statement1 is executed if the condition expression is non-zero; statement2 is executed if the condition expression is zero. The else clause ( else statement2 ) is optional. Both statement1 and statement2 can be statement blocks. Example 3.9. ifelse.stp global countread, countnonread probe kernel.function("vfs_read"),kernel.function("vfs_write") { if (probefunc()=="vfs_read") countread ++ else countnonread ++ } probe timer.s(5) { exit() } probe end { printf("VFS reads total %d\n VFS writes total %d\n", countread, countnonread) } Example 3.9, "ifelse.stp" is a script that counts how many virtual file system reads ( vfs_read ) and writes ( vfs_write ) the system performs within a 5-second span. 
When run, the script increments the value of the variable countread by 1 if the name of the function it probed matches vfs_read (as noted by the condition if (probefunc()=="vfs_read") ); otherwise, it increments countnonread ( else {countnonread ++} ). While Loops Format: while ( condition ) statement As long as condition is non-zero, statement is executed. The statement is often a statement block, and it must change a value so that condition eventually becomes zero. For Loops Format: for ( initialization ; conditional ; increment ) statement The for loop is simply shorthand for a while loop. The following is the equivalent while loop: initialization while ( conditional ) { statement increment } Conditional Operators Aside from == (is equal to), you can also use the following operators in your conditional statements: >= Greater than or equal to <= Less than or equal to != Is not equal to 3.3.3. Command-Line Arguments You can also allow a SystemTap script to accept simple command-line arguments using a $ or @ immediately followed by the number of the argument on the command line. Use $ if you are expecting the user to enter an integer as a command-line argument, and @ if you are expecting a string. Example 3.10. commandlineargs.stp probe kernel.function(@1) { } probe kernel.function(@1).return { } Example 3.10, "commandlineargs.stp" is similar to Example 3.1, "wildcards.stp" , except that it allows you to pass the kernel function to be probed as a command-line argument (as in stap commandlineargs.stp kernel function ). You can also specify that the script accept multiple command-line arguments, noting them as @1 , @2 , and so on, in the order they are entered by the user.
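The guide does not include a standalone example for the loop constructs and conditional operators above, so the following is an illustrative sketch only (the probe point and values are arbitrary): it combines a for loop with the != operator inside a begin probe.

probe begin {
  total = 0
  # sum the integers 1 through 10, skipping 5
  for (i = 1; i <= 10; i++) {
    if (i != 5)
      total += i
  }
  printf("total: %d\n", total)
  exit()
}

Running it with stap prints the computed total and exits immediately, which makes it a convenient way to experiment with handler constructs without probing the kernel.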
[ "global count_jiffies, count_ms probe timer.jiffies(100) { count_jiffies ++ } probe timer.ms(100) { count_ms ++ } probe timer.ms(12345) { hz=(1000*count_jiffies) / count_ms printf (\"jiffies:ms ratio %d:%d => CONFIG_HZ=%d\\n\", count_jiffies, count_ms, hz) exit () }", "probe timer.jiffies(100) { count_jiffies ++ }", "if ( condition ) statement1 else statement2", "global countread, countnonread probe kernel.function(\"vfs_read\"),kernel.function(\"vfs_write\") { if (probefunc()==\"vfs_read\") countread ++ else countnonread ++ } probe timer.s(5) { exit() } probe end { printf(\"VFS reads total %d\\n VFS writes total %d\\n\", countread, countnonread) }", "while ( condition ) statement", "for ( initialization ; conditional ; increment ) statement", "initialization while ( conditional ) { statement increment }", "probe kernel.function(@1) { } probe kernel.function(@1).return { }" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_beginners_guide/scriptconstructions
Providing feedback on Red Hat build of Quarkus documentation
Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/getting_started_with_security/proc_providing-feedback-on-red-hat-documentation_getting-started-with-security
19.7. Managing Snapshots
19.7. Managing Snapshots Using virt-manager , it is possible to create, run, and delete guest snapshots . A snapshot is a saved image of the guest's hard disk, memory, and device state at a single point in time. After a snapshot is created, the guest can be returned to the snapshot's configuration at any time. Important Red Hat recommends the use of external snapshots, as they are more flexible and reliable when handled by other virtualization tools. However, it is currently not possible to create external snapshots in virt-manager . To create external snapshots, use the virsh snapshot-create-as command with the --diskspec vda,snapshot=external option. For more information, see Section A.13, "Workaround for Creating External Snapshots with libvirt" . To manage snapshots in virt-manager , open the snapshot management interface by clicking the snapshot management button in the guest console toolbar. To create a new snapshot, click the add button under the snapshot list. In the snapshot creation interface, enter the name of the snapshot and, optionally, a description, and click Finish . To revert the guest to a snapshot's configuration, select the snapshot and click the run button. To remove the selected snapshot, click the delete button. Warning Creating and loading snapshots while the virtual machine is running (also referred to as live snapshots ) is only supported with qcow2 disk images. For more in-depth snapshot management, use the virsh snapshot-create command. See Section 20.39, "Managing Snapshots" for details about managing snapshots with virsh .
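As a sketch of the external-snapshot workaround mentioned above (the guest name guest1 and snapshot name snap1 are placeholders, and the extra flags shown are optional choices rather than requirements):

virsh snapshot-create-as guest1 snap1 --diskspec vda,snapshot=external --disk-only --atomic
virsh snapshot-list guest1

The second command lists the snapshots of the guest so that you can confirm the new external snapshot was recorded.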
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guests_with_the_virtual_machine_manager_virt_manager-managing_snapshots
Chapter 4. Updating Red Hat OpenShift Data Foundation 4.9.x to 4.9.y
Chapter 4. Updating Red Hat OpenShift Data Foundation 4.9.x to 4.9.y This chapter helps you to upgrade between z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-attached, and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Container Storage upgrades all OpenShift Container Storage services, including the backend Ceph Storage cluster. For External mode deployments, upgrading OpenShift Container Storage only upgrades the OpenShift Container Storage service, while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, we recommend upgrading RHCS along with OpenShift Container Storage in order to get new feature support, security fixes, and other bug fixes. Because there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first and RHCS afterwards, or vice versa. See the solution to learn more about Red Hat Ceph Storage releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy was set to Automatic . If the update strategy is set to Manual , use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.9.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage OpenShift Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select the openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows requires approval , click the requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . Verification steps Verify that the Version below the OpenShift Data Foundation name shows the latest version and that the operator status is Succeeded. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick. Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage OpenShift Data Foundation Storage Systems tab and then click on the storage system name. 
Check for the green tick on the status card of Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. Important If the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . If verification steps fail, contact Red Hat Support .
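In addition to the web console checks above, a quick command-line verification is possible; this is an optional sketch rather than part of the documented procedure:

oc get pods -n openshift-storage
oc get csv -n openshift-storage

The first command confirms that all pods in the openshift-storage namespace are Running, and the second lists the installed ClusterServiceVersion so you can confirm the OpenShift Data Foundation operator version and that its phase is Succeeded.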
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/upgrading_to_openshift_data_foundation/updating-zstream-odf_rhodf
3.7. Supported Image Customizations
3.7. Supported Image Customizations A number of image customizations are supported in blueprints. To make use of these options, you need to configure them in the blueprint first and then use the push command to import the modified blueprint into Image Builder. Note These customizations are not currently supported in the accompanying cockpit-composer GUI. Set the image host name User specifications for the resulting system image Only the user name is required; you can leave out any other lines. Replace PASSWORD-HASH with the actual password hash. To generate the hash, use a command such as: Important To generate the hash, you must have the python3 package on your system. Use the following command to install the package: Replace PUBLIC-SSH-KEY with the actual public key. Repeat this block for every user you want to include. Group specifications for the resulting system image Repeat this block for every group you want to include. Set an existing user's SSH key Note This option is only applicable for existing users. To create a user and set an SSH key, use the User specifications for the resulting system image customization. Append a kernel boot parameter option to the defaults Set the image host name Add a group for the resulting system image Only the name is required; the GID is optional. Set the timezone and the Network Time Protocol (NTP) servers for the resulting system image If you do not set a timezone, the system uses Coordinated Universal Time (UTC) by default. Setting NTP servers is optional. Set the locale settings for the resulting system image Setting both the language and keyboard options is mandatory. You can add multiple languages. The first language you add will be the primary language and the other languages will be secondary. Set the firewall for the resulting system image You can use numeric ports, or their names from the /etc/services file, in the enable and disable lists. Set which services to enable during boot time You can control which services to enable during boot time. Some image types already have services enabled or disabled so that the image works correctly, and this setup cannot be overridden.
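To illustrate how several of these customizations fit together, the following is a minimal sketch of the customizations section of a blueprint, followed by the push command. The blueprint file name, user name, and all values shown are placeholders; the ntpservers list syntax is assumed rather than taken from this section, and a complete blueprint also contains its name, description, and version metadata at the top.

[customizations]
hostname = "baseimage"

[[customizations.user]]
name = "admin"
key = "PUBLIC-SSH-KEY"
groups = ["wheel"]

[customizations.timezone]
timezone = "Europe/Prague"
ntpservers = ["0.pool.ntp.org"]

composer-cli blueprints push blueprint-name.toml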
[ "[customizations] hostname = \" baseimage \"", "[[customizations.user]] name = \" USER-NAME \" description = \" USER-DESCRIPTION \" password = \" PASSWORD-HASH \" key = \" PUBLIC-SSH-KEY \" home = /home\" /USER-NAME/ \" shell = \" /usr/bin/bash \" groups = [\"users\", \"wheel\"] uid = NUMBER gid = NUMBER", "python3 -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass(\"Confirm: \")) else exit())'", "yum install python3", "[[customizations.group]] name = \" GROUP-NAME \" gid = NUMBER", "[[customizations.sshkey]] user = \" root \" key = \" PUBLIC-SSH-KEY \"", "[[customizations.kernel]] append = \" KERNEL-OPTION \"", "[customizations] hostname = \" BASE-IMAGE \"", "[[customizations.group]] name = \" USER-NAME \" gid = NUMBER", "[customizations.timezone] timezone = \" TIMEZONE \" ntpservers = NTP-SERVER", "[customizations.locale] language = \" [LANGUAGE] \" keyboard = \" KEYBOARD \"", "[customizations.firewall] port = \" [PORTS] \"", "[customizations.services] enabled = \" [SERVICES] \" disabled = \" [SERVICES] \"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-test_chapter3-test_section_7
Chapter 9. Known issues in Red Hat Process Automation Manager 7.13.1
Chapter 9. Known issues in Red Hat Process Automation Manager 7.13.1 This section lists known issues with Red Hat Process Automation Manager 7.13.1. 9.1. Business Central Unable to deploy Business Central using JDK version 11.0.16 [ RHPAM-4497 ] Issue: It is not possible to deploy Business Central if your installation uses JDK version 11.0.16. Actual result: Business Central does not deploy when launched. Expected result: Business Central deploys successfully. Workaround: Use a JDK version such as 11.0.5 or earlier. 9.2. Form Modeler Date type process variable is empty when the process is started using Business Central form with the showTime set to false [ RHPAM-4514 ] Issue: When you use the default form rendering in Business Central and the process variable field has showTime=false , the started process instance shows that the variable is empty. The affected types are java.time.LocalDateTime , java.time.LocalDate , java.time.LocalTime , and java.util.Date . Steps to reproduce: Define the process variable with a specific type. Generate a form. Open a form and set showTime=false for a specified field. Deploy the project. Open the process form. Specify the value in the process form. Check the process instance variables. The value for the specified variable is empty. Workaround: None. Form in KIE Server with a java.util.Date field does not allow the time to be inserted [ RHPAM-4513 ] Issue: When a process has a variable of type java.util.Date , the generated form, if the showTime attribute is true , does not allow inserting the time part. Then after submitting the Date variable shows all zeros in the time part of the datatype. Workaround: None. 9.3. Process Designer BPMN2 files in XML editor have a Properties panel that contains a data from other processes [ RHPAM-4468 ] Issue: If two processes are open, where one process is open in the XML editor (a legacy process with the BPMN2 extension) and one process is open in the new process designer, the properties in the Properties panel, as well as the diagram in the Explore Diagram window from the new process designer, are shown in the XML editor of the other process. The XML editor should not have any Properties or Diagram panel. Steps to reproduce: Open any new process designer process. Do not close the process. Open the legacy process, for example legacy.bpmn2 , in the XML editor. Open the Properties panel. Actual result: The Properties and Explore diagram from a new process designer process are shown in the XML editor panel. Expected result: No Properties or Explore diagram panels are present in the XML editor. Workaround: None. A custom data object in multiple variables causes an error in a case project [ RHPAM-4422 ] Issue: The custom data object in the multiple variables causes an error in a case project. You receive a UI exception with the following error: Steps to reproduce: Create a case definition in a case project. Create a custom data object in the same project. Add a procVar process variable and caseVar case file variable with the same CustomDataObject type. Save the changes. Create a multiple instance node or a Data Object on the canvas. In the multiple instance node, set MI Collection input/output and try to change the Data Input/Output type. In the Data Object on the canvas, try to change the data type. Actual result: On a Chrome browser: It is not possible to set the type with the first click. The custom type is chosen. On a Firefox browser: An unexpected error occurs. Expected result: It is possible to set the type correctly. 
No errors occur. Workaround: None. 9.4. Red Hat OpenShift Container Platform PostgreSQL 13 Pod won't start because of an incompatible data directory [ RHPAM-4464 ] Issue: When you start a PostgreSQL pod after you upgrade the operator, the pod fails to start and you receive the following message: Incompatible data directory. This container image provides PostgreSQL '13', but data directory is of version '10'. This image supports automatic data directory upgrade from '12', please carefully consult image documentation about how to use the '$POSTGRESQL_UPGRADE' startup option. Workaround: Check the version of PostgreSQL: If the PostgreSQL version returned is 12.x or earlier, upgrade PostgreSQL: Red Hat Process Automation Manager version PostgreSQL version Upgrade instructions 7.13.1 7.10 Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 12.x. 7.13.2 7.10 1. Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 12.x. 2. Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 13.x. 7.13.2 7.12 Follow the instructions in Upgrading database (by switching to newer PostgreSQL image version) to upgrade to PostgreSQL 13.x. Verify that PostgreSQL has been upgraded to your required version:
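For example, a sketch of running that version check against the running database pod from a terminal (the pod name is a placeholder):

oc exec <postgresql-pod> -- postgres -V

If you prefer an interactive session, oc rsh <postgresql-pod> opens a shell in the pod, where you can run postgres -V directly.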
[ "Uncaught exception: Exception caught: Duplicate value: CustomDataObject [com.myspace.caseproject] Caused by: Duplicate value: CustomDataObject [com.myspace.caseproject]", "postgres -V", "postgres -V" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/release_notes_for_red_hat_process_automation_manager_7.13/rn-7.13.1-known-issues-ref
Chapter 4. Identity and access management
Chapter 4. Identity and access management The Identity service (keystone) provides authentication and authorization for cloud users in a Red Hat OpenStack Platform environment. You can use the Identity service for direct end-user authentication, or configure it to use external authentication methods to meet your security requirements or to match your current authentication infrastructure. 4.1. Red Hat OpenStack Platform fernet tokens Fernet is the default token provider that replaces the UUID token provider. Each fernet token remains valid for up to an hour, by default. This allows a user to perform a series of tasks without needing to reauthenticate. After you authenticate, the Identity service (keystone): Issues an encrypted bearer token known as a fernet token. This token represents your identity. Authorizes you to perform operations based on your role. Additional resources Using Fernet keys for encryption in the overcloud 4.2. OpenStack Identity service entities The Red Hat OpenStack Identity service (keystone) recognizes the following entities: Users OpenStack Identity service (keystone) users are the atomic unit of authentication. A user must be assigned a role on a project in order to authenticate. Groups OpenStack Identity service groups are a logical grouping of users. A group can be provided access to projects under specific roles. Managing groups instead of users can simplify the management of roles. Roles OpenStack Identity service roles define the OpenStack APIs that are accessible to users or groups who are assigned those roles. Projects OpenStack Identity service projects are isolated groups of users who have common access to a shared quota of physical resources and the virtual infrastructure built from those physical resources. Domains OpenStack Identity service domains are high-level security boundaries for projects, users, and groups. You can use OpenStack Identity domains to centrally manage all keystone-based identity components. Red Hat OpenStack Platform supports multiple domains. You can represent users of different domains by using separate authentication backends. 4.3. Authenticating with keystone You can adjust the authentication security requirements enforced by the OpenStack Identity service (keystone). Table 4.1. Identity service authentication parameters Parameter Description KeystoneLockoutDuration The number of seconds a user account is locked when the maximum number of failed authentication attempts (as specified by KeystoneLockoutFailureAttempts ) is exceeded. KeystoneLockoutFailureAttempts The maximum number of times that a user can fail to authenticate before the user account is locked for the number of seconds specified by KeystoneLockoutDuration . KeystoneMinimumPasswordAge The number of days that a password must be used before the user can change it. This prevents users from changing their passwords immediately in order to wipe out their password history and reuse an old password. KeystoneUniqueLastPasswordCount This controls the number of user password iterations to keep in history, in order to enforce that newly created passwords are unique. Additional resources Identity (keystone) parameters. 4.4. Using Identity service heat parameters to stop invalid login attempts Repetitive failed login attempts can be a sign of an attempted brute-force attack. You can use the Identity service to limit access to accounts after repeated unsuccessful login attempts. Prerequisites You have an installed Red Hat OpenStack Platform director environment. 
You are logged into the director as stack. Procedure To configure the maximum number of times that a user can fail to authenticate before the user account is locked, set the value of the KeystoneLockoutFailureAttempts and KeystoneLockoutDuration heat parameters in an environment file. In the following example, the KeystoneLockoutDuration is set to one hour: Include the environment file in your deploy script. When you run your deploy script on a previously deployed environment, it is updated with the additional parameters: 4.5. Authenticating with external identity providers You can use an external identity provider (IdP) to authenticate to OpenStack service providers (SP). SPs are the services provided by an OpenStack cloud. When you use a separate IdP, external authentication credentials are separate from the databases used by other OpenStack services. This separation reduces the risk of a compromise of stored credentials. Each external IdP has a one-to-one mapping to an OpenStack Identity service (keystone) domain. You can have multiple coexisting domains with Red Hat OpenStack Platform. External authentication provides a way to use existing credentials to access resources in Red Hat OpenStack Platform without creating additional identities. The credential is maintained by the user's IdP. You can use IdPs such as Red Hat Identity Management (IdM), and Microsoft Active Directory Domain Services (AD DS) for identity management. In this configuration, the OpenStack Identity service has read-only access to the LDAP user database. The management of API access based on user or group role is performed by keystone. Roles are assigned to the LDAP accounts by using the OpenStack Identity service. 4.5.1. How LDAP integration works In the diagram below, keystone uses an encrypted LDAPS connection to connect to an Active Directory Domain Controller. When a user logs in to horizon, keystone receives the supplied user credentials and passes them to Active Directory. Additional resources Integrating OpenStack Identity (keystone) with Active Directory Integrating OpenStack Identity (keystone) with Red Hat Identity Manager (IdM) Configuring director to use domain specific LDAP backends
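As an illustrative extension of the example above (the values shown are arbitrary placeholders, not recommendations), the same environment file can also carry the other Identity service parameters listed in Table 4.1:

parameter_defaults:
  KeystoneLockoutDuration: 3600
  KeystoneLockoutFailureAttempts: 3
  KeystoneMinimumPasswordAge: 1
  KeystoneUniqueLastPasswordCount: 5

Passing the file with -e, as in openstack overcloud deploy --templates -e keystone_config.yaml, applies all of these settings in one deployment run.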
[ "parameter_defaults KeystoneLockoutDuration: 3600 KeystoneLockoutFailureAttempts: 3", "openstack overcloud deploy --templates -e keystone_config.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly_identity-and-access-management_security_and_hardening
Chapter 72. sfc
Chapter 72. sfc This chapter describes the commands under the sfc command. 72.1. sfc flow classifier create Create a flow classifier Usage: Table 72.1. Positional arguments Value Summary <name> Name of the flow classifier Table 72.2. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description for the flow classifier --protocol <protocol> Ip protocol name. protocol name should be as per iana standard. --ethertype {IPv4,IPv6} L2 ethertype, default is ipv4 --source-port <min-port>:<max-port> Source protocol port (allowed range [1,65535]. must be specified as a:b, where a=min-port and b=max-port) in the allowed range. --destination-port <min-port>:<max-port> Destination protocol port (allowed range [1,65535]. Must be specified as a:b, where a=min-port and b=max- port) in the allowed range. --source-ip-prefix <source-ip-prefix> Source ip address in cidr notation --destination-ip-prefix <destination-ip-prefix> Destination ip address in cidr notation --logical-source-port <logical-source-port> Neutron source port (name or id) --logical-destination-port <logical-destination-port> Neutron destination port (name or id) --l7-parameters L7_PARAMETERS Dictionary of l7 parameters. currently, no value is supported for this option. Table 72.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 72.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.2. sfc flow classifier delete Delete a given flow classifier Usage: Table 72.7. Positional arguments Value Summary <flow-classifier> Flow classifier to delete (name or id) Table 72.8. Command arguments Value Summary -h, --help Show this help message and exit 72.3. sfc flow classifier list List flow classifiers Usage: Table 72.9. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 72.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 72.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.4. sfc flow classifier set Set flow classifier properties Usage: Table 72.14. Positional arguments Value Summary <flow-classifier> Flow classifier to modify (name or id) Table 72.15. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the flow classifier --description <description> Description for the flow classifier 72.5. sfc flow classifier show Display flow classifier details Usage: Table 72.16. Positional arguments Value Summary <flow-classifier> Flow classifier to display (name or id) Table 72.17. Command arguments Value Summary -h, --help Show this help message and exit Table 72.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 72.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.6. sfc port chain create Create a port chain Usage: Table 72.22. Positional arguments Value Summary <name> Name of the port chain Table 72.23. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description for the port chain --flow-classifier <flow-classifier> Add flow classifier (name or id). this option can be repeated. --chain-parameters correlation=<correlation-type>,symmetric=<boolean> Dictionary of chain parameters. supports correlation=(mpls|nsh) (default is mpls) and symmetric=(true|false). --port-pair-group <port-pair-group> Add port pair group (name or id). this option can be repeated. Table 72.24. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 72.25. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.26. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.7. sfc port chain delete Delete a given port chain Usage: Table 72.28. Positional arguments Value Summary <port-chain> Port chain to delete (name or id) Table 72.29. Command arguments Value Summary -h, --help Show this help message and exit 72.8. sfc port chain list List port chains Usage: Table 72.30. 
Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 72.31. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 72.32. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.33. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.34. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.9. sfc port chain set Set port chain properties Usage: Table 72.35. Positional arguments Value Summary <port-chain> Port chain to modify (name or id) Table 72.36. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the port chain --description <description> Description for the port chain --flow-classifier <flow-classifier> Add flow classifier (name or id). this option can be repeated. --no-flow-classifier Remove associated flow classifiers from the port chain --port-pair-group <port-pair-group> Add port pair group (name or id). current port pair groups order is kept, the added port pair group will be placed at the end of the port chain. This option can be repeated. --no-port-pair-group Remove associated port pair groups from the port chain. At least one --port-pair-group must be specified together. 72.10. sfc port chain show Display port chain details Usage: Table 72.37. Positional arguments Value Summary <port-chain> Port chain to display (name or id) Table 72.38. Command arguments Value Summary -h, --help Show this help message and exit Table 72.39. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 72.40. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.41. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.42. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.11. sfc port chain unset Unset port chain properties Usage: Table 72.43. Positional arguments Value Summary <port-chain> Port chain to unset (name or id) Table 72.44. Command arguments Value Summary -h, --help Show this help message and exit --flow-classifier <flow-classifier> Remove flow classifier(s) from the port chain (name or ID). This option can be repeated. 
--all-flow-classifier Remove all flow classifiers from the port chain --port-pair-group <port-pair-group> Remove port pair group(s) from the port chain (name or ID). This option can be repeated. 72.12. sfc port pair create Create a port pair Usage: Table 72.45. Positional arguments Value Summary <name> Name of the port pair Table 72.46. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description for the port pair --service-function-parameters correlation=<correlation-type>,weight=<weight> Dictionary of service function parameters. currently, correlation=(None|mpls|nsh) and weight are supported. Weight is an integer that influences the selection of a port pair within a port pair group for a flow. The higher the weight, the more flows will hash to the port pair. The default weight is 1. --ingress <ingress> Ingress neutron port (name or id) --egress <egress> Egress neutron port (name or id) Table 72.47. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 72.48. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.49. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.50. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.13. sfc port pair delete Delete a given port pair Usage: Table 72.51. Positional arguments Value Summary <port-pair> Port pair to delete (name or id) Table 72.52. Command arguments Value Summary -h, --help Show this help message and exit 72.14. sfc port pair group create Create a port pair group Usage: Table 72.53. Positional arguments Value Summary <name> Name of the port pair group Table 72.54. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Description for the port pair group --port-pair <port-pair> Port pair (name or id). this option can be repeated. --enable-tap Port pairs of this port pair group are deployed as passive tap service function --disable-tap Port pairs of this port pair group are deployed as l3 service function (default) --port-pair-group-parameters lb-fields=<lb-fields> Dictionary of port pair group parameters. currently only one parameter lb-fields is supported. <lb-fields> is a & separated list of load-balancing fields. Table 72.55. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 72.56. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.57. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.58. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.15. sfc port pair group delete Delete a given port pair group Usage: Table 72.59. Positional arguments Value Summary <port-pair-group> Port pair group to delete (name or id) Table 72.60. Command arguments Value Summary -h, --help Show this help message and exit 72.16. sfc port pair group list List port pair group Usage: Table 72.61. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 72.62. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 72.63. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.64. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.65. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.17. sfc port pair group set Set port pair group properties Usage: Table 72.66. Positional arguments Value Summary <port-pair-group> Port pair group to modify (name or id) Table 72.67. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the port pair group --description <description> Description for the port pair group --port-pair <port-pair> Port pair (name or id). this option can be repeated. --no-port-pair Remove all port pair from port pair group 72.18. sfc port pair group show Display port pair group details Usage: Table 72.68. Positional arguments Value Summary <port-pair-group> Port pair group to display (name or id) Table 72.69. Command arguments Value Summary -h, --help Show this help message and exit Table 72.70. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 72.71. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.72. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.73. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.19. sfc port pair group unset Unset port pairs from port pair group Usage: Table 72.74. Positional arguments Value Summary <port-pair-group> Port pair group to unset (name or id) Table 72.75. 
Command arguments Value Summary -h, --help Show this help message and exit --port-pair <port-pair> Remove port pair(s) from the port pair group (name or ID). This option can be repeated. --all-port-pair Remove all port pairs from the port pair group 72.20. sfc port pair list List port pairs Usage: Table 72.76. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 72.77. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 72.78. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.79. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.80. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.21. sfc port pair set Set port pair properties Usage: Table 72.81. Positional arguments Value Summary <port-pair> Port pair to modify (name or id) Table 72.82. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the port pair --description <description> Description for the port pair 72.22. sfc port pair show Display port pair details Usage: Table 72.83. Positional arguments Value Summary <port-pair> Port pair to display (name or id) Table 72.84. Command arguments Value Summary -h, --help Show this help message and exit Table 72.85. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 72.86. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.87. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.88. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.23. sfc service graph create Create a service graph. Usage: Table 72.89. Positional arguments Value Summary <name> Name of the service graph. Table 72.90. Command arguments Value Summary -h, --help Show this help message and exit --description DESCRIPTION Description for the service graph. --branching-point SRC_CHAIN:DST_CHAIN_1,DST_CHAIN_2,DST_CHAIN_N Service graph branching point: the key is the source Port Chain while the value is a list of destination Port Chains. This option can be repeated. Table 72.91. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 72.92. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.93. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.94. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.24. sfc service graph delete Delete a given service graph. Usage: Table 72.95. Positional arguments Value Summary <service-graph> Id or name of the service graph to delete. Table 72.96. Command arguments Value Summary -h, --help Show this help message and exit 72.25. sfc service graph list List service graphs Usage: Table 72.97. Command arguments Value Summary -h, --help Show this help message and exit --long List additional fields in output Table 72.98. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 72.99. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 72.100. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.101. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 72.26. sfc service graph set Set service graph properties Usage: Table 72.102. Positional arguments Value Summary <service-graph> Service graph to modify (name or id) Table 72.103. Command arguments Value Summary -h, --help Show this help message and exit --name <name> Name of the service graph --description <description> Description for the service graph 72.27. sfc service graph show Show information of a given service graph. Usage: Table 72.104. Positional arguments Value Summary <service-graph> Id or name of the service graph to display. Table 72.105. Command arguments Value Summary -h, --help Show this help message and exit Table 72.106. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 72.107. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 72.108. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 72.109. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. 
you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
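As an end-to-end sketch that ties these commands together (every name, port, and prefix below is a placeholder rather than a value from this reference), a simple chain is typically built by creating the port pairs first, then grouping them, then classifying traffic, and finally creating the chain:

openstack sfc port pair create --ingress sf1-ingress --egress sf1-egress PP1
openstack sfc port pair group create --port-pair PP1 PPG1
openstack sfc flow classifier create --source-ip-prefix 10.0.0.0/24 --logical-source-port source-vm-port FC1
openstack sfc port chain create --port-pair-group PPG1 --flow-classifier FC1 PC1

Each create command here follows the usage shown in the corresponding section above; listing commands such as openstack sfc port chain list can then be used to confirm the result.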
[ "openstack sfc flow classifier create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--protocol <protocol>] [--ethertype {IPv4,IPv6}] [--source-port <min-port>:<max-port>] [--destination-port <min-port>:<max-port>] [--source-ip-prefix <source-ip-prefix>] [--destination-ip-prefix <destination-ip-prefix>] [--logical-source-port <logical-source-port>] [--logical-destination-port <logical-destination-port>] [--l7-parameters L7_PARAMETERS] <name>", "openstack sfc flow classifier delete [-h] <flow-classifier>", "openstack sfc flow classifier list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long]", "openstack sfc flow classifier set [-h] [--name <name>] [--description <description>] <flow-classifier>", "openstack sfc flow classifier show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <flow-classifier>", "openstack sfc port chain create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--flow-classifier <flow-classifier>] [--chain-parameters correlation=<correlation-type>,symmetric=<boolean>] --port-pair-group <port-pair-group> <name>", "openstack sfc port chain delete [-h] <port-chain>", "openstack sfc port chain list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long]", "openstack sfc port chain set [-h] [--name <name>] [--description <description>] [--flow-classifier <flow-classifier>] [--no-flow-classifier] [--port-pair-group <port-pair-group>] [--no-port-pair-group] <port-chain>", "openstack sfc port chain show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <port-chain>", "openstack sfc port chain unset [-h] [--flow-classifier <flow-classifier> | --all-flow-classifier] [--port-pair-group <port-pair-group>] <port-chain>", "openstack sfc port pair create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--service-function-parameters correlation=<correlation-type>,weight=<weight>] --ingress <ingress> --egress <egress> <name>", "openstack sfc port pair delete [-h] <port-pair>", "openstack sfc port pair group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--port-pair <port-pair>] [--enable-tap | --disable-tap] [--port-pair-group-parameters lb-fields=<lb-fields>] <name>", "openstack sfc port pair group delete [-h] <port-pair-group>", "openstack sfc port pair group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long]", "openstack sfc port pair group set [-h] [--name <name>] [--description <description>] [--port-pair <port-pair>] [--no-port-pair] <port-pair-group>", "openstack sfc port pair group show [-h] [-f 
{json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <port-pair-group>", "openstack sfc port pair group unset [-h] [--port-pair <port-pair> | --all-port-pair] <port-pair-group>", "openstack sfc port pair list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long]", "openstack sfc port pair set [-h] [--name <name>] [--description <description>] <port-pair>", "openstack sfc port pair show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <port-pair>", "openstack sfc service graph create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description DESCRIPTION] --branching-point SRC_CHAIN:DST_CHAIN_1,DST_CHAIN_2,DST_CHAIN_N <name>", "openstack sfc service graph delete [-h] <service-graph>", "openstack sfc service graph list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--long]", "openstack sfc service graph set [-h] [--name <name>] [--description <description>] <service-graph>", "openstack sfc service graph show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <service-graph>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/sfc
Chapter 8. LDAP Authentication Tutorial
Chapter 8. LDAP Authentication Tutorial Abstract This tutorial explains how to set up an X.500 directory server and configure the OSGi container to use LDAP authentication. 8.1. Tutorial Overview Goals In this tutorial you will: Install 389 Directory Server Add user entries to the LDAP server Add groups to manage security roles Configure Fuse to use LDAP authentication Configure Fuse to use roles for authorization Configure SSL/TLS connections to the LDAP server 8.2. Set-up a Directory Server and Console This stage of the tutorial explains how to install the X.500 directory server and the management console from the Fedora 389 Directory Server project. If you already have access to a 389 Directory Server instance, you can skip the instructions for installing the 389 Directory Server and install the 389 Management Console instead. Prerequisites If you are installing on a Red Hat Enterprise Linux platform, you must first install the Extra Packages for Enterprise Linux (EPEL) . See the installation notes under RHEL/CentOS/EPEL (RHEL 6, RHEL 7, CentOS 6, CentOS 7) on the fedoraproject.org site. Install 389 Directory Server If you do not have access to an existing 389 Directory Server instance, you can install 389 Directory Server on your local machine, as follows: On Red Hat Enterprise Linux and Fedora platforms, use the standard dnf package management utility to install 389 Directory Server . Enter the following command at a command prompt (you must have administrator privileges on your machine): Note The required 389-ds and 389-console RPM packages are available for Fedora, RHEL6+EPEL, and CentOS7+EPEL platforms. At the time of writing, the 389-console package is not yet available for RHEL 7. After installing the 389 directory server packages, enter the following command to configure the directory server: The script is interactive and prompts you to provide the basic configuration settings for the 389 directory server. When the script is complete, it automatically launches the 389 directory server in the background. For more details about how to install 389 Directory Server , see the Download page. Install 389 Management Console If you already have access to a 389 Directory Server instance, you only need to install the 389 Management Console, which enables you to log in and manage the server remotely. You can install the 389 Management Console, as follows: On Red Hat Enterprise Linux and Fedora platforms, use the standard dnf package management utility to install the 389 Management Console. Enter the following command at a command prompt (you must have administrator privileges on your machine): On Windows platforms, see the Windows Console download instructions from fedoraproject.org . Connect the console to the server To connect the 389 Directory Server Console to the LDAP server: Enter the following command to start up the 389 Management Console: A login dialog appears. Fill in the LDAP login credentials in the User ID and Password fields, and customize the hostname in the Administration URL field to connect to your 389 management server instance (port 9830 is the default port for the 389 management server instance). The 389 Management Console window appears. Select the Servers and Applications tab. In the left-hand pane, drill down to the Directory Server icon. Select the Directory Server icon in the left-hand pane and click Open , to open the 389 Directory Server Console . In the 389 Directory Server Console , click the Directory tab, to view the Directory Information Tree (DIT).
Expand the root node, YourDomain (usually named after a hostname, and shown as localdomain in the following screenshot), to view the DIT. 8.3. Add User Entries to the Directory Server The basic prerequisite for using LDAP authentication with the OSGi container is to have an X.500 directory server running and configured with a collection of user entries. For many use cases, you will also want to configure a number of groups to manage user roles. Alternative to adding user entries If you already have user entries and groups defined in your LDAP server, you might prefer to map the existing LDAP groups to JAAS roles using the roles.mapping property in the LDAPLoginModule configuration, instead of creating new entries. For details, see Section 2.1.7, "JAAS LDAP Login Module" . Goals In this portion of the tutorial you will Add three user entries to the LDAP server Add four groups to the LDAP server Adding user entries Perform the following steps to add user entries to the directory server: Ensure that the LDAP server and console are running. See Section 8.2, "Set-up a Directory Server and Console" . In the Directory Server Console , click on the Directory tab, and drill down to the People node, under the YourDomain node (where YourDomain is shown as localdomain in the following screenshots). Right-click the People node, and select New > User from the context menu, to open the Create New User dialog. Select the User tab in the left-hand pane of the Create New User dialog. Fill in the fields of the User tab, as follows: Set the First Name field to John . Set the Last Name field to Doe . Set the User ID field to jdoe . Enter the password, secret , in the Password field. Enter the password, secret , in the Confirm Password field. Click OK . Add a user Jane Doe by following Step 3 to Step 6 . In Step 5.e , use janedoe for the new user's User ID and use the password, secret , for the password fields. Add a user Camel Rider by following Step 3 to Step 6 . In Step 5.e , use crider for the new user's User ID and use the password, secret , for the password fields. Adding groups for the roles To add the groups that define the roles: In the Directory tab of the Directory Server Console , drill down to the Groups node, under the YourDomain node. Right-click the Groups node, and select New > Group from the context menu, to open the Create New Group dialog. Select the General tab in the left-hand pane of the Create New Group dialog. Fill in the fields of the General tab, as follows: Set the Group Name field to admin . Optionally, enter a description in the Description field. Select the Members tab in the left-hand pane of the Create New Group dialog. Click Add to open the Search users and groups dialog. In the Search field, select Users from the drop-down menu, and click the Search button. From the list of users that is now displayed, select John Doe . Click OK , to close the Search users and groups dialog. Click OK , to close the Create New Group dialog. Add a manager role by following Step 2 to Step 10 . In Step 4 , enter manager in the Group Name field. In Step 8 , select Jane Doe . Add a viewer role by following Step 2 to Step 10 . In Step 4 , enter viewer in the Group Name field. In Step 8 , select Camel Rider . Add an ssh role by following Step 2 to Step 10 . In Step 4 , enter ssh in the Group Name field. In Step 8 , select all of the users, John Doe , Jane Doe , and Camel Rider . 8.4.
Enable LDAP Authentication in the OSGi Container This section explains how to configure an LDAP realm in the OSGi container. The new realm overrides the default karaf realm, so that the container authenticates credentials based on user entries stored in the X.500 directory server. References More detailed documentation is available on LDAP authentication, as follows: LDAPLoginModule options are described in detail in Section 2.1.7, "JAAS LDAP Login Module" . Configurations for other directory servers: this tutorial covers only 389-DS . For details of how to configure other directory servers, such as Microsoft Active Directory, see the section called "Filter settings for different directory servers" . Procedure for standalone OSGi container To enable LDAP authentication in a standalone OSGi container: Ensure that the X.500 directory server is running. Start the Karaf container by entering the following command in a terminal window: Create a file called ldap-module.xml . Copy Example 8.1, "JAAS Realm for Standalone" into ldap-module.xml . Example 8.1. JAAS Realm for Standalone <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.0.0" xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"> <jaas:config name="karaf" rank="200"> <jaas:module className="org.apache.karaf.jaas.modules.ldap.LDAPLoginModule" flags="required"> initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connection.url=ldap://localhost:389 connection.username=cn=Directory Manager connection.password=DIRECTORY_MANAGER_PASSWORD connection.protocol= user.base.dn=ou=People,dc=localdomain user.filter=(&amp;(objectClass=inetOrgPerson)(uid=%u)) user.search.subtree=true role.base.dn=ou=Groups,dc=localdomain role.name.attribute=cn role.filter=(uniquemember=%fqdn) role.search.subtree=true authentication=simple </jaas:module> </jaas:config> </blueprint> You must customize the following settings in the ldap-module.xml file: connection.url Set this URL to the actual location of your directory server instance. Normally, this URL has the format, ldap:// Hostname : Port . For example, the default port for the 389 Directory Server is IP port 389 . connection.username Specifies the username that is used to authenticate the connection to the directory server. For 389 Directory Server, the default is usually cn=Directory Manager . connection.password Specifies the password part of the credentials for connecting to the directory server. authentication You can specify either of the following alternatives for the authentication protocol: simple implies that user credentials are supplied and you are obliged to set the connection.username and connection.password options in this case. none implies that authentication is not performed. You must not set the connection.username and connection.password options in this case. This login module creates a JAAS realm called karaf , which is the same name as the default JAAS realm used by Fuse. By redefining this realm with a rank attribute value greater than 0 , it overrides the standard karaf realm which has the rank 0 . For more details about how to configure Fuse to use LDAP, see Section 2.1.7, "JAAS LDAP Login Module" . Important When setting the JAAS properties above, do not enclose the property values in double quotes. To deploy the new LDAP module, copy the ldap-module.xml into the Karaf container's deploy/ directory (hot deploy). The LDAP module is automatically activated.
Note Subsequently, if you need to undeploy the LDAP module, you can do so by deleting the ldap-module.xml file from the deploy/ directory while the Karaf container is running . Test the LDAP authentication Test the new LDAP realm by connecting to the running container using the Karaf client utility, as follows: Open a new command prompt. Change directory to the Karaf InstallDir /bin directory. Enter the following command to log on to the running container instance using the identity jdoe : You should successfully log into the container's remote console. At the command console, type jaas: followed by the [Tab] key (to activate content completion): You should see that jdoe has access to all of the jaas commands (consistent with the admin role). Log off the remote console by entering the logout command. Enter the following command to log on to the running container instance using the identity janedoe : You should successfully log into the container's remote console. At the command console, type jaas: followed by the [Tab] key (to activate content completion): You should see that janedoe has access to almost all of the jaas commands (consistent with the manager role). Log off the remote console by entering the logout command. Enter the following command to log on to the running container instance using the identity crider : You should successfully log into the container's remote console. At the command console, type jaas: followed by the [Tab] key (to activate content completion): You should see that crider has access to only five of the jaas commands (consistent with the viewer role). Log off the remote console by entering the logout command. Troubleshooting If you run into any difficulties while testing the LDAP connection, increase the logging level to DEBUG to get a detailed trace of what is happening on the connection to the LDAP server. Perform the following steps: From the Karaf console, enter the following command to increase the logging level to DEBUG : Observe the Karaf log in real time: To escape from the log listing, type Ctrl-C.
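You can also verify the directory data and the filter settings from outside Karaf by querying the LDAP server directly. The following ldapsearch commands are a minimal sketch: they assume the OpenLDAP client tools are installed and that you kept the example suffix, dc=localdomain, used in this tutorial, so adjust the bind DN and base DNs to match your server:
ldapsearch -x -H ldap://localhost:389 -D "cn=Directory Manager" -W -b "ou=People,dc=localdomain" "(&(objectClass=inetOrgPerson)(uid=jdoe))" dn
ldapsearch -x -H ldap://localhost:389 -D "cn=Directory Manager" -W -b "ou=Groups,dc=localdomain" "(uniquemember=uid=jdoe,ou=People,dc=localdomain)" cn
The first search confirms that the user entry exists and matches the user.filter setting, and the second lists the groups (roles) that reference the user's DN, mirroring the role.filter setting in ldap-module.xml. If either search returns nothing, re-check the base DNs and the exact DN format of the user entry in your directory.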
[ "sudo dnf install 389-ds", "sudo setup-ds-admin.pl", "sudo dnf install 389-console", "389-console", "./bin/fuse", "<?xml version=\"2.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:jaas=\"http://karaf.apache.org/xmlns/jaas/v1.0.0\" xmlns:ext=\"http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0\"> <jaas:config name=\"karaf\" rank=\"200\"> <jaas:module className=\"org.apache.karaf.jaas.modules.ldap.LDAPLoginModule\" flags=\"required\"> initialContextFactory=com.sun.jndi.ldap.LdapCtxFactory connection.url=ldap://localhost:389 connection.username=cn=Directory Manager connection.password=DIRECTORY_MANAGER_PASSWORD connection.protocol= user.base.dn=ou=People,dc=localdomain user.filter=(&amp;(objectClass=inetOrgPerson)(uid=%u)) user.search.subtree=true role.base.dn=ou=Groups,dc=localdomain role.name.attribute=cn role.filter=(uniquemember=%fqdn) role.search.subtree=true authentication=simple </jaas:module> </jaas:config> </blueprint>", "./client -u jdoe -p secret", "jdoe@root()> jaas: Display all 31 possibilities? (31 lines)? jaas:cancel jaas:group-add jaas:whoami", "./client -u janedoe -p secret", "janedoe@root()> jaas: Display all 25 possibilities? (25 lines)? jaas:cancel jaas:group-add jaas:users", "./client -u crider -p secret", "crider@root()> jaas: jaas:manage jaas:realm-list jaas:realm-manage jaas:realms jaas:user-list jaas:users", "log:set DEBUG", "log:tail" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_security_guide/fesbldaptutorial
Getting Started with Camel K
Getting Started with Camel K Red Hat build of Apache Camel K 1.10.9 Develop and run your first Camel K application Red Hat build of Apache Camel K Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/getting_started_with_camel_k/index
Chapter 1. Activating Red Hat Ansible Automation Platform
Chapter 1. Activating Red Hat Ansible Automation Platform Red Hat Ansible Automation Platform uses available subscriptions or a subscription manifest to authorize the use of Ansible Automation Platform. To obtain a subscription, you can do either of the following: Use your Red Hat customer or Satellite credentials when you launch Ansible Automation Platform. Upload a subscriptions manifest file either using the Red Hat Ansible Automation Platform interface or manually in an Ansible playbook. 1.1. Activate with credentials When Ansible Automation Platform launches for the first time, the Ansible Automation Platform Subscription screen automatically displays. You can use your Red Hat credentials to retrieve and import your subscription directly into Ansible Automation Platform. You are opted in for Automation Analytics by default when you activate the platform on first time log in. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out, after activating Ansible Automation Platform, by doing the following: From the navigation panel, select Settings and select the Miscellaneous System settings option. Click Edit . Toggle the Gather data for Automation Analytics switch to the off position. Click Save . Procedures Enter your Red Hat username and password. Click Get Subscriptions . Note You can also use your Satellite username and password if your cluster nodes are registered to Satellite through Subscription Manager. Review the End User License Agreement and select I agree to the End User License Agreement . Click Submit . Once your subscription has been accepted, the license screen displays and navigates you to the Dashboard of the Ansible Automation Platform interface. You can return to the license screen by clicking the Settings icon ⚙ and selecting the License option from the Settings screen. 1.2. Activate with a manifest file If you have a subscriptions manifest, you can upload the manifest file either by using the Red Hat Ansible Automation Platform interface or manually in an Ansible Playbook. You are opted in for Automation Analytics by default when you activate the platform on first time log in. This helps Red Hat improve the product by delivering you a much better user experience. You can opt out, after activating Ansible Automation Platform, by doing the following: From the navigation panel, select Settings and select the Miscellaneous System settings option. Click Edit . Toggle the Gather data for Automation Analytics switch to the off position. Click Save . Prerequisites You must have a Red Hat Subscription Manifest file exported from the Red Hat Customer Portal. For more information, see Obtaining a manifest file . Uploading with the interface Complete steps to generate and download the manifest file Log in to Red Hat Ansible Automation Platform. If you are not immediately prompted for a manifest file, go to Settings and select the License option. Make sure the Username and Password fields are empty. Click Browse and select the manifest file. Click . Review the End User License Agreement and select I agree to the End User License Agreement . Click Submit . Once your subscription has been accepted, the license screen displays and navigates you to the Dashboard of the Ansible Automation Platform interface. You can return to the license screen by clicking the Settings icon ⚙ and selecting the License option from the Settings screen.
Note If the BROWSE button is disabled on the License page, clear the USERNAME and PASSWORD fields. Uploading manually If you are unable to apply or update the subscription information by using the Red Hat Ansible Automation Platform interface, you can upload the subscriptions manifest manually in an Ansible Playbook by using the license module in the ansible.controller collection.
- name: Set the license using a file
  license:
    manifest: "/tmp/my_manifest.zip"
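As a fuller illustration, the task can be wrapped in a small playbook. The following sketch is illustrative only: the manifest path and file names are assumptions, and the license module also needs connection details for the platform API (for example, the controller_host, controller_username, and controller_password module options or their environment variable equivalents), which are omitted here:
---
- name: Apply an Ansible Automation Platform subscription manifest
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Set the license using a file
      ansible.controller.license:
        manifest: "/tmp/my_manifest.zip"
You could then run it with a command such as ansible-playbook apply_manifest.yml, where the playbook file name is a placeholder.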
[ "- name: Set the license using a file license: manifest: \"/tmp/my_manifest.zip\"" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_operations_guide/assembly-aap-activate
Chapter 3. Deploy standalone Multicloud Object Gateway
Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.9 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Note We recommend using all default settings. Changing it may result in unexpected behavior. Alter only if you are aware of its result. Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Operators and verify if OpenShift Data Foundation is available. Important In case the console plugin option was not automatically enabled after you installed the OpenShift Data Foundation Operator, you need to enable it. For more information on how to enable the console plugin, see Enabling the Red Hat OpenShift Data Foundation console plugin . 3.2. 
Creating standalone Multicloud Object Gateway Use this section to create only the Multicloud Object Gateway component with OpenShift Data Foundation. Prerequisites Ensure that OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. Ensure that you have a storage class and is set as the default. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, expand Advanced . Select Multicloud Object Gateway for Deployment type . Click . Optional: In the Security page, select Connect to an external key management service . Key Management Service Provider is set to Vault by default. Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> '), Port number , and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage OpenShift Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verify the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node)
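If you also want to confirm the deployment from the command line, the following oc commands are a minimal sketch; they assume you are logged in with cluster-admin rights and that the default openshift-storage namespace was used:
oc get pods -n openshift-storage
oc get csv -n openshift-storage
oc get noobaa -n openshift-storage
The first command should show the pods listed in the table above in the Running state, the second shows the installed operator versions, and the third reports the Multicloud Object Gateway (NooBaa) resource, whose status should eventually report Ready. The exact resource names can vary between OpenShift Data Foundation releases.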
[ "oc annotate namespace openshift-storage openshift.io/node-selector=" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/deploy-standalone-multicloud-object-gateway
Chapter 24. OpenShiftControllerManager [operator.openshift.io/v1]
Chapter 24. OpenShiftControllerManager [operator.openshift.io/v1] Description OpenShiftControllerManager provides information to configure an operator to manage openshift-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 24.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object status object 24.1.1. .spec Description Type object Property Type Description logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". unsupportedConfigOverrides `` unsupportedConfigOverrides overrides the final configuration that was computed by the operator. Red Hat does not support the use of this field. Misuse of this field could lead to unexpected behavior or conflict with other configuration options. Seek guidance from the Red Hat support before using this field. Use of this property blocks cluster upgrades, it must be removed before upgrading your cluster. 24.1.2. .status Description Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 24.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 24.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. 
Type object Property Type Description lastTransitionTime string message string reason string status string type string 24.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 24.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 24.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/openshiftcontrollermanagers DELETE : delete collection of OpenShiftControllerManager GET : list objects of kind OpenShiftControllerManager POST : create an OpenShiftControllerManager /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name} DELETE : delete an OpenShiftControllerManager GET : read the specified OpenShiftControllerManager PATCH : partially update the specified OpenShiftControllerManager PUT : replace the specified OpenShiftControllerManager /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name}/status GET : read status of the specified OpenShiftControllerManager PATCH : partially update status of the specified OpenShiftControllerManager PUT : replace status of the specified OpenShiftControllerManager 24.2.1. /apis/operator.openshift.io/v1/openshiftcontrollermanagers HTTP method DELETE Description delete collection of OpenShiftControllerManager Table 24.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OpenShiftControllerManager Table 24.2. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManagerList schema 401 - Unauthorized Empty HTTP method POST Description create an OpenShiftControllerManager Table 24.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.4. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 24.5. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 202 - Accepted OpenShiftControllerManager schema 401 - Unauthorized Empty 24.2.2. /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name} Table 24.6. Global path parameters Parameter Type Description name string name of the OpenShiftControllerManager HTTP method DELETE Description delete an OpenShiftControllerManager Table 24.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 24.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OpenShiftControllerManager Table 24.9. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OpenShiftControllerManager Table 24.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.11. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OpenShiftControllerManager Table 24.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.13. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 24.14. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 401 - Unauthorized Empty 24.2.3. /apis/operator.openshift.io/v1/openshiftcontrollermanagers/{name}/status Table 24.15. Global path parameters Parameter Type Description name string name of the OpenShiftControllerManager HTTP method GET Description read status of the specified OpenShiftControllerManager Table 24.16. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified OpenShiftControllerManager Table 24.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.18. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified OpenShiftControllerManager Table 24.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.20. Body parameters Parameter Type Description body OpenShiftControllerManager schema Table 24.21. HTTP responses HTTP code Reponse body 200 - OK OpenShiftControllerManager schema 201 - Created OpenShiftControllerManager schema 401 - Unauthorized Empty
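As a practical illustration, you can read and update this resource with the oc client. The following commands are a sketch only: they assume the usual singleton instance named cluster (verify the actual name with the list endpoint first) and use the documented spec.logLevel value Debug:
oc get openshiftcontrollermanager cluster -o yaml
oc patch openshiftcontrollermanager cluster --type merge -p '{"spec":{"logLevel":"Debug"}}'
The first command corresponds to the GET endpoint for a named OpenShiftControllerManager, and the second issues a PATCH that updates only the logLevel field in the spec.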
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/operator_apis/openshiftcontrollermanager-operator-openshift-io-v1
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/developing_and_managing_integrations_using_camel_k/pr01
Chapter 1. Introduction to hardening Ansible Automation Platform
Chapter 1. Introduction to hardening Ansible Automation Platform This document provides guidance for improving the security posture (referred to as "hardening" throughout this guide) of your Red Hat Ansible Automation Platform deployment on Red Hat Enterprise Linux. The following are not currently within the scope of this guide: Other deployment targets for Ansible Automation Platform, such as OpenShift. Ansible Automation Platform managed services available through cloud service provider marketplaces. Note Hardening and compliance for Ansible Automation Platform 2.4 includes additional considerations with regards to the specific Defense Security Information Agency (DISA) Security Technical Implementation Guides (STIGs) for automation controller, but this guidance does not apply to Ansible Automation Platform 2.5. This guide takes a practical approach to hardening the Ansible Automation Platform security posture, starting with the planning and architecture phase of deployment and then covering specific guidance for installation, initial configuration, and day 2 operations. As this guide specifically covers Ansible Automation Platform running on Red Hat Enterprise Linux, hardening guidance for Red Hat Enterprise Linux will be covered where it affects the automation platform components. Additional considerations with regards to the DISA STIGs for Red Hat Enterprise Linux are provided for those organizations that integrate the DISA STIGs as a part of their overall security strategy. Note These recommendations do not guarantee security or compliance of your deployment of Ansible Automation Platform. You must assess security from the unique requirements of your organization to address specific threats and risks and balance these against implementation factors. 1.1. Audience This guide is written for personnel responsible for installing, configuring, and maintaining Ansible Automation Platform 2.5 when deployed on Red Hat Enterprise Linux. Additional information is provided for security operations, compliance assessment, and other functions associated with related security processes. 1.2. Overview of Ansible Automation Platform Ansible is an open source, command-line IT automation software application written in Python. You can use Ansible Automation Platform to configure systems, deploy software, and orchestrate advanced workflows to support application deployment, system updates, and more. Ansible's main strengths are simplicity and ease of use. It also has a strong focus on security and reliability, featuring minimal moving parts. It uses secure, well-known communication protocols like SSH, HTTPS, and WinRM for transport and uses a human-readable language that is designed for getting started quickly without extensive training. Ansible Automation Platform enhances the Ansible language with enterprise-class features, such as Role-Based Access Controls (RBAC), centralized logging and auditing, credential management, job scheduling, and complex automation workflows. With Ansible Automation Platform you get certified content from our robust partner ecosystem; added security; reporting, and analytics, as well as life cycle technical support to scale automation across your organization. Ansible Automation Platform simplifies the development and operation of automation workloads for managing enterprise application infrastructure life cycles. It works across multiple IT domains including operations, networking, security, and development, as well as across diverse hybrid environments. 1.2.1. 
Red Hat Ansible Automation Platform deployment methods There are three different installation methods for Ansible Automation Platform: RPM-based on Red Hat Enterprise Linux Container-based on Red Hat Enterprise Linux Operator-based on Red Hat OpenShift Container Platform This document offers guidance on hardening Ansible Automation Platform when installed using either of the first two installation methods (RPM-based or container-based). This document further recommends using the container-based installation method for new deployments, as the RPM-based installer will be deprecated in a future release. For further information, see Deprecated features . Operator-based deployments are not described in this document. 1.2.2. Ansible Automation Platform components Ansible Automation Platform is a modular platform composed of separate components that can be connected together, including automation controller, platform gateway, automation hub, and Event-Driven Ansible controller. Additional resources For more information about the components provided within Ansible Automation Platform, see Red Hat Ansible Automation Platform components in Planning your installation .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/hardening_and_compliance/assembly-intro-to-aap-hardening
Chapter 1. Overview of jlink
Chapter 1. Overview of jlink Jlink is a Java command line tool that is used to generate a custom Java runtime environment (JRE). You can use your customized JRE to run Java applications. Using jlink, you can create a custom runtime environment that includes only the modules and class files that your application requires.
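As a sketch of a typical workflow, the commands below first ask jdeps which platform modules an application needs and then build and use a trimmed runtime; the application JAR name myapp.jar, the module list, and the output path /opt/custom-jre are placeholders for your own values:
jdeps --print-module-deps myapp.jar
jlink --add-modules java.base,java.logging --no-header-files --no-man-pages --strip-debug --output /opt/custom-jre
/opt/custom-jre/bin/java -jar myapp.jar
In this sketch, jlink resolves the named modules from the JDK it runs on, strips header files, man pages, and debug symbols to keep the image small, and writes the custom runtime to the output directory, whose bin/java launcher then runs the application.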
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_jlink_to_customize_java_runtime_environment/jlink-overview
2.4. Uninstalling an IdM Server
2.4. Uninstalling an IdM Server Note At domain level 0 , the procedure is different. See Section D.3.6, "Removing a Replica" . Prerequisites Before uninstalling a server that serves as a certificate authority (CA), key recovery authority (KRA), or DNS Security Extensions (DNSSEC) server, make sure these services are running on another server in the domain. Warning Removing the last replica that serves as a CA, KRA, or DNSSEC server can seriously disrupt the Identity Management functionality. Procedure To uninstall server.example.com : On another server, use the ipa server-del command to delete server.example.com from the topology: On server.example.com , use the ipa-server-install --uninstall command: Make sure all name server (NS) DNS records pointing to server.example.com are deleted from your DNS zones. This applies regardless of whether you use integrated DNS managed by IdM or external DNS.
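As a quick check that no stale NS records remain after the uninstall, you can query the zone directly; the following commands are illustrative and assume the example.com zone used above:
dig +short NS example.com
ipa dnsrecord-find example.com
The dig output should no longer list server.example.com, and for an IdM-managed zone the ipa dnsrecord-find listing lets you confirm that no NS records still point at the removed server.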
[ "ipa server-del server.example.com", "ipa-server-install --uninstall" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/uninstalling_ipa_servers
Chapter 31. JMS
Chapter 31. JMS Both producer and consumer are supported This component allows messages to be sent to (or consumed from) a JMS Queue or Topic. It uses Spring's JMS support for declarative transactions, including Spring's JmsTemplate for sending and a MessageListenerContainer for consuming. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jms</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> Note Using ActiveMQ If you are using Apache ActiveMQ , you should prefer the ActiveMQ component as it has been optimized for ActiveMQ. All of the options and samples on this page are also valid for the ActiveMQ component. Note Transacted and caching See section Transactions and Cache Levels below if you are using transactions with JMS as it can impact performance. Note Request/Reply over JMS Make sure to read the section Request-reply over JMS further below on this page for important notes about request/reply, as Camel offers a number of options to configure for performance, and clustered environments. 31.1. URI format Where destinationName is a JMS queue or topic name. By default, the destinationName is interpreted as a queue name. For example, to connect to the queue, FOO.BAR use: You can include the optional queue: prefix, if you prefer: To connect to a topic, you must include the topic: prefix. For example, to connect to the topic, Stocks.Prices , use: You append query options to the URI by using the following format, ?option=value&option=value&... 31.1.1. Using ActiveMQ The JMS component reuses Spring 2's JmsTemplate for sending messages. This is not ideal for use in a non-J2EE container and typically requires some caching in the JMS provider to avoid poor performance . If you intend to use Apache ActiveMQ as your message broker, the recommendation is that you do one of the following: Use the ActiveMQ component, which is already optimized to use ActiveMQ efficiently Use the PoolingConnectionFactory in ActiveMQ. 31.1.2. Transactions and Cache Levels If you are consuming messages and using transactions ( transacted=true ) then the default settings for cache level can impact performance. If you are using XA transactions then you cannot cache as it can cause the XA transaction to not work properly. If you are not using XA, then you should consider caching as it speeds up performance, such as setting cacheLevelName=CACHE_CONSUMER . The default setting for cacheLevelName is CACHE_AUTO . This default auto detects the mode and sets the cache level accordingly to: CACHE_CONSUMER if transacted=false CACHE_NONE if transacted=true So you can say the default setting is conservative. Consider using cacheLevelName=CACHE_CONSUMER if you are using non-XA transactions. 31.1.3. Durable Subscriptions If you wish to use durable topic subscriptions, you need to specify both clientId and durableSubscriptionName . The value of the clientId must be unique and can only be used by a single JMS connection instance in your entire network. You may prefer to use Virtual Topics instead to avoid this limitation. More background on durable messaging here . 31.1.4. Message Header Mapping When using message headers, the JMS specification states that header names must be valid Java identifiers. So try to name your headers to be valid Java identifiers. 
One benefit of doing this is that you can then use your headers inside a JMS Selector (whose SQL92 syntax mandates Java identifier syntax for headers). A simple strategy for mapping header names is used by default. The strategy is to replace any dots and hyphens in the header name as shown below and to reverse the replacement when the header name is restored from a JMS message sent over the wire. What does this mean? No more losing method names to invoke on a bean component, no more losing the filename header for the File Component, and so on. The current header name strategy for accepting header names in Camel is as follows: Dots are replaced by `DOT` and the replacement is reversed when Camel consume the message Hyphen is replaced by `HYPHEN` and the replacement is reversed when Camel consumes the message You can configure many different properties on the JMS endpoint, which map to properties on the JMSConfiguration object. Note Mapping to Spring JMS Many of these properties map to properties on Spring JMS, which Camel uses for sending and receiving messages. So you can get more information about these properties by consulting the relevant Spring documentation. 31.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 31.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 31.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 31.3. Component Options The JMS component supports 98 options, which are listed below. Name Description Default Type clientId (common) Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. 
If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. 
The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. 
false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. 
Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. 
See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). 
Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowAutoWiredConnectionFactory (advanced) Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true boolean allowAutoWiredDestinationResolver (advanced) Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean configuration (advanced) To use a shared JMS configuration. JmsConfiguration destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. 
true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean queueBrowseStrategy (advanced) To use a custom QueueBrowseStrategy when browsing queues. QueueBrowseStrategy receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. 
false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for provisional correlation id to be updated. 100 long headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: that within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int 31.4. Endpoint Options The JMS endpoint is configured using URI syntax: with the following path and query parameters: 31.4.1. Path Parameters (2 parameters) Name Description Default Type destinationType (common) The kind of destination to use. Enum values: queue topic temp-queue temp-topic queue String destinationName (common) Required Name of the queue or topic to use as destination. String 31.4.2. Query Parameters (95 parameters) Name Description Default Type clientId (common) Sets the JMS client ID to use. 
Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String connectionFactory (common) The connection factory to be use. A connection factory must be configured either on the component or endpoint. ConnectionFactory disableReplyTo (common) Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false boolean durableSubscriptionName (common) The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String jmsMessageType (common) Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. Enum values: Bytes Map Object Stream Text JmsMessageType replyTo (common) Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String testConnectionOnStartup (common) Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false boolean acknowledgementModeName (consumer) The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. Enum values: SESSION_TRANSACTED CLIENT_ACKNOWLEDGE AUTO_ACKNOWLEDGE DUPS_OK_ACKNOWLEDGE AUTO_ACKNOWLEDGE String artemisConsumerPriority (consumer) Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). int asyncConsumer (consumer) Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. 
Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false boolean autoStartup (consumer) Specifies whether the consumer container should auto-startup. true boolean cacheLevel (consumer) Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. int cacheLevelName (consumer) Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION CACHE_AUTO String concurrentConsumers (consumer) Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 int maxConcurrentConsumers (consumer) Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. int replyToDeliveryPersistent (consumer) Specifies whether to use persistent delivery by default for replies. true boolean selector (consumer) Sets the JMS selector to use. String subscriptionDurable (consumer) Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false boolean subscriptionName (consumer) Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String subscriptionShared (consumer) Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. 
false boolean acceptMessagesWhileStopping (consumer (advanced)) Specifies whether the consumer accept messages while it is stopping. You may consider enabling this option, if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved at a dead letter queue on the JMS broker. To avoid this its recommended to enable this option. false boolean allowReplyManagerQuickStop (consumer (advanced)) Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false boolean consumerType (consumer (advanced)) The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. Enum values: Simple Default Custom Default ConsumerType defaultTaskExecutorType (consumer (advanced)) Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. Enum values: ThreadPool SimpleAsync DefaultTaskExecutorType eagerLoadingOfProperties (consumer (advanced)) Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false boolean eagerPoisonBody (consumer (advanced)) If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison are already stored as exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to USD\{exception.message} String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern exposeListenerSession (consumer (advanced)) Specifies whether the listener session should be exposed when consuming messages. false boolean replyToSameDestinationAllowed (consumer (advanced)) Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false boolean taskExecutor (consumer (advanced)) Allows you to specify a custom task executor for consuming messages. TaskExecutor deliveryDelay (producer) Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 long deliveryMode (producer) Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Enum values: 1 2 Integer deliveryPersistent (producer) Specifies whether persistent delivery is used by default. true boolean explicitQosEnabled (producer) Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean formatDateHeadersToIso8601 (producer) Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean preserveMessageQos (producer) Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false boolean priority (producer) Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. Enum values: 1 2 3 4 5 6 7 8 9 4 int replyToConcurrentConsumers (producer) Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 int replyToMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers when using request/reply over JMS. 
See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. int replyToOnTimeoutMaxConcurrentConsumers (producer) Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 int replyToOverride (producer) Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String replyToType (producer) Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. Enum values: Temporary Shared Exclusive ReplyToType requestTimeout (producer) The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. 20000 long timeToLive (producer) When sending messages, specifies the time-to-live of the message (in milliseconds). -1 long allowAdditionalHeaders (producer (advanced)) This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String allowNullBody (producer (advanced)) Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true boolean alwaysCopyMessage (producer (advanced)) If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false boolean correlationProperty (producer (advanced)) When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String disableTimeToLive (producer (advanced)) Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. 
See below in section About time to live for more details. false boolean forceSendOriginalMessage (producer (advanced)) When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false boolean includeSentJMSMessageID (producer (advanced)) Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false boolean replyToCacheLevelName (producer (advanced)) Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. Enum values: CACHE_AUTO CACHE_CONNECTION CACHE_CONSUMER CACHE_NONE CACHE_SESSION String replyToDestinationSelectorName (producer (advanced)) Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String streamMessageTypeEnabled (producer (advanced)) Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false boolean allowSerializedHeaders (advanced) Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false boolean artemisStreamingEnabled (advanced) Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. false boolean asyncStartListener (advanced) Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false boolean asyncStopListener (advanced) Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. 
false boolean destinationResolver (advanced) A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). DestinationResolver errorHandler (advanced) Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. ErrorHandler exceptionListener (advanced) Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. ExceptionListener headerFilterStrategy (advanced) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy idleConsumerLimit (advanced) Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 int idleTaskExecutionLimit (advanced) Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 int includeAllJMSXProperties (advanced) Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false boolean jmsKeyFormatStrategy (advanced) Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. Enum values: default passthrough JmsKeyFormatStrategy mapJmsMessage (advanced) Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true boolean maxMessagesPerTask (advanced) The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 int messageConverter (advanced) To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. MessageConverter messageCreatedStrategy (advanced) To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. MessageCreatedStrategy messageIdEnabled (advanced) When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. 
If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true boolean messageListenerContainerFactory (advanced) Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. MessageListenerContainerFactory messageTimestampEnabled (advanced) Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true boolean pubSubNoLocal (advanced) Specifies whether to inhibit the delivery of messages published by its own connection. false boolean receiveTimeout (advanced) The timeout for receiving messages (in milliseconds). 1000 long recoveryInterval (advanced) Specifies the interval between recovery attempts, i.e. when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. 5000 long requestTimeoutCheckerInterval (advanced) Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. 1000 long synchronous (advanced) Sets whether synchronous processing should be strictly used. false boolean transferException (advanced) If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be send back in response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data is using Java Object serialization and requires the received to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumer!. false boolean transferExchange (advanced) You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payloads is an Exchange and not a regular payload. Use this with caution as the data is using Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers having to use compatible Camel versions!. false boolean useMessageIDAsCorrelationID (advanced) Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages. 
false boolean waitForProvisionCorrelationToBeUpdatedCounter (advanced) Number of times to wait for the provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 int waitForProvisionCorrelationToBeUpdatedThreadSleepingTime (advanced) Interval in millis to sleep each time while waiting for the provisional correlation id to be updated. 100 long errorHandlerLoggingLevel (logging) Allows to configure the default errorHandler logging level for logging uncaught exceptions. Enum values: TRACE DEBUG INFO WARN ERROR OFF WARN LoggingLevel errorHandlerLogStackTrace (logging) Allows to control whether stacktraces should be logged or not, by the default errorHandler. true boolean password (security) Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String username (security) Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String transacted (transaction) Specifies whether to use transacted mode. false boolean transactedInOut (transaction) Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false boolean lazyCreateTransactionManager (transaction (advanced)) If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true boolean transactionManager (transaction (advanced)) The Spring transaction manager to use. PlatformTransactionManager transactionName (transaction (advanced)) The name of the transaction to use. String transactionTimeout (transaction (advanced)) The timeout value of the transaction (in seconds), if using transacted mode. -1 int
31.5. Samples
JMS is used in many examples for other components as well, but we provide a few samples below to get started.
31.5.1. Receiving from JMS
In the following sample we configure a route that receives JMS messages and routes the message to a POJO:
from("jms:queue:foo").
  to("bean:myBusinessLogic");
You can of course use any of the EIP patterns so the route can be content based. For example, here's how to filter an order topic for the big spenders:
from("jms:topic:OrdersTopic").
  filter().method("myBean", "isGoldCustomer").
    to("jms:queue:BigSpendersQueue");
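For context, here is a minimal, self-contained sketch of how the receiving snippet above might be wired into a standalone application. The broker URL, the foo queue, and the MyBusinessLogic bean are illustrative assumptions; adjust them to your environment.
import javax.jms.ConnectionFactory;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class JmsReceiveExample {

    // trivial stand-in for the "myBusinessLogic" bean used in the route
    public static class MyBusinessLogic {
        public void handle(String body) {
            System.out.println("Processing order: " + body);
        }
    }

    public static void main(String[] args) throws Exception {
        // assumed broker location; any JMS ConnectionFactory works here
        ConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        CamelContext context = new DefaultCamelContext();
        // register the JMS component under the "jms" scheme used in the endpoint URIs
        context.addComponent("jms", JmsComponent.jmsComponentAutoAcknowledge(connectionFactory));
        // bind the bean referenced by the route
        context.getRegistry().bind("myBusinessLogic", new MyBusinessLogic());

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("jms:queue:foo").to("bean:myBusinessLogic");
            }
        });

        context.start();
        Thread.sleep(10_000); // keep the application running long enough to receive a few messages
        context.stop();
    }
}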
31.5.2. Sending to JMS
In the sample below we poll a file folder and send the file content to a JMS topic. As we want the content of the file as a TextMessage instead of a BytesMessage, we need to convert the body to a String:
from("file://orders").
  convertBodyTo(String.class).
  to("jms:topic:OrdersTopic");
31.5.3. Using Annotations
Camel also has annotations so you can use POJO Consuming and POJO Producing; a sketch appears at the end of these samples.
31.5.4. Spring DSL sample
The preceding examples use the Java DSL. Camel also supports Spring XML DSL. Here is the big spender sample using Spring DSL:
<route>
  <from uri="jms:topic:OrdersTopic"/>
  <filter>
    <method ref="myBean" method="isGoldCustomer"/>
    <to uri="jms:queue:BigSpendersQueue"/>
  </filter>
</route>
31.5.5. Other samples
JMS appears in many of the examples for other components and EIP patterns, as well as in this Camel documentation. So feel free to browse the documentation.
31.5.6. Using JMS as a Dead Letter Queue storing Exchange
Normally, when using JMS as the transport, it only transfers the body and headers as the payload. If you want to use JMS with a Dead Letter Channel, using a JMS queue as the Dead Letter Queue, then normally the caused Exception is not stored in the JMS message. You can, however, use the transferExchange option on the JMS dead letter queue to instruct Camel to store the entire Exchange in the queue as a javax.jms.ObjectMessage that holds an org.apache.camel.support.DefaultExchangeHolder. This allows you to consume from the Dead Letter Queue and retrieve the caused exception from the Exchange property with the key Exchange.EXCEPTION_CAUGHT. The demo below illustrates this:
// setup error handler to use JMS as queue and store the entire Exchange
errorHandler(deadLetterChannel("jms:queue:dead?transferExchange=true"));
Then you can consume from the JMS queue and analyze the problem:
from("jms:queue:dead").to("bean:myErrorAnalyzer");

// and in our bean
String body = exchange.getIn().getBody(String.class);
Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
// the cause message is
String problem = cause.getMessage();
31.5.7. Using JMS as a Dead Letter Channel storing error only
You can use JMS to store only the caused error message, or to store a custom body which you can initialize yourself. The following example uses the Message Translator EIP to do a transformation on the failed exchange before it is moved to the JMS dead letter queue:
// we send it to a seda dead queue first
errorHandler(deadLetterChannel("seda:dead"));

// and on the seda dead queue we can do the custom transformation before it is sent to the JMS queue
from("seda:dead").transform(exceptionMessage()).to("jms:queue:dead");
Here we only store the original caused error message in the transform. You can, however, use any Expression to send whatever you like. For example, you can invoke a method on a Bean or use a custom processor.
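As promised in the Using Annotations sample above, here is a minimal sketch of POJO consuming and producing with the @Consume and @Produce annotations. The class name is illustrative, and the class must be registered in the Camel registry (or Spring context) so that Camel's bean post-processing applies the annotations; the endpoint URIs reuse the queue and topic names from the earlier samples.
import org.apache.camel.Consume;
import org.apache.camel.Produce;
import org.apache.camel.ProducerTemplate;

public class OrderListener {

    // producer injected by Camel; sends to the big spenders queue
    @Produce("jms:queue:BigSpendersQueue")
    private ProducerTemplate bigSpenders;

    // consumes from the orders topic without an explicit route
    @Consume("jms:topic:OrdersTopic")
    public void onOrder(String body) {
        bigSpenders.sendBody(body);
    }
}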
31.6. Message Mapping between JMS and Camel
Camel automatically maps messages between javax.jms.Message and org.apache.camel.Message. When sending a JMS message, Camel converts the message body to the following JMS message types:
Body Type JMS Message Comment
String javax.jms.TextMessage
org.w3c.dom.Node javax.jms.TextMessage The DOM will be converted to String.
Map javax.jms.MapMessage
java.io.Serializable javax.jms.ObjectMessage
byte[] javax.jms.BytesMessage
java.io.File javax.jms.BytesMessage
java.io.Reader javax.jms.BytesMessage
java.io.InputStream javax.jms.BytesMessage
java.nio.ByteBuffer javax.jms.BytesMessage
When receiving a JMS message, Camel converts the JMS message to the following body type:
JMS Message Body Type
javax.jms.TextMessage String
javax.jms.BytesMessage byte[]
javax.jms.MapMessage Map<String, Object>
javax.jms.ObjectMessage Object
31.6.1. Disabling auto-mapping of JMS messages
You can use the mapJmsMessage option to disable the auto-mapping above. If disabled, Camel will not try to map the received JMS message, but instead uses it directly as the payload. This allows you to avoid the overhead of mapping and lets Camel just pass through the JMS message. For instance, it even allows you to route javax.jms.ObjectMessage JMS messages with classes you do not have on the classpath.
31.6.2. Using a custom MessageConverter
You can use the messageConverter option to do the mapping yourself in a Spring org.springframework.jms.support.converter.MessageConverter class. For example, in the route below we use a custom message converter when sending a message to the JMS order queue:
from("file://inbox/order").to("jms:queue:order?messageConverter=#myMessageConverter");
You can also use a custom message converter when consuming from a JMS destination.
31.6.3. Controlling the mapping strategy selected
You can use the jmsMessageType option on the endpoint URL to force a specific message type for all messages. In the route below, we poll files from a folder and send them as javax.jms.TextMessage as we have forced the JMS producer endpoint to use text messages:
from("file://inbox/order").to("jms:queue:order?jmsMessageType=Text");
You can also specify the message type to use for each message by setting the header with the key CamelJmsMessageType. For example:
from("file://inbox/order").setHeader("CamelJmsMessageType", JmsMessageType.Text).to("jms:queue:order");
The possible values are defined in the enum class, org.apache.camel.jms.JmsMessageType.
31.7. Message format when sending
The exchange that is sent over the JMS wire must conform to the JMS Message spec. For the exchange.in.headers, the following rules apply to the header keys:
Keys starting with JMS or JMSX are reserved.
exchange.in.headers keys must be literals and all be valid Java identifiers (do not use dots in the key name). Camel replaces dots and hyphens when a message is sent and reverses the replacement when a message is consumed:
. is replaced by _DOT_ and the replacement is reversed when Camel consumes the message.
- is replaced by _HYPHEN_ and the replacement is reversed when Camel consumes the message.
See also the option jmsKeyFormatStrategy, which allows you to use your own custom strategy for formatting keys (a sketch of such a strategy follows at the end of this section).
For the exchange.in.headers, the following rules apply to the header values:
The values must be primitives or their object counterparts (such as Integer, Long, Character). The types String, CharSequence, Date, BigDecimal and BigInteger are all converted to their toString() representation. All other types are dropped.
Camel will log with category org.apache.camel.component.jms.JmsBinding at DEBUG level if it drops a given header value.
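As referenced above, here is a sketch of a custom key formatting strategy. The class name and the underscore convention are illustrative assumptions, not part of the Camel distribution.
import org.apache.camel.component.jms.JmsKeyFormatStrategy;

public class UnderscoreJmsKeyFormatStrategy implements JmsKeyFormatStrategy {

    // called when Camel sends a message: make the header key JMS-safe
    @Override
    public String encodeKey(String key) {
        return key.replace('.', '_');
    }

    // called when Camel consumes a message: restore the original key
    @Override
    public String decodeKey(String key) {
        return key.replace('_', '.');
    }
}
With the bean bound in the registry under a name such as myKeyFormat, an endpoint can then be configured as jms:queue:order?jmsKeyFormatStrategy=#myKeyFormat.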
31.8. Message format when receiving
Camel adds the following properties to the Exchange when it receives a message:
Property Type Description
org.apache.camel.jms.replyDestination javax.jms.Destination The reply destination.
Camel adds the following JMS properties to the In message headers when it receives a JMS message:
Header Type Description
JMSCorrelationID String The JMS correlation ID.
JMSDeliveryMode int The JMS delivery mode.
JMSDestination javax.jms.Destination The JMS destination.
JMSExpiration long The JMS expiration.
JMSMessageID String The JMS unique message ID.
JMSPriority int The JMS priority (with 0 as the lowest priority and 9 as the highest).
JMSRedelivered boolean Whether the JMS message was redelivered.
JMSReplyTo javax.jms.Destination The JMS reply-to destination.
JMSTimestamp long The JMS timestamp.
JMSType String The JMS type.
JMSXGroupID String The JMS group ID.
As all of the above information is standard JMS, you can check the JMS documentation for further details.
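A short sketch of reading a few of these mapped headers in a route; the queue name and the logging endpoint are illustrative.
import org.apache.camel.builder.RouteBuilder;

public class JmsHeaderInspectionRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:foo")
            .process(exchange -> {
                // the standard JMS headers listed above are available on the incoming message
                String messageId = exchange.getIn().getHeader("JMSMessageID", String.class);
                Boolean redelivered = exchange.getIn().getHeader("JMSRedelivered", Boolean.class);
                // keep the values around, for example for auditing further down the route
                exchange.getIn().setHeader("auditInfo", messageId + " redelivered=" + redelivered);
            })
            .to("log:jmsHeaders?showHeaders=true");
    }
}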
31.9. About using Camel to send and receive messages and JMSReplyTo
The JMS component is complex and you have to pay close attention to how it works in some cases. So this is a short summary of some of the areas/pitfalls to look for. When Camel sends a message using its JMSProducer, it checks the following conditions:
The message exchange pattern,
Whether a JMSReplyTo was set in the endpoint or in the message headers,
Whether any of the following options have been set on the JMS endpoint: disableReplyTo, preserveMessageQos, explicitQosEnabled.
All this can be a tad complex to understand and configure to support your use case.
31.9.1. JmsProducer
The JmsProducer behaves as follows, depending on configuration:
Exchange Pattern Other options Description
InOut - Camel will expect a reply, set a temporary JMSReplyTo, and after sending the message, it will start to listen for the reply message on the temporary queue.
InOut JMSReplyTo is set Camel will expect a reply and, after sending the message, it will start to listen for the reply message on the specified JMSReplyTo queue.
InOnly - Camel will send the message and not expect a reply.
InOnly JMSReplyTo is set By default, Camel discards the JMSReplyTo destination and clears the JMSReplyTo header before sending the message. Camel then sends the message and does not expect a reply. Camel logs this at WARN level (changed to DEBUG level from Camel 2.6 onwards). You can use preserveMessageQos=true to instruct Camel to keep the JMSReplyTo. In all situations the JmsProducer does not expect any reply and thus continues after sending the message.
31.9.2. JmsConsumer
The JmsConsumer behaves as follows, depending on configuration:
Exchange Pattern Other options Description
InOut - Camel will send the reply back to the JMSReplyTo queue.
InOnly - Camel will not send a reply back, as the pattern is InOnly.
- disableReplyTo=true This option suppresses replies.
So pay attention to the message exchange pattern set on your exchanges. If you send a message to a JMS destination in the middle of your route you can specify the exchange pattern to use, see more at Request Reply. This is useful if you want to send an InOnly message to a JMS topic:
from("activemq:queue:in")
  .to("bean:validateOrder")
  .to(ExchangePattern.InOnly, "activemq:topic:order")
  .to("bean:handleOrder");
31.10. Reuse endpoint and send to different destinations computed at runtime
If you need to send messages to a lot of different JMS destinations, it makes sense to reuse a JMS endpoint and specify the real destination in a message header. This allows Camel to reuse the same endpoint, but send to different destinations. This greatly reduces the number of endpoints created and economizes on memory and thread resources. You can specify the destination in the following headers:
Header Type Description
CamelJmsDestination javax.jms.Destination A destination object.
CamelJmsDestinationName String The destination name.
For example, the following route shows how you can compute a destination at run time and use it to override the destination appearing in the JMS URL:
from("file://inbox")
  .to("bean:computeDestination")
  .to("activemq:queue:dummy");
The queue name, dummy, is just a placeholder. It must be provided as part of the JMS endpoint URL, but it will be ignored in this example. In the computeDestination bean, specify the real destination by setting the CamelJmsDestinationName header as follows:
public void setJmsHeader(Exchange exchange) {
  String id = ....
  exchange.getIn().setHeader("CamelJmsDestinationName", "order:" + id);
}
Then Camel will read this header and use it as the destination instead of the one configured on the endpoint. So, in this example Camel sends the message to activemq:queue:order:2, assuming the id value was 2. If both the CamelJmsDestination and the CamelJmsDestinationName headers are set, CamelJmsDestination takes priority. Keep in mind that the JMS producer removes both CamelJmsDestination and CamelJmsDestinationName headers from the exchange and does not propagate them to the created JMS message, to avoid accidental loops in the routes (in scenarios where the message is forwarded to another JMS endpoint).
31.11. Configuring different JMS providers
You can configure your JMS provider in Spring XML by defining one JmsComponent bean per provider. Basically, you can configure as many JMS component instances as you wish and give them a unique name using the id attribute. A bean with the id activemq, for example, configures an activemq component. You could do the same to configure MQSeries, TibCo, BEA, Sonic and so on. Once you have a named JMS component, you can then refer to endpoints within that component using URIs. For example, for the component name activemq, you can then refer to destinations using the URI format activemq:[queue:|topic:]destinationName. You can use the same approach for all other JMS providers. This works by the SpringCamelContext lazily fetching components from the spring context for the scheme name you use for Endpoint URIs and having the Component resolve the endpoint URIs.
31.11.1. Using JNDI to find the ConnectionFactory
If you are using a J2EE container, you might need to look up JNDI to find the JMS ConnectionFactory rather than use the usual <bean> mechanism in Spring. You can do this using Spring's factory bean or the new Spring XML namespace. For example:
<bean id="weblogic" class="org.apache.camel.component.jms.JmsComponent">
  <property name="connectionFactory" ref="myConnectionFactory"/>
</bean>
<jee:jndi-lookup id="myConnectionFactory" jndi-name="jms/connectionFactory"/>
See The jee schema in the Spring reference documentation for more details about JNDI lookup.
31.12. Concurrent Consuming
A common requirement with JMS is to consume messages concurrently in multiple threads in order to make an application more responsive. You can set the concurrentConsumers option to specify the number of threads servicing the JMS endpoint, as follows:
from("jms:SomeQueue?concurrentConsumers=20").
  bean(MyClass.class);
You can configure this option in one of the following ways (a sketch of all three follows):
On the JmsComponent,
On the endpoint URI or,
By invoking setConcurrentConsumers() directly on the JmsEndpoint.
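A minimal sketch of the three configuration points listed above; the queue names are illustrative, and the nested MyClass is a trivial stand-in for the bean used in the example.
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.component.jms.JmsEndpoint;

public class ConcurrentConsumersRoute extends RouteBuilder {

    // trivial stand-in for the MyClass bean referenced above
    public static class MyClass {
        public void handle(String body) {
            // process the message
        }
    }

    @Override
    public void configure() throws Exception {
        // 1. on the JmsComponent (applies to all endpoints created from it)
        JmsComponent jms = getContext().getComponent("jms", JmsComponent.class);
        jms.getConfiguration().setConcurrentConsumers(20);

        // 2. on the endpoint URI
        from("jms:SomeQueue?concurrentConsumers=20").bean(MyClass.class);

        // 3. by invoking setConcurrentConsumers() directly on the JmsEndpoint
        JmsEndpoint endpoint = getContext().getEndpoint("jms:SomeOtherQueue", JmsEndpoint.class);
        endpoint.setConcurrentConsumers(20);
        from(endpoint).bean(MyClass.class);
    }
}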
31.12.1. Concurrent Consuming with async consumer
Notice that each concurrent consumer will only pick up the next available message from the JMS broker when the current message has been fully processed. You can set the option asyncConsumer=true to let the consumer pick up the message from the JMS queue while the message is being processed asynchronously (by the Asynchronous Routing Engine). See more details in the options table above about the asyncConsumer option.
from("jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true").
  bean(MyClass.class);
31.13. Request-reply over JMS
Camel supports Request Reply over JMS. In essence the MEP of the Exchange should be InOut when you send a message to a JMS queue. Camel offers a number of options to configure request/reply over JMS that influence performance and clustered environments. The table below summarizes the options.
Option Performance Cluster Description
Temporary Fast Yes A temporary queue is used as reply queue, and automatically created by Camel. To use this do not specify a replyTo queue name. And you can optionally configure replyToType=Temporary to make it stand out that temporary queues are in use.
Shared Slow Yes A shared persistent queue is used as reply queue. The queue must be created beforehand, although some brokers can create them on the fly, such as Apache ActiveMQ. To use this you must specify the replyTo queue name. And you can optionally configure replyToType=Shared to make it stand out that shared queues are in use. A shared queue can be used in a clustered environment with multiple nodes running this Camel application at the same time, all using the same shared reply queue. This is possible because JMS Message selectors are used to correlate expected reply messages; this impacts performance though. JMS Message selectors are slower, and therefore not as fast as Temporary or Exclusive queues. See further below how to tweak this for better performance.
Exclusive Fast No (*Yes) An exclusive persistent queue is used as reply queue. The queue must be created beforehand, although some brokers can create them on the fly, such as Apache ActiveMQ. To use this you must specify the replyTo queue name. And you must configure replyToType=Exclusive to instruct Camel to use exclusive queues, as Shared is used by default if a replyTo queue name was configured. When using exclusive reply queues, JMS Message selectors are not in use, and therefore other applications must not use this queue as well. An exclusive queue cannot be used in a clustered environment with multiple nodes running this Camel application at the same time, as we cannot control whether the reply message comes back to the same node that sent the request message; that is why shared queues use JMS Message selectors to make sure of this. Though if you configure each Exclusive reply queue with a unique name per node, then you can run this in a clustered environment, as the reply message will then be sent back to the queue for the node that awaits it.
concurrentConsumers Fast Yes Allows reply messages to be processed concurrently by using concurrent message listeners. You can specify a range using the concurrentConsumers and maxConcurrentConsumers options. Notice: using Shared reply queues may not work as well with concurrent listeners, so use this option with care.
maxConcurrentConsumers Fast Yes Allows reply messages to be processed concurrently by using concurrent message listeners. You can specify a range using the concurrentConsumers and maxConcurrentConsumers options. Notice: using Shared reply queues may not work as well with concurrent listeners, so use this option with care.
The JmsProducer detects the InOut MEP and provides a JMSReplyTo header with the reply destination to be used. By default Camel uses a temporary queue, but you can use the replyTo option on the endpoint to specify a fixed reply queue (see more below about fixed reply queues). Camel will automatically set up a consumer which listens on the reply queue, so you do not need to do anything. This consumer is a Spring DefaultMessageListenerContainer which listens for replies. However, it is fixed to 1 concurrent consumer. That means replies will be processed in sequence, as there is only 1 thread to process the replies. You can configure the listener to use concurrent threads using the concurrentConsumers and maxConcurrentConsumers options. This makes it easier to configure in Camel, as shown below:
from(xxx)
  .inOut().to("activemq:queue:foo?concurrentConsumers=5")
  .to(yyy)
  .to(zzz);
In this route we instruct Camel to route replies asynchronously using a thread pool with 5 threads.
31.13.1. Request-reply over JMS and using a shared fixed reply queue
If you use a fixed reply queue when doing Request Reply over JMS, as shown in the example below, then pay attention to the following.
from(xxx)
  .inOut().to("activemq:queue:foo?replyTo=bar")
  .to(yyy)
In this example the fixed reply queue named "bar" is used. By default Camel assumes the queue is shared when using fixed reply queues, and therefore it uses a JMSSelector to only pick up the expected reply messages (for example, based on the JMSCorrelationID). See the next section for exclusive fixed reply queues. That means it is not as fast as temporary queues. You can speed up how often Camel polls for reply messages using the receiveTimeout option. By default it is 1000 millis. So to make it faster you can set it to 250 millis to poll 4 times per second, as shown:
from(xxx)
  .inOut().to("activemq:queue:foo?replyTo=bar&receiveTimeout=250")
  .to(yyy)
Notice this will cause Camel to send pull requests to the message broker more frequently, and thus requires more network traffic. It is generally recommended to use temporary queues if possible.
31.13.2. Request-reply over JMS and using an exclusive fixed reply queue
In the previous example, Camel assumed the fixed reply queue named "bar" was shared, and thus it used a JMSSelector to only consume the reply messages which it expects. However, there is a drawback to doing this, as JMS selectors are slower. Also, the consumer on the reply queue is slower to update with new JMS selector ids. In fact it only updates when the receiveTimeout option times out, which by default is 1 second. So in theory the reply messages could take up to about 1 second to be detected. On the other hand, if the fixed reply queue is exclusive to the Camel reply consumer, then we can avoid using JMS selectors, and thus be more performant; in fact, as fast as using temporary queues. You can configure the replyToType option to Exclusive to tell Camel that the reply queue is exclusive, as shown in the example below:
from(xxx)
  .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive")
  .to(yyy)
Mind that the queue must be exclusive to each and every endpoint.
So if you have two routes, then they each need a unique reply queue, as shown in the example:
from(xxx)
  .inOut().to("activemq:queue:foo?replyTo=bar&replyToType=Exclusive")
  .to(yyy)

from(aaa)
  .inOut().to("activemq:queue:order?replyTo=order.reply&replyToType=Exclusive")
  .to(bbb)
The same applies if you run in a clustered environment. Then each node in the cluster must use a unique reply queue name, as otherwise each node in the cluster may pick up messages which were intended as a reply for another node. For clustered environments it is recommended to use shared reply queues instead.
31.14. Synchronizing clocks between senders and receivers
When doing messaging between systems, it is desirable that the systems have synchronized clocks. For example, when sending a JMS message you can set a time to live value on the message. Then the receiver can inspect this value and determine if the message is already expired, and thus drop the message instead of consuming and processing it. However, this requires that both sender and receiver have synchronized clocks. If you are using ActiveMQ then you can use the timestamp plugin to synchronize clocks.
31.15. About time to live
Read first above about synchronized clocks. When you do request/reply (InOut) over JMS with Camel then Camel uses a timeout on the sender side, which defaults to 20 seconds (the requestTimeout option). You can control this by setting a higher/lower value. However, the time to live value is still set on the message being sent, so that requires the clocks to be synchronized between the systems. If they are not, then you may want to disable the time to live value being set. This is possible using the disableTimeToLive option from Camel 2.8 onwards. So if you set this option to disableTimeToLive=true, then Camel does not set any time to live value when sending JMS messages. But the request timeout is still active. So for example, if you do request/reply over JMS and have disabled time to live, then Camel will still use a timeout of 20 seconds (the requestTimeout option). That option can of course also be configured. So the two options requestTimeout and disableTimeToLive give you fine-grained control when doing request/reply. You can provide a header in the message to override and use as the request timeout value instead of the endpoint configured value. For example:
from("direct:someWhere")
  .to("jms:queue:foo?replyTo=bar&requestTimeout=30s")
  .to("bean:processReply");
In the route above we have an endpoint-configured requestTimeout of 30 seconds. So Camel will wait up to 30 seconds for the reply message to come back on the bar queue. If no reply message is received then an org.apache.camel.ExchangeTimedOutException is set on the Exchange and Camel continues routing the message, which would then fail due to the exception, and Camel's error handler reacts. If you want to use a per message timeout value, you can set the header with the key org.apache.camel.component.jms.JmsConstants#JMS_REQUEST_TIMEOUT, which has the constant value "CamelJmsRequestTimeout", with a timeout value of long type. For example, we can use a bean to compute the timeout value per individual message, such as calling the "whatIsTheTimeout" method on the service bean, as shown below:
from("direct:someWhere")
  .setHeader("CamelJmsRequestTimeout", method(ServiceBean.class, "whatIsTheTimeout"))
  .to("jms:queue:foo?replyTo=bar&requestTimeout=30s")
  .to("bean:processReply");
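The ServiceBean itself is not shown in this document; a minimal sketch of what such a bean could look like follows. The method signature (taking the message body) and the timeout rule are illustrative assumptions.
public class ServiceBean {

    // returns the per-message request timeout in milliseconds
    public long whatIsTheTimeout(String body) {
        // illustrative rule: allow more time for large payloads
        return body != null && body.length() > 10_000 ? 60_000L : 20_000L;
    }
}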
When you do fire and forget (InOnly) over JMS with Camel then Camel by default does not set any time to live value on the message. You can configure a value by using the timeToLive option. For example, to indicate 5 seconds, you set timeToLive=5000. The option disableTimeToLive can be used to force disabling the time to live, also for InOnly messaging. The requestTimeout option is not used for InOnly messaging.
31.16. Enabling Transacted Consumption
A common requirement is to consume from a queue in a transaction and then process the message using the Camel route. To do this, just ensure that you set the following properties on the component/endpoint:
transacted = true
transactionManager = a Transaction Manager - typically the JmsTransactionManager
See the Transactional Client EIP pattern for further details.
Transactions and Request Reply over JMS
When using Request Reply over JMS you cannot use a single transaction; JMS will not send any messages until a commit is performed, so the server side won't receive anything at all until the transaction commits. Therefore to use Request Reply you must commit a transaction after sending the request and then use a separate transaction for receiving the response. To address this issue the JMS component uses different properties to specify transaction use for one-way messaging and request reply messaging: The transacted property applies only to the InOnly message Exchange Pattern (MEP). You can leverage the DMLC transacted session API using the following properties on the component/endpoint:
transacted = true
lazyCreateTransactionManager = false
The benefit of doing so is that the cacheLevel setting will be honored when using local transactions without a configured TransactionManager. When a TransactionManager is configured, no caching happens at DMLC level and it is necessary to rely on a pooled connection factory. For more details about this kind of setup, see the Spring JMS documentation.
31.17. Using JMSReplyTo for late replies
When using Camel as a JMS listener, it sets an Exchange property with the value of the ReplyTo javax.jms.Destination object, having the key ReplyTo. You can obtain this Destination as follows:
Destination replyDestination = exchange.getIn().getHeader(JmsConstants.JMS_REPLY_DESTINATION, Destination.class);
And then later use it to send a reply using regular JMS or Camel.
// we need to pass in the JMS component, and in this sample we use ActiveMQ
JmsEndpoint endpoint = JmsEndpoint.newInstance(replyDestination, activeMQComponent);
// now we have the endpoint we can use regular Camel API to send a message to it
template.sendBody(endpoint, "Here is the late reply.");
A different solution to sending a reply is to provide the replyDestination object in the same Exchange property when sending. Camel will then pick up this property and use it for the real destination. The endpoint URI must include a dummy destination, however. For example:
// we pretend to send it to some non existing dummy queue
template.send("activemq:queue:dummy", new Processor() {
  public void process(Exchange exchange) throws Exception {
    // and here we override the destination with the ReplyTo destination object so the message is sent there instead of dummy
    exchange.getIn().setHeader(JmsConstants.JMS_DESTINATION, replyDestination);
    exchange.getIn().setBody("Here is the late reply.");
  }
});
31.18. Using a request timeout
In the sample below we send a Request Reply style message Exchange (we use the requestBody method, which uses InOut) to the slow queue for further processing in Camel and we wait for a return reply:
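The original sample is not reproduced in this document; the following is a minimal sketch of the pattern it describes. It assumes a started CamelContext with the JMS component configured and a consumer listening on the slow queue.
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;

public final class RequestTimeoutSample {

    public static String callSlowService(CamelContext context) {
        ProducerTemplate template = context.createProducerTemplate();
        // requestBody uses the InOut MEP, so Camel waits for the reply on a temporary queue;
        // if no reply arrives within requestTimeout, the call fails with an
        // ExchangeTimedOutException as the cause
        return template.requestBody(
                "jms:queue:slow?requestTimeout=20000", "Hello World", String.class);
    }

    private RequestTimeoutSample() {
    }
}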
31.19. Sending an InOnly message and keeping the JMSReplyTo header
When sending to a JMS destination using camel-jms, the producer will use the MEP to detect whether it is InOnly or InOut messaging. However, there can be times when you want to send an InOnly message but keep the JMSReplyTo header. To do so you have to instruct Camel to keep it, otherwise the JMSReplyTo header will be dropped. For example, to send an InOnly message to the foo queue with a JMSReplyTo of the bar queue, you can do as follows:
template.send("activemq:queue:foo?preserveMessageQos=true", new Processor() {
  public void process(Exchange exchange) throws Exception {
    exchange.getIn().setBody("World");
    exchange.getIn().setHeader("JMSReplyTo", "bar");
  }
});
Notice we use preserveMessageQos=true to instruct Camel to keep the JMSReplyTo header.
31.20. Setting JMS provider options on the destination
Some JMS providers, like IBM's WebSphere MQ, need options to be set on the JMS destination. For example, you may need to specify the targetClient option. Since targetClient is a WebSphere MQ option and not a Camel URI option, you need to set that on the JMS destination name like so:
// ...
.setHeader("CamelJmsDestinationName", constant("queue:///MY_QUEUE?targetClient=1"))
.to("wmq:queue:MY_QUEUE?useMessageIDAsCorrelationID=true");
Some versions of WMQ won't accept this option on the destination name and will throw an exception. A workaround is to use a custom DestinationResolver:
JmsComponent wmq = new JmsComponent(connectionFactory);
wmq.setDestinationResolver(new DestinationResolver() {
  public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException {
    MQQueueSession wmqSession = (MQQueueSession) session;
    return wmqSession.createQueue("queue:///" + destinationName + "?targetClient=1");
  }
});
31.21. Spring Boot Auto-Configuration
When using jms with Spring Boot, make sure to use the following Maven dependency to have support for auto configuration:
<dependency>
  <groupId>org.apache.camel.springboot</groupId>
  <artifactId>camel-jms-starter</artifactId>
</dependency>
The component supports 99 options, which are listed below.
Name Description Default Type camel.component.jms.accept-messages-while-stopping Specifies whether the consumer accepts messages while it is stopping. You may consider enabling this option if you start and stop JMS routes at runtime, while there are still messages enqueued on the queue. If this option is false, and you stop the JMS route, then messages may be rejected, and the JMS broker would have to attempt redeliveries, which yet again may be rejected, and eventually the message may be moved to a dead letter queue on the JMS broker. To avoid this, it is recommended to enable this option.
false Boolean camel.component.jms.acknowledgement-mode-name The JMS acknowledgement name, which is one of: SESSION_TRANSACTED, CLIENT_ACKNOWLEDGE, AUTO_ACKNOWLEDGE, DUPS_OK_ACKNOWLEDGE. AUTO_ACKNOWLEDGE String camel.component.jms.allow-additional-headers This option is used to allow additional headers which may have values that are invalid according to JMS specification. For example some message systems such as WMQ do this with header names using prefix JMS_IBM_MQMD_ containing values with byte array or other invalid types. You can specify multiple header names separated by comma, and use as suffix for wildcard matching. String camel.component.jms.allow-auto-wired-connection-factory Whether to auto-discover ConnectionFactory from the registry, if no connection factory has been configured. If only one instance of ConnectionFactory is found then it will be used. This is enabled by default. true Boolean camel.component.jms.allow-auto-wired-destination-resolver Whether to auto-discover DestinationResolver from the registry, if no destination resolver has been configured. If only one instance of DestinationResolver is found then it will be used. This is enabled by default. true Boolean camel.component.jms.allow-null-body Whether to allow sending messages with no body. If this option is false and the message body is null, then an JMSException is thrown. true Boolean camel.component.jms.allow-reply-manager-quick-stop Whether the DefaultMessageListenerContainer used in the reply managers for request-reply messaging allow the DefaultMessageListenerContainer.runningAllowed flag to quick stop in case JmsConfiguration#isAcceptMessagesWhileStopping is enabled, and org.apache.camel.CamelContext is currently being stopped. This quick stop ability is enabled by default in the regular JMS consumers but to enable for reply managers you must enable this flag. false Boolean camel.component.jms.allow-serialized-headers Controls whether or not to include serialized headers. Applies only when transferExchange is true. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. false Boolean camel.component.jms.always-copy-message If true, Camel will always make a JMS message copy of the message when it is passed to the producer for sending. Copying the message is needed in some situations, such as when a replyToDestinationSelectorName is set (incidentally, Camel will set the alwaysCopyMessage option to true, if a replyToDestinationSelectorName is set). false Boolean camel.component.jms.artemis-consumer-priority Consumer priorities allow you to ensure that high priority consumers receive messages while they are active. Normally, active consumers connected to a queue receive messages from it in a round-robin fashion. When consumer priorities are in use, messages are delivered round-robin if multiple active consumers exist with the same high priority. Messages will only going to lower priority consumers when the high priority consumers do not have credit available to consume the message, or those high priority consumers have declined to accept the message (for instance because it does not meet the criteria of any selectors associated with the consumer). Integer camel.component.jms.artemis-streaming-enabled Whether optimizing for Apache Artemis streaming mode. This can reduce memory overhead when using Artemis with JMS StreamMessage types. This option must only be enabled if Apache Artemis is being used. 
false Boolean camel.component.jms.async-consumer Whether the JmsConsumer processes the Exchange asynchronously. If enabled then the JmsConsumer may pickup the message from the JMS queue, while the message is being processed asynchronously (by the Asynchronous Routing Engine). This means that messages may be processed not 100% strictly in order. If disabled (as default) then the Exchange is fully processed before the JmsConsumer will pickup the message from the JMS queue. Note if transacted has been enabled, then asyncConsumer=true does not run asynchronously, as transaction must be executed synchronously (Camel 3.0 may support async transactions). false Boolean camel.component.jms.async-start-listener Whether to startup the JmsConsumer message listener asynchronously, when starting a route. For example if a JmsConsumer cannot get a connection to a remote JMS broker, then it may block while retrying and/or failover. This will cause Camel to block while starting routes. By setting this option to true, you will let routes startup, while the JmsConsumer connects to the JMS broker using a dedicated thread in asynchronous mode. If this option is used, then beware that if the connection could not be established, then an exception is logged at WARN level, and the consumer will not be able to receive messages; You can then restart the route to retry. false Boolean camel.component.jms.async-stop-listener Whether to stop the JmsConsumer message listener asynchronously, when stopping a route. false Boolean camel.component.jms.auto-startup Specifies whether the consumer container should auto-startup. true Boolean camel.component.jms.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jms.cache-level Sets the cache level by ID for the underlying JMS resources. See cacheLevelName option for more details. Integer camel.component.jms.cache-level-name Sets the cache level by name for the underlying JMS resources. Possible values are: CACHE_AUTO, CACHE_CONNECTION, CACHE_CONSUMER, CACHE_NONE, and CACHE_SESSION. The default setting is CACHE_AUTO. See the Spring documentation and Transactions Cache Levels for more information. CACHE_AUTO String camel.component.jms.client-id Sets the JMS client ID to use. Note that this value, if specified, must be unique and can only be used by a single JMS connection instance. It is typically only required for durable topic subscriptions. If using Apache ActiveMQ you may prefer to use Virtual Topics instead. String camel.component.jms.concurrent-consumers Specifies the default number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 1 Integer camel.component.jms.configuration To use a shared JMS configuration. The option is a org.apache.camel.component.jms.JmsConfiguration type. JmsConfiguration camel.component.jms.connection-factory The connection factory to be use. A connection factory must be configured either on the component or endpoint. 
The option is a javax.jms.ConnectionFactory type. ConnectionFactory camel.component.jms.consumer-type The consumer type to use, which can be one of: Simple, Default, or Custom. The consumer type determines which Spring JMS listener to use. Default will use org.springframework.jms.listener.DefaultMessageListenerContainer, Simple will use org.springframework.jms.listener.SimpleMessageListenerContainer. When Custom is specified, the MessageListenerContainerFactory defined by the messageListenerContainerFactory option will determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use. ConsumerType camel.component.jms.correlation-property When using InOut exchange pattern use this JMS property instead of JMSCorrelationID JMS property to correlate messages. If set messages will be correlated solely on the value of this property JMSCorrelationID property will be ignored and not set by Camel. String camel.component.jms.default-task-executor-type Specifies what default TaskExecutor type to use in the DefaultMessageListenerContainer, for both consumer endpoints and the ReplyTo consumer of producer endpoints. Possible values: SimpleAsync (uses Spring's SimpleAsyncTaskExecutor) or ThreadPool (uses Spring's ThreadPoolTaskExecutor with optimal values - cached threadpool-like). If not set, it defaults to the behaviour, which uses a cached thread pool for consumer endpoints and SimpleAsync for reply consumers. The use of ThreadPool is recommended to reduce thread trash in elastic configurations with dynamically increasing and decreasing concurrent consumers. DefaultTaskExecutorType camel.component.jms.delivery-delay Sets delivery delay to use for send calls for JMS. This option requires JMS 2.0 compliant broker. -1 Long camel.component.jms.delivery-mode Specifies the delivery mode to be used. Possible values are those defined by javax.jms.DeliveryMode. NON_PERSISTENT = 1 and PERSISTENT = 2. Integer camel.component.jms.delivery-persistent Specifies whether persistent delivery is used by default. true Boolean camel.component.jms.destination-resolver A pluggable org.springframework.jms.support.destination.DestinationResolver that allows you to use your own resolver (for example, to lookup the real destination in a JNDI registry). The option is a org.springframework.jms.support.destination.DestinationResolver type. DestinationResolver camel.component.jms.disable-reply-to Specifies whether Camel ignores the JMSReplyTo header in messages. If true, Camel does not send a reply back to the destination specified in the JMSReplyTo header. You can use this option if you want Camel to consume from a route and you do not want Camel to automatically send back a reply message because another component in your code handles the reply message. You can also use this option if you want to use Camel as a proxy between different message brokers and you want to route message from one system to another. false Boolean camel.component.jms.disable-time-to-live Use this option to force disabling time to live. For example when you do request/reply over JMS, then Camel will by default use the requestTimeout value as time to live on the message being sent. The problem is that the sender and receiver systems have to have their clocks synchronized, so they are in sync. This is not always so easy to archive. So you can use disableTimeToLive=true to not set a time to live value on the sent message. Then the message will not expire on the receiver system. See below in section About time to live for more details. 
false Boolean camel.component.jms.durable-subscription-name The durable subscriber name for specifying durable topic subscriptions. The clientId option must be configured as well. String camel.component.jms.eager-loading-of-properties Enables eager loading of JMS properties and payload as soon as a message is loaded which generally is inefficient as the JMS properties may not be required but sometimes can catch early any issues with the underlying JMS provider and the use of JMS properties. See also the option eagerPoisonBody. false Boolean camel.component.jms.eager-poison-body If eagerLoadingOfProperties is enabled and the JMS message payload (JMS body or JMS properties) is poison (cannot be read/mapped), then set this text as the message body instead so the message can be processed (the cause of the poison is already stored as an exception on the Exchange). This can be turned off by setting eagerPoisonBody=false. See also the option eagerLoadingOfProperties. Poison JMS message due to ${exception.message} String camel.component.jms.enabled Whether to enable auto configuration of the jms component. This is enabled by default. Boolean camel.component.jms.error-handler Specifies a org.springframework.util.ErrorHandler to be invoked in case of any uncaught exceptions thrown while processing a Message. By default these exceptions will be logged at the WARN level, if no errorHandler has been configured. You can configure logging level and whether stack traces should be logged using errorHandlerLoggingLevel and errorHandlerLogStackTrace options. This makes it much easier to configure, than having to code a custom errorHandler. The option is a org.springframework.util.ErrorHandler type. ErrorHandler camel.component.jms.error-handler-log-stack-trace Allows to control whether stacktraces should be logged or not, by the default errorHandler. true Boolean camel.component.jms.error-handler-logging-level Allows to configure the default errorHandler logging level for logging uncaught exceptions. LoggingLevel camel.component.jms.exception-listener Specifies the JMS Exception Listener that is to be notified of any underlying JMS exceptions. The option is a javax.jms.ExceptionListener type. ExceptionListener camel.component.jms.explicit-qos-enabled Set if the deliveryMode, priority or timeToLive qualities of service should be used when sending messages. This option is based on Spring's JmsTemplate. The deliveryMode, priority and timeToLive options are applied to the current endpoint. This contrasts with the preserveMessageQos option, which operates at message granularity, reading QoS properties exclusively from the Camel In message headers. false Boolean camel.component.jms.expose-listener-session Specifies whether the listener session should be exposed when consuming messages. false Boolean camel.component.jms.force-send-original-message When using mapJmsMessage=false Camel will create a new JMS message to send to a new JMS destination if you touch the headers (get or set) during the route. Set this option to true to force Camel to send the original JMS message that was received. false Boolean camel.component.jms.format-date-headers-to-iso8601 Sets whether JMS date properties should be formatted according to the ISO 8601 standard. false Boolean camel.component.jms.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type.
HeaderFilterStrategy camel.component.jms.idle-consumer-limit Specify the limit for the number of consumers that are allowed to be idle at any given time. 1 Integer camel.component.jms.idle-task-execution-limit Specifies the limit for idle executions of a receive task, not having received any message within its execution. If this limit is reached, the task will shut down and leave receiving to other executing tasks (in the case of dynamic scheduling; see the maxConcurrentConsumers setting). There is additional doc available from Spring. 1 Integer camel.component.jms.include-all-j-m-s-x-properties Whether to include all JMSXxxx properties when mapping from JMS to Camel Message. Setting this to true will include properties such as JMSXAppID, and JMSXUserID etc. Note: If you are using a custom headerFilterStrategy then this option does not apply. false Boolean camel.component.jms.include-sent-j-m-s-message-i-d Only applicable when sending to JMS destination using InOnly (eg fire and forget). Enabling this option will enrich the Camel Exchange with the actual JMSMessageID that was used by the JMS client when the message was sent to the JMS destination. false Boolean camel.component.jms.jms-key-format-strategy Pluggable strategy for encoding and decoding JMS keys so they can be compliant with the JMS specification. Camel provides two implementations out of the box: default and passthrough. The default strategy will safely marshal dots and hyphens (. and -). The passthrough strategy leaves the key as is. Can be used for JMS brokers which do not care whether JMS header keys contain illegal characters. You can provide your own implementation of the org.apache.camel.component.jms.JmsKeyFormatStrategy and refer to it using the # notation. JmsKeyFormatStrategy camel.component.jms.jms-message-type Allows you to force the use of a specific javax.jms.Message implementation for sending JMS messages. Possible values are: Bytes, Map, Object, Stream, Text. By default, Camel would determine which JMS message type to use from the In body type. This option allows you to specify it. JmsMessageType camel.component.jms.lazy-create-transaction-manager If true, Camel will create a JmsTransactionManager, if there is no transactionManager injected when option transacted=true. true Boolean camel.component.jms.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jms.map-jms-message Specifies whether Camel should auto map the received JMS message to a suited payload type, such as javax.jms.TextMessage to a String etc. true Boolean camel.component.jms.max-concurrent-consumers Specifies the maximum number of concurrent consumers when consuming from JMS (not for request/reply over JMS). See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. When doing request/reply over JMS then the option replyToMaxConcurrentConsumers is used to control number of concurrent consumers on the reply message listener. 
Integer camel.component.jms.max-messages-per-task The number of messages per task. -1 is unlimited. If you use a range for concurrent consumers (eg min max), then this option can be used to set a value to eg 100 to control how fast the consumers will shrink when less work is required. -1 Integer camel.component.jms.message-converter To use a custom Spring org.springframework.jms.support.converter.MessageConverter so you can be in control how to map to/from a javax.jms.Message. The option is a org.springframework.jms.support.converter.MessageConverter type. MessageConverter camel.component.jms.message-created-strategy To use the given MessageCreatedStrategy which are invoked when Camel creates new instances of javax.jms.Message objects when Camel is sending a JMS message. The option is a org.apache.camel.component.jms.MessageCreatedStrategy type. MessageCreatedStrategy camel.component.jms.message-id-enabled When sending, specifies whether message IDs should be added. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the message ID set to null; if the provider ignores the hint, the message ID must be set to its normal unique value. true Boolean camel.component.jms.message-listener-container-factory Registry ID of the MessageListenerContainerFactory used to determine what org.springframework.jms.listener.AbstractMessageListenerContainer to use to consume messages. Setting this will automatically set consumerType to Custom. The option is a org.apache.camel.component.jms.MessageListenerContainerFactory type. MessageListenerContainerFactory camel.component.jms.message-timestamp-enabled Specifies whether timestamps should be enabled by default on sending messages. This is just an hint to the JMS broker. If the JMS provider accepts this hint, these messages must have the timestamp set to zero; if the provider ignores the hint the timestamp must be set to its normal value. true Boolean camel.component.jms.password Password to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.jms.preserve-message-qos Set to true, if you want to send message using the QoS settings specified on the message, instead of the QoS settings on the JMS endpoint. The following three headers are considered JMSPriority, JMSDeliveryMode, and JMSExpiration. You can provide all or only some of them. If not provided, Camel will fall back to use the values from the endpoint instead. So, when using this option, the headers override the values from the endpoint. The explicitQosEnabled option, by contrast, will only use options set on the endpoint, and not values from the message header. false Boolean camel.component.jms.priority Values greater than 1 specify the message priority when sending (where 1 is the lowest priority and 9 is the highest). The explicitQosEnabled option must also be enabled in order for this option to have any effect. 4 Integer camel.component.jms.pub-sub-no-local Specifies whether to inhibit the delivery of messages published by its own connection. false Boolean camel.component.jms.queue-browse-strategy To use a custom QueueBrowseStrategy when browsing queues. The option is a org.apache.camel.component.jms.QueueBrowseStrategy type. QueueBrowseStrategy camel.component.jms.receive-timeout The timeout for receiving messages (in milliseconds). The option is a long type. 1000 Long camel.component.jms.recovery-interval Specifies the interval between recovery attempts, i.e. 
when a connection is being refreshed, in milliseconds. The default is 5000 ms, that is, 5 seconds. The option is a long type. 5000 Long camel.component.jms.reply-to Provides an explicit ReplyTo destination (overrides any incoming value of Message.getJMSReplyTo() in consumer). String camel.component.jms.reply-to-cache-level-name Sets the cache level by name for the reply consumer when doing request/reply over JMS. This option only applies when using fixed reply queues (not temporary). Camel will by default use: CACHE_CONSUMER for exclusive or shared w/ replyToSelectorName. And CACHE_SESSION for shared without replyToSelectorName. Some JMS brokers such as IBM WebSphere may require to set the replyToCacheLevelName=CACHE_NONE to work. Note: If using temporary queues then CACHE_NONE is not allowed, and you must use a higher value such as CACHE_CONSUMER or CACHE_SESSION. String camel.component.jms.reply-to-concurrent-consumers Specifies the default number of concurrent consumers when doing request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. 1 Integer camel.component.jms.reply-to-delivery-persistent Specifies whether to use persistent delivery by default for replies. true Boolean camel.component.jms.reply-to-destination-selector-name Sets the JMS Selector using the fixed name to be used so you can filter out your own replies from the others when using a shared queue (that is, if you are not using a temporary reply queue). String camel.component.jms.reply-to-max-concurrent-consumers Specifies the maximum number of concurrent consumers when using request/reply over JMS. See also the maxMessagesPerTask option to control dynamic scaling up/down of threads. Integer camel.component.jms.reply-to-on-timeout-max-concurrent-consumers Specifies the maximum number of concurrent consumers for continue routing when timeout occurred when using request/reply over JMS. 1 Integer camel.component.jms.reply-to-override Provides an explicit ReplyTo destination in the JMS message, which overrides the setting of replyTo. It is useful if you want to forward the message to a remote Queue and receive the reply message from the ReplyTo destination. String camel.component.jms.reply-to-same-destination-allowed Whether a JMS consumer is allowed to send a reply message to the same destination that the consumer is using to consume from. This prevents an endless loop by consuming and sending back the same message to itself. false Boolean camel.component.jms.reply-to-type Allows for explicitly specifying which kind of strategy to use for replyTo queues when doing request/reply over JMS. Possible values are: Temporary, Shared, or Exclusive. By default Camel will use temporary queues. However if replyTo has been configured, then Shared is used by default. This option allows you to use exclusive queues instead of shared ones. See Camel JMS documentation for more details, and especially the notes about the implications if running in a clustered environment, and the fact that Shared reply queues has lower performance than its alternatives Temporary and Exclusive. ReplyToType camel.component.jms.request-timeout The timeout for waiting for a reply when using the InOut Exchange Pattern (in milliseconds). The default is 20 seconds. You can include the header CamelJmsRequestTimeout to override this endpoint configured timeout value, and thus have per message individual timeout values. See also the requestTimeoutCheckerInterval option. The option is a long type. 
20000 Long camel.component.jms.request-timeout-checker-interval Configures how often Camel should check for timed out Exchanges when doing request/reply over JMS. By default Camel checks once per second. But if you must react faster when a timeout occurs, then you can lower this interval, to check more frequently. The timeout is determined by the option requestTimeout. The option is a long type. 1000 Long camel.component.jms.selector Sets the JMS selector to use. String camel.component.jms.stream-message-type-enabled Sets whether StreamMessage type is enabled or not. Message payloads of streaming kind such as files, InputStream, etc will either by sent as BytesMessage or StreamMessage. This option controls which kind will be used. By default BytesMessage is used which enforces the entire message payload to be read into memory. By enabling this option the message payload is read into memory in chunks and each chunk is then written to the StreamMessage until no more data. false Boolean camel.component.jms.subscription-durable Set whether to make the subscription durable. The durable subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a durable subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. false Boolean camel.component.jms.subscription-name Set the name of a subscription to create. To be applied in case of a topic (pub-sub domain) with a shared or durable subscription. The subscription name needs to be unique within this client's JMS client id. Default is the class name of the specified message listener. Note: Only 1 concurrent consumer (which is the default of this message listener container) is allowed for each subscription, except for a shared subscription (which requires JMS 2.0). String camel.component.jms.subscription-shared Set whether to make the subscription shared. The shared subscription name to be used can be specified through the subscriptionName property. Default is false. Set this to true to register a shared subscription, typically in combination with a subscriptionName value (unless your message listener class name is good enough as subscription name). Note that shared subscriptions may also be durable, so this flag can (and often will) be combined with subscriptionDurable as well. Only makes sense when listening to a topic (pub-sub domain), therefore this method switches the pubSubDomain flag as well. Requires a JMS 2.0 compatible message broker. false Boolean camel.component.jms.synchronous Sets whether synchronous processing should be strictly used. false Boolean camel.component.jms.task-executor Allows you to specify a custom task executor for consuming messages. The option is a org.springframework.core.task.TaskExecutor type. TaskExecutor camel.component.jms.test-connection-on-startup Specifies whether to test the connection on startup. This ensures that when Camel starts that all the JMS consumers have a valid connection to the JMS broker. If a connection cannot be granted then Camel throws an exception on startup. This ensures that Camel is not started with failed connections. The JMS producers is tested as well. false Boolean camel.component.jms.time-to-live When sending messages, specifies the time-to-live of the message (in milliseconds). 
-1 Long camel.component.jms.transacted Specifies whether to use transacted mode. false Boolean camel.component.jms.transacted-in-out Specifies whether InOut operations (request reply) default to using transacted mode. If this flag is set to true, then Spring JmsTemplate will have sessionTransacted set to true, and the acknowledgeMode as transacted on the JmsTemplate used for InOut operations. Note from Spring JMS: within a JTA transaction, the parameters passed to createQueue, createTopic methods are not taken into account. Depending on the Java EE transaction context, the container makes its own decisions on these values. Analogously, these parameters are not taken into account within a locally managed transaction either, since Spring JMS operates on an existing JMS Session in this case. Setting this flag to true will use a short local JMS transaction when running outside of a managed transaction, and a synchronized local JMS transaction in case of a managed transaction (other than an XA transaction) being present. This has the effect of a local JMS transaction being managed alongside the main transaction (which might be a native JDBC transaction), with the JMS transaction committing right after the main transaction. false Boolean camel.component.jms.transaction-manager The Spring transaction manager to use. The option is a org.springframework.transaction.PlatformTransactionManager type. PlatformTransactionManager camel.component.jms.transaction-name The name of the transaction to use. String camel.component.jms.transaction-timeout The timeout value of the transaction (in seconds), if using transacted mode. -1 Integer camel.component.jms.transfer-exception If enabled and you are using Request Reply messaging (InOut) and an Exchange failed on the consumer side, then the caused Exception will be sent back in the response as a javax.jms.ObjectMessage. If the client is Camel, the returned Exception is rethrown. This allows you to use Camel JMS as a bridge in your routing - for example, using persistent queues to enable robust routing. Notice that if you also have transferExchange enabled, this option takes precedence. The caught exception is required to be serializable. The original Exception on the consumer side can be wrapped in an outer exception such as org.apache.camel.RuntimeCamelException when returned to the producer. Use this with caution as the data uses Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producer and consumer. false Boolean camel.component.jms.transfer-exchange You can transfer the exchange over the wire instead of just the body and headers. The following fields are transferred: In body, Out body, Fault body, In headers, Out headers, Fault headers, exchange properties, exchange exception. This requires that the objects are serializable. Camel will exclude any non-serializable objects and log it at WARN level. You must enable this option on both the producer and consumer side, so Camel knows the payload is an Exchange and not a regular payload. Use this with caution as the data uses Java Object serialization and requires the receiver to be able to deserialize the data at Class level, which forces a strong coupling between the producers and consumers, which must use compatible Camel versions. false Boolean camel.component.jms.use-message-i-d-as-correlation-i-d Specifies whether JMSMessageID should always be used as JMSCorrelationID for InOut messages.
false Boolean camel.component.jms.username Username to use with the ConnectionFactory. You can also configure username/password directly on the ConnectionFactory. String camel.component.jms.wait-for-provision-correlation-to-be-updated-counter Number of times to wait for provisional correlation id to be updated to the actual correlation id when doing request/reply over JMS and when the option useMessageIDAsCorrelationID is enabled. 50 Integer camel.component.jms.wait-for-provision-correlation-to-be-updated-thread-sleeping-time Interval in millis to sleep each time while waiting for provisional correlation id to be updated. The option is a long type. 100 Long
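The table above maps directly to Spring Boot configuration keys. As a minimal, hedged sketch of how a few of these options could be set in an application.properties file (the values shown are examples only, not recommended defaults, and assume a request/reply route over a fixed reply queue):

    # Example application.properties entries for the camel-jms starter (illustrative values)
    camel.component.jms.request-timeout=30000
    camel.component.jms.receive-timeout=1000
    camel.component.jms.reply-to=bar
    camel.component.jms.reply-to-concurrent-consumers=4
    camel.component.jms.transacted=false
    camel.component.jms.test-connection-on-startup=true

Setting the options at the component level, as above, applies them to every jms: endpoint; as the route examples later in this page show, the same options can also be set per endpoint as URI parameters.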
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jms</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>", "jms:[queue:|topic:]destinationName[?options]", "jms:FOO.BAR", "jms:queue:FOO.BAR", "jms:topic:Stocks.Prices", "jms:destinationType:destinationName", "from(\"jms:queue:foo\"). to(\"bean:myBusinessLogic\");", "from(\"jms:topic:OrdersTopic\"). filter().method(\"myBean\", \"isGoldCustomer\"). to(\"jms:queue:BigSpendersQueue\");", "from(\"file://orders\"). convertBodyTo(String.class). to(\"jms:topic:OrdersTopic\");", "<route> <from uri=\"jms:topic:OrdersTopic\"/> <filter> <method ref=\"myBean\" method=\"isGoldCustomer\"/> <to uri=\"jms:queue:BigSpendersQueue\"/> </filter> </route>", "// setup error handler to use JMS as queue and store the entire Exchange errorHandler(deadLetterChannel(\"jms:queue:dead?transferExchange=true\"));", "from(\"jms:queue:dead\").to(\"bean:myErrorAnalyzer\"); // and in our bean String body = exchange.getIn().getBody(); Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class); // the cause message is String problem = cause.getMessage();", "// we sent it to a seda dead queue first errorHandler(deadLetterChannel(\"seda:dead\")); // and on the seda dead queue we can do the custom transformation before its sent to the JMS queue from(\"seda:dead\").transform(exceptionMessage()).to(\"jms:queue:dead\");", "from(\"file://inbox/order\").to(\"jms:queue:order?messageConverter=#myMessageConverter\");", "from(\"file://inbox/order\").to(\"jms:queue:order?jmsMessageType=Text\");", "from(\"file://inbox/order\").setHeader(\"CamelJmsMessageType\", JmsMessageType.Text).to(\"jms:queue:order\");", "2008-07-09 06:43:04,046 [main ] DEBUG JmsBinding - Ignoring non primitive header: order of class: org.apache.camel.component.jms.issues.DummyOrder with value: DummyOrder{orderId=333, itemId=4444, quantity=2}", "from(\"activemq:queue:in\") .to(\"bean:validateOrder\") .to(ExchangePattern.InOnly, \"activemq:topic:order\") .to(\"bean:handleOrder\");", "from(\"file://inbox\") .to(\"bean:computeDestination\") .to(\"activemq:queue:dummy\");", "public void setJmsHeader(Exchange exchange) { String id = . exchange.getIn().setHeader(\"CamelJmsDestinationName\", \"order:\" + id\"); }", "<bean id=\"weblogic\" class=\"org.apache.camel.component.jms.JmsComponent\"> <property name=\"connectionFactory\" ref=\"myConnectionFactory\"/> </bean> <jee:jndi-lookup id=\"myConnectionFactory\" jndi-name=\"jms/connectionFactory\"/>", "from(\"jms:SomeQueue?concurrentConsumers=20\"). bean(MyClass.class);", "from(\"jms:SomeQueue?concurrentConsumers=20&asyncConsumer=true\"). 
bean(MyClass.class);", "from(xxx) .inOut().to(\"activemq:queue:foo?concurrentConsumers=5\") .to(yyy) .to(zzz);", "from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar\") .to(yyy)", "from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&receiveTimeout=250\") .to(yyy)", "from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&replyToType=Exclusive\") .to(yyy)", "from(xxx) .inOut().to(\"activemq:queue:foo?replyTo=bar&replyToType=Exclusive\") .to(yyy) from(aaa) .inOut().to(\"activemq:queue:order?replyTo=order.reply&replyToType=Exclusive\") .to(bbb)", "from(\"direct:someWhere\") .to(\"jms:queue:foo?replyTo=bar&requestTimeout=30s\") .to(\"bean:processReply\");", "from(\"direct:someWhere\") .setHeader(\"CamelJmsRequestTimeout\", method(ServiceBean.class, \"whatIsTheTimeout\")) .to(\"jms:queue:foo?replyTo=bar&requestTimeout=30s\") .to(\"bean:processReply\");", "Destination replyDestination = exchange.getIn().getHeader(JmsConstants.JMS_REPLY_DESTINATION, Destination.class);", "// we need to pass in the JMS component, and in this sample we use ActiveMQ JmsEndpoint endpoint = JmsEndpoint.newInstance(replyDestination, activeMQComponent); // now we have the endpoint we can use regular Camel API to send a message to it template.sendBody(endpoint, \"Here is the late reply.\");", "// we pretend to send it to some non existing dummy queue template.send(\"activemq:queue:dummy, new Processor() { public void process(Exchange exchange) throws Exception { // and here we override the destination with the ReplyTo destination object so the message is sent to there instead of dummy exchange.getIn().setHeader(JmsConstants.JMS_DESTINATION, replyDestination); exchange.getIn().setBody(\"Here is the late reply.\"); } }", "template.send(\"activemq:queue:foo?preserveMessageQos=true\", new Processor() { public void process(Exchange exchange) throws Exception { exchange.getIn().setBody(\"World\"); exchange.getIn().setHeader(\"JMSReplyTo\", \"bar\"); } });", "// .setHeader(\"CamelJmsDestinationName\", constant(\"queue:///MY_QUEUE?targetClient=1\")) .to(\"wmq:queue:MY_QUEUE?useMessageIDAsCorrelationID=true\");", "com.ibm.msg.client.jms.DetailedJMSException: JMSCC0005: The specified value 'MY_QUEUE?targetClient=1' is not allowed for 'XMSC_DESTINATION_NAME'", "JmsComponent wmq = new JmsComponent(connectionFactory); wmq.setDestinationResolver(new DestinationResolver() { public Destination resolveDestinationName(Session session, String destinationName, boolean pubSubDomain) throws JMSException { MQQueueSession wmqSession = (MQQueueSession) session; return wmqSession.createQueue(\"queue:///\" + destinationName + \"?targetClient=1\"); } });", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jms-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-jms-component-starter
Chapter 10. Pushing a container to a registry and embedding it into an image
Chapter 10. Pushing a container to a registry and embedding it into an image With RHEL image builder, you can build security hardened images using the OpenSCAP tool. You can take advantage of the support for container customization in the blueprints to create a container and embed it directly into the image you create. 10.1. Blueprint customization to embed a container into an image To embed a container from the registry.access.redhat.com registry, you must add a container customization to your blueprint. For example: source - Mandatory field. It is a reference to the container image at a registry. This example uses the registry.access.redhat.com registry. You can specify a tag version. The default tag version is latest . name - The name of the container in the local registry. tls-verify - Boolean field. The tls-verify boolean field controls the transport layer security. The default value is true . RHEL image builder pulls the container during the image build and stores the container into the image. The default local container storage location depends on the image type, so that all supported container-tools , such as Podman, are able to work with it. The embedded containers are not started. To access protected container resources, you can use a containers-auth.json file. 10.2. The Container registry credentials The osbuild-worker@.service is a template service that can start multiple service instances. By default, the osbuild-composer service always starts with only one local osbuild-worker , specifically osbuild-worker@1.service . The osbuild-worker service is responsible for the communication with the container registry. To enable the service, set up the /etc/osbuild-worker/osbuild-worker.toml configuration file. Note After setting the /etc/osbuild-worker/osbuild-worker.toml configuration file, you must restart the osbuild-worker service, because it reads the /etc/osbuild-worker/osbuild-worker.toml configuration file only once, during the osbuild-worker service start. To restart the service instance, restart the systemd service with the following command: With that, you restart all the started instances of osbuild-worker , specifically osbuild-worker@1.service , the only service that might be running. The /etc/osbuild-worker/osbuild-worker.toml configuration file has a containers section with an auth_file_path entry that is a string referring to a path of a containers-auth.json file to be used for accessing protected resources (a sample file is shown at the end of this chapter). The container registry credentials are only used to pull a container image from a registry, when embedding the container into the image. For example: Additional resources The containers-auth.json man page on your system 10.3. Pushing a container artifact directly to a container registry You can push container artifacts, such as RHEL for Edge container images, directly to a container registry after you build them, by using the RHEL image builder CLI. Prerequisites Access to the quay.io registry . This example uses the quay.io container registry as a target registry, but you can use a container registry of your choice. Procedure Set up a registry-config.toml file to select the container provider. The credentials are optional. Create a blueprint in the .toml format. This is a blueprint for the container, in which you install an nginx package. Push the blueprint: Build the container image by passing the registry and the repository to the composer-cli tool as arguments. simple-container - is the blueprint name. container - is the image type. 
"quay.io:8080/osbuild/ repository " - quay.io is the target registry, osbuild is the organization and repository is the location to push the container when it finishes building. Optionally, you can set a tag . If you do not set a value for :tag , it uses :latest tag by default. Note Building the container image takes time because of resolving dependencies of the customized packages. After the image build finishes, the container you created is available in quay.io . Verification Open quay.io . and click Repository Tags . Copy the manifest ID value to build the image in which you want to embed a container. Additional resources Quay.io - Working with tags 10.4. Building an image and pulling the container into the image After you have created the container image, you can build your customized image and pull the container image into it. For that, you must specify a container customization in the blueprint, and the container name for the final image. During the build process, the container image is fetched and placed in the local Podman container storage. Prerequisites You created a container image and pushed it into your local quay.io container registry instance. See Pushing a container artifact directly to a container registry . You have access to registry.access.redhat.com . You have a container manifest ID . You have the qemu-kvm and qemu-img packages installed. Procedure Create a blueprint to build a qcow2 image. The blueprint must contain the " " customization. Push the blueprint: Build the container image: image is the blueprint name. qcow2 is the image type. Note Building the image takes time because it checks the container on quay.io registry. To check the status of the compose: A finished compose shows the FINISHED status value. To identify your compose in the list, use its UUID. After the compose process is finished, download the resulting image file to your default download location: Replace UUID with the UUID value shown in the steps. You can use the qcow2 image you created and downloaded to create a VM. Verification From the resulting qcow2 image that you downloaded, perform the following steps: Start the qcow2 image in a VM. See Creating a virtual machine from a KVM guest image . The qemu wizard opens. Login in to the qcow2 image. Enter the username and password. These can be the username and password you set up in the .qcow2 blueprint in the "customizations.user" section, or created at boot time with cloud-init . Run the container image and open a shell prompt inside the container: registry.access.redhat.com is the target registry, osbuild is the organization and repository is the location to push the container when it finishes building. Check that the packages you added to the blueprint are available: The output shows you the nginx package path. Additional resources Red Hat Container Registry Authentication Accessing and Configuring the Red Hat Registry Basic Podman commands Running Skopeo in a container
[ "[[containers]] source = \"registry.access.redhat.com/ubi9/ubi:latest\" name = \"local-name\" tls-verify = true", "systemctl restart osbuild-worker@*", "[containers] auth_file_path = \"/etc/osbuild-worker/containers-auth.json\"", "provider = \" container_provider \" [settings] tls_verify = false username = \" admin \" password = \" your_password \"", "name = \"simple-container\" description = \"Simple RHEL container\" version = \"0.0.1\" [[packages]] name = \"nginx\" version = \"*\"", "composer-cli blueprints push blueprint.toml", "composer-cli compose start simple-container container \"quay.io:8080/osbuild/ repository \" registry-config.toml", "You can see details about the container you created, such as: - last modified - image size - the `manifest ID`, that you can copy to the clipboard.", "name = \"image\" description = \"A qcow2 image with a container\" version = \"0.0.1\" distro = \"rhel-90\" [[packages]] name = \"podman\" version = \"*\" [[containers]] source = \"registry.access.redhat.com/ubi9:8080/osbuild/container/container-image@sha256:manifest-ID-from-Repository-tag: tag-version\" name = \"source-name\" tls-verify = true", "composer-cli blueprints push blueprint-image .toml", "composer-cli start compose image qcow2", "composer-cli compose status", "composer-cli compose image UUID", "podman run -it registry.access.redhat.com/ubi9:8080/osbuild/ repository /bin/bash/", "type -a nginx" ]
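For reference, Section 10.2 points to a containers-auth.json file for registry credentials. A minimal sketch of such a file is shown below; the registry host name and the base64-encoded user:password token are placeholders, not values taken from this procedure (see the containers-auth.json man page for the authoritative format):

    {
      "auths": {
        "registry.example.com": {
          "auth": "dXNlcjpwYXNzd29yZA=="
        }
      }
    }

The string dXNlcjpwYXNzd29yZA== is simply base64("user:password") and must be replaced with credentials for your own registry.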
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/composing_a_customized_rhel_system_image/assembly_pushing-a-container-to-a-register-and-embedding-it-into-a-image_composing-a-customized-rhel-system-image
8.4. Using Views
8.4. Using Views Virtual directory tree views, or views , create a virtual directory hierarchy, so it is easy to navigate entries, without having to make sure those entries physically exist in any particular place. The view uses information about the entries to place them in the view hierarchy, similarly to members of a filtered role or a dynamic group. Views superimpose a DIT hierarchy over a set of entries, and to client applications, views appear as ordinary container hierarchies. 8.4.1. About Views Views create a directory tree similar to the regular hierarchy, such as using organizational unit entries for subtrees, but views entries have an additional object class ( nsview ) and a filter attribute ( nsviewfilter ) that set up a filter for the entries which belong in that view. Once the view container entry is added, all of the entries that match the view filter instantly populate the view. The target entries only appear to exist in the view; their true location never changes. For example, a view may be created as ou=Location Views , and a filter is set for l=Mountain View . Every entry, such as cn=Jane Smith,l=Mountain View,ou=People,dc=example,dc=com , is immediately listed under the ou=Location Views entry, but the real cn=Jane Smith entry remains in the ou=People,dc=example,dc=com subtree. Figure 8.4. A Directory Tree with a Virtual DIT View hierarchy Virtual DIT views behave like normal DITs in that a subtree or a one-level search can be performed with the expected results being returned. Note There is a sample LDIF file with example views entries, Example-views.ldif , installed with Directory Server. This file is in the /usr/share/dirsrv/data/ directory. The sections in this chapter assume Example-views.ldif is imported to the server. The Red Hat Directory Server Deployment Guide has more information on how to integrate views with the directory tree hierarchy. 8.4.2. Creating Views from the Command Line Use the ldapmodify utility to bind to the server and prepare it to add the new view entry to the configuration file. Assuming the view container ou=Location Views,dc=example,dc=com from the Example-views.ldif file is in the Directory Server, add the new views container entry, in this example, under the dc=example,dc=com root suffix. This entry must have the nsview object class and the nsViewFilter attribute. The nsViewFilter attribute sets the attribute-value which identifies entries that belong in the view. 8.4.3. Improving Views Performance As Section 8.4.1, "About Views" describes, views are derived from search results based on a given filter. Part of the filter is the attribute defined in the nsViewFilter attribute; the rest of the filter is based on the entry hierarchy, looking for the entryid and parentid of the real entries included in the view. If any of the searched-for attributes - entryid , parentid , or the attribute set in nsViewFilter - are not indexed, then the views search becomes an unindexed search because the views operation searches the entire tree for matching entries. To improve views performance, create equality indexes for entryid , parentid , and the attribute set in nsViewFilter . Creating equality indexes is covered in Section 13.2, "Creating Standard Indexes" , and updating existing indexes to include new attributes is covered in Section 13.3, "Creating New Indexes to Existing Databases" .
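Because a view behaves like an ordinary container, you can confirm that it is populated with a normal subtree search against the view entry. The following is a hedged sketch using the OpenLDAP ldapsearch client; the host, port, and bind DN are placeholders. With the Example-views.ldif data imported, entries such as cn=Jane Smith that match l=Mountain View are returned under the view even though they physically remain under ou=People,dc=example,dc=com:

    ldapsearch -H ldap://server.example.com:389 -D "cn=Directory Manager" -W \
        -b "ou=Mountain View,ou=Location Views,dc=example,dc=com" -s sub "(objectClass=*)"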
[ "dn: ou=Mountain View,ou=Location Views,dc=example,dc=com changetype: add objectClass: top objectClass: organizationalUnit objectClass: nsview ou: Mountain View nsViewFilter: l=Mountain View description: views categorized by location", "(|(parentid= search_base_id )(entryid= search_base_id ))" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/using-views
18.3. Linux RAID Subsystems
18.3. Linux RAID Subsystems RAID in Linux is composed of the following subsystems: Linux Hardware RAID Controller Drivers Hardware RAID controllers have no specific RAID subsystem in Linux. Because they use special RAID chipsets, hardware RAID controllers come with their own drivers; these drivers allow the system to detect the RAID sets as regular disks. mdraid The mdraid subsystem was designed as a software RAID solution for Linux; it is also the preferred solution for software RAID under Linux. This subsystem uses its own metadata format, generally referred to as native mdraid metadata. mdraid also supports other metadata formats, known as external metadata. Red Hat Enterprise Linux 7 uses mdraid with external metadata to access ISW / IMSM (Intel firmware RAID) sets. mdraid sets are configured and controlled through the mdadm utility. dmraid The dmraid tool is used on a wide variety of firmware RAID implementations. dmraid also supports Intel firmware RAID, although Red Hat Enterprise Linux 7 uses mdraid to access Intel firmware RAID sets. Note dmraid has been deprecated since the Red Hat Enterprise Linux 7.5 release. It will be removed in a future major release of Red Hat Enterprise Linux. For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.5 Release Notes.
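As a brief illustration of the mdadm utility mentioned above, the following sketch creates a two-device RAID1 set with native mdraid metadata; the partition names are placeholders and are assumed to be unused:

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    cat /proc/mdstat    # confirm the new set and watch the initial resync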
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/raid-subsys
32.2.4. Password Aging
32.2.4. Password Aging For security reasons, it is advisable to require users to change their passwords periodically. This can be done when adding or editing a user on the Password Info tab of the User Manager . To configure password expiration for a user from a shell prompt, use the chage command, followed by an option from Table 32.3, " chage Command Line Options" , followed by the username of the user. Important Shadow passwords must be enabled to use the chage command. Table 32.3. chage Command Line Options Option Description -m <days> Specifies the minimum number of days between which the user must change passwords. If the value is 0, the password does not expire. -M <days> Specifies the maximum number of days for which the password is valid. When the number of days specified by this option plus the number of days specified with the -d option is less than the current day, the user must change passwords before using the account. -d <days> Specifies the number of days since January 1, 1970 the password was changed -I <days> Specifies the number of inactive days after the password expiration before locking the account. If the value is 0, the account is not locked after the password expires. -E <date> Specifies the date on which the account is locked, in the format YYYY-MM-DD. Instead of the date, the number of days since January 1, 1970 can also be used. -W <days> Specifies the number of days before the password expiration date to warn the user. Note If the chage command is followed directly by a username (with no options), it displays the current password aging values and allows them to be changed. You can configure a password to expire the first time a user logs in. This forces users to change passwords the first time they log in. Note This process will not work if the user logs in using the SSH protocol. Lock the user password - If the user does not exist, use the useradd command to create the user account, but do not give it a password so that it remains locked. If the password is already enabled, lock it with the command: Force immediate password expiration - Type the following command: This command sets the value for the date the password was last changed to the epoch (January 1, 1970). This value forces immediate password expiration no matter what password aging policy, if any, is in place. Unlock the account - There are two common approaches to this step. The administrator can assign an initial password or assign a null password. Warning Do not use the passwd command to set the password as it disables the immediate password expiration just configured. To assign an initial password, use the following steps: Start the command line Python interpreter with the python command. It displays the following: At the prompt, type the following commands. Replace <password> with the password to encrypt and <salt> with a random combination of at least 2 of the following: any alphanumeric character, the slash (/) character or a dot (.): The output is the encrypted password, similar to '12CsGd8FRcMSM' . Press Ctrl - D to exit the Python interpreter. At the shell, enter the following command (replacing <encrypted-password> with the encrypted output of the Python interpreter): Alternatively, you can assign a null password instead of an initial password. To do this, use the following command: Warning Using a null password, while convenient, is a highly unsecure practice, as any third party can log in first an access the system using the unsecure username. 
Always make sure that the user is ready to log in before unlocking an account with a null password. In either case, upon initial log in, the user is prompted for a new password.
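As a short illustration of the options in Table 32.3 (the username and values are examples only), the following command forces the user jsmith to change the password at most every 90 days, requires at least 7 days between changes, and warns the user 14 days before the password expires; the standard -l option then displays the resulting aging values:

    chage -M 90 -m 7 -W 14 jsmith
    chage -l jsmith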
[ "usermod -L username", "chage -d 0 username", "Python 2.4.3 (#1, Jul 21 2006, 08:46:09) [GCC 4.1.1 20060718 (Red Hat 4.1.1-9)] on linux2 Type \"help\", \"copyright\", \"credits\" or \"license\" for more information. >>>", "import crypt; print crypt.crypt(\" <password> \",\" <salt> \")", "usermod -p \" <encrypted-password> \" <username>", "usermod -p \"\" username" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s2-redhat-config-users-passwd-aging
Installing on OpenStack
Installing on OpenStack OpenShift Container Platform 4.15 Installing OpenShift Container Platform on OpenStack Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_openstack/index
Chapter 4. Virtualization
Chapter 4. Virtualization New Packages: hyperv-daemons New hyperv-daemons packages have been added to Red Hat Enterprise Linux 6.6. The new packages include the Hyper-V KVP daemon, previously provided by the hypervkvpd package, the Hyper-V VSS daemon, previously provided by the hypervvssd package, and the hv_fcopy daemon, previously provided by the hypervfcopyd package. The suite of daemons provided by hyperv-daemons are needed when a Linux guest is running on a Microsoft Windows host with Hyper-V .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_release_notes/bh-virtualization
Chapter 76. Creating test scenario using the sample Mortgages project
Chapter 76. Creating test scenario using the sample Mortgages project This chapter illustrates creating and executing a test scenario from the sample Mortgages project shipped with Business Central using the test scenario designer. The test scenario example in this chapter is based on the Pricing loans guided decision table from the Mortgages project. Procedure In Business Central, go to Menu Design Projects and click Mortgages . If the project is not listed under Projects , from MySpace , click Try Samples Mortgages OK . The Assets window appears. Click Add Asset Test Scenario . Enter scenario_pricing_loans as the Test Scenario name and select the default mortgages.mortgages package from the Package drop-down list. The package you select must contain all the required rule assets. Select RULE as the Source type . Click Ok to create and open the test scenario in the test scenario designer. Expand Project Explorer and verify the following: Applicant , Bankruptcy , IncomeSource , and LoanApplication data objects exist. Pricing loans guided decision table exists. Verify that the new test scenario is listed under Test Scenario After verifying that everything is in place, return to the Model tab of the test scenario designer and define the GIVEN and EXPECT data for the scenario, based on the available data objects. Figure 76.1. A blank test scenario designer Define the GIVEN column details: Click the cell named INSTANCE 1 under the GIVEN column header. From the Test Tools panel, select the LoanApplication data object. Click Insert Data Object . To create properties for the data object, right-click the property header cell and select Insert column right or Insert column left as required. For this example, you need to create two more property cells under the GIVEN column. Select the first property header cell: From the Test Tools panel, select and expand the LoanApplication data object. Click amount . Click Insert Data Object to map the data object field to the property header cell. Select the second property header cell: From the Test Tools panel, select and expand the LoanApplication data object. Click deposit . Click Insert Data Object . Select the third property header cell: From the Test Tools panel, select and expand the LoanApplication data object. Click lengthYears Click Insert Data Object . Right-click the LoanApplication header cell and select Insert column right . A new GIVEN column to the right is created. Select the new header cell: From the Test Tools panel, select the IncomeSource data object. Click Insert Data Object to map the data object to the header cell. Select the property header cell below IncomeSource : From the Test Tools panel, select and expand the IncomeSource data object. Click type . Click Insert Data Object to map the data object field to the property header cell. You have now defined all the GIVEN column cells. , define the EXPECT column details: Click the cell named INSTANCE 2 under the EXPECT column header. From the Test Tools panel, select LoanApplication data object. Click Insert Data Object . To create properties for the data object, right-click the property header cell and select Insert column right or Insert column left as required. Create two more property cells under the EXPECT column. Select the first property header cell: From the Test Tools panel, select and expand the LoanApplication data object. Click approved . Click Insert Data Object to map the data object field to the property header cell. 
Select the second property header cell: From the Test Tools panel, select and expand the LoanApplication data object. Click insuranceCost . Click Insert Data Object to map the data object field to the property header cell. Select the third property header cell: From the Test Tools panel, select and expand the LoanApplication data object. Click approvedRate . Click Insert Data Object to map the data object field to the property header cell. To define the test scenario, enter the following data in the first row: Enter Row 1 test scenario as the Scenario Description , 150000 as the amount , 19000 as the deposit , 30 as the lengthYears , and Asset as the type for the GIVEN column values. Enter true as approved , 0 as the insuranceCost and 2 as the approvedRate for the EXPECT column values. enter the following data in the second row: Enter Row 2 test scenario as the Scenario Description , 100002 as the amount , 2999 as the deposit , 20 as the lengthYears , and Job as the type for the GIVEN column values. Enter true as approved , 10 as the insuranceCost and 6 as the approvedRate for the EXPECT column values. After you have defined all GIVEN , EXPECT , and other data for the scenario, click Save in the test scenario designer to save your work. Click Run Test in the upper-right corner to run the .scesim file. The test result is displayed in the Test Report panel. Click View Alerts to display messages from the Alerts section. If a test fails, refer to the messages in the Alerts section at the bottom of the window, review and correct all components in the scenario, and try again to validate the scenario until the scenario passes. Click Save in the test scenario designer to save your work after you have made all necessary changes.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/test-designer-create-mortgages-example-proc
Creating and managing instances
Creating and managing instances Red Hat OpenStack Platform 17.1 Create and manage instances using the CLI OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_instances/index
Chapter 3. Considering Hardware
Chapter 3. Considering Hardware Considering hardware is an important part of building Ceph Storage clusters and Ceph Object Gateway clusters for production environments. High-level considerations include: Considering Storage Sizing Considering Storage Density Considering Uninterrupted Power Supplies Considering Network Hardware Selecting Hardware for Use Cases Selecting Media for Indexes Selecting Media for Monitor Nodes Important Consider these factors BEFORE identifying and purchasing computing and networking hardware for the cluster. 3.1. Considering Storage Sizing One of the most important factors in designing a cluster is to determine the storage requirements (sizing). Ceph Storage is designed to scale into petabytes and beyond. The following examples are common sizes for Ceph storage clusters. Small: 250 terabytes Medium: 1 petabyte Large: 2 petabytes or more. Sizing should include current needs and the needs of the near future. Consider the rate at which the gateway client will add new data to the cluster. That may differ from use case to use case. For example, recording CCTV video, 4k video or medical imaging may add significant amounts of data far more quickly than less storage-intensive information such as financial market data. Additionally, consider that data durability methods such as replication versus erasure coding will have a significant impact on the storage media required. For additional information on sizing, see the Red Hat Ceph Storage Hardware Selection Guide and its associated links for selecting OSD hardware. 3.2. Considering Storage Density Another important aspect of cluster design includes storage density. Generally, a cluster should store data across at least 10 nodes to ensure reasonable performance when replicating, backfilling and recovering. If a node fails, with at least 10 nodes in the cluster, only 10% of the data has to move to the surviving nodes. If the number of nodes is substantially less, a higher percentage of the data must move to the surviving nodes. Additionally, the full_ratio and near_full_ratio need to be set to accommodate a node failure to ensure that the cluster can write data. For this reason, it is important to consider storage density. Higher storage density isn't necessarily a good idea. Another factor that favors more nodes over higher storage density is erasure coding. When writing an object using erasure coding and using node as the minimum CRUSH failure domain, the cluster will need as many nodes as data and coding chunks. For example, a cluster using k=8, m=3 should have at least 11 nodes so that each data or coding chunk is stored on a separate node. Hot-swapping is also an important consideration. Most modern servers support drive hot-swapping. However, some hardware configurations require removing more than one drive to replace a drive. Red Hat recommends avoiding such configurations, because they can bring down more OSDs than required when swapping out failed disks. 3.3. Considering Network Hardware A major advantage of Ceph Storage is that it allows scaling capacity, IOPS and throughput independently. An important aspect of a cloud storage solution is that clusters can run out of IOPS due to network latency and other factors, or run out of throughput due to bandwidth constraints, long before the clusters run out of storage capacity. This means that the network hardware configuration must support the use case(s) in order to meet price/performance targets. 
Network performance is increasingly important when considering the use of SSDs, flash, NVMe, and other high performance storage methods. Another important consideration of Ceph Storage is that it supports a front side or public network for client and monitor data, and a back side or cluster network for heart beating, data replication and recovery. This means that the back side or cluster network will always require more network resources than the front side or public network. Depending upon whether the data pool uses replication or erasure coding for data durability, the network requirements for the back side or cluster network should be quantified appropriately. Finally, verify network throughput before installing and testing Ceph. Most performance-related problems in Ceph usually begin with a networking issue. Simple network issues like a kinked or bent Cat-6 cable could result in degraded bandwidth. Use a minimum of 10Gbe for the front side network. For large clusters, consider using 40Gbe for the backend or cluster network. Alternatively, use LACP mode 4 to bond networks. Additionally, use jumbo frames (MTU 9000), especially on the backend or cluster network. 3.4. Considering Uninterrupted Power Supplies Since Ceph writes are atomic- all or nothing- it isn't a requirement to invest in uninterruptable power supplies (UPS) for Ceph OSD nodes. However, Red Hat recommends investing in UPSs for Ceph Monitor nodes. Monitors use leveldb , which is sensitive to synchronous write latency. A power outage could cause corruption, requiring technical support to restore the state of the cluster. Ceph OSDs may benefit from the use of a UPS if a storage controller uses a writeback cache. In this scenario, a UPS may help prevent filesystem corruption during a power outage if the controller doesn't flush the writeback cache in time. 3.5. Selecting Hardware for Use Cases A major advantage of Ceph Storage is that it can be configured to support many use cases. Generally, Red Hat recommends configuring OSD hosts identically for a particular use case. The three primary use cases for a Ceph Storage cluster are: IOPS optimized Throughput optimized Capacity optimized Since these use cases typically have different drive, HBA controller and networking requirements among other factors, configuring a series of identical hosts to facilitate all of these use cases with a single node configuration is possible, but is not necessarily recommended. Using the same hosts to facilitate multiple CRUSH hierarchies will involve the use of logical, rather than actual host names in the CRUSH map. Additionally, deployment tools such as Ansible would need to consider a group for each use case, rather than deploying all OSDs in the default [osds] group. Note Generally, it is easier to configure and manage hosts that serve a single use case, such as high IOPS, high throughput, or high capacity. 3.6. Selecting Media for Indexes When selecting OSD hardware for use with a Ceph Object Gateway-- irrespective of the use case-- it is required to have an OSD node that has at least one high performance drive, either an SSD or NVMe drive, for storing the index pool. This is particularly important when buckets contain a large number of objects. For Red Hat Ceph Storage running Bluestore, Red Hat recommends deploying an NVMe drive as a block.db device, rather than as a separate pool. Ceph Object Gateway index data is written only into an object map (OMAP). OMAP data for BlueStore resides on the block.db device on an OSD. 
When an NVMe drive functions as a block.db device for an HDD OSD and when the index pool is backed by HDD OSDs, the index data will ONLY be written to the block.db device. As long as the block.db partition/lvm is sized properly at 4% of block, this configuration is all that is needed for BlueStore. Note Red Hat does not support HDD devices for index pools. For more information on supported configurations, see the Red Hat Ceph Storage: Supported configurations article. An index entry is approximately 200 bytes of data, stored as an OMAP in rocksdb . While this is a trivial amount of data, some uses of Ceph Object Gateway can result in tens or hundreds of millions of objects in a single bucket. By mapping the index pool to a CRUSH hierarchy of high performance storage media, the reduced latency provides a dramatic performance improvement when buckets contain very large numbers of objects. Important In a production cluster, a typical OSD node will have at least one SSD or NVMe drive for storing the OSD journal and the index pool or block.db device, which will use separate partitions or logical volumes when using the same physical drive. 3.7. Selecting Media for Monitor Nodes Ceph monitors use leveldb , which is sensitive to synchronous write latency. Red Hat strongly recommends using SSDs to store monitor data. Ensure that the selected SSDs have sufficient sequential write and throughput characteristics.
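To put the 4% block.db guidance from Section 3.6 into concrete terms: for an OSD whose block device is a 4 TB HDD (an example capacity, not a recommendation), the block.db partition or logical volume on the NVMe drive would be sized at roughly 0.04 x 4 TB = 160 GB.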
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/object_gateway_for_production_guide/assembly-considering-hardware-rgw-prod
Chapter 4. Performing operations with the Image service (glance)
Chapter 4. Performing operations with the Image service (glance) You can create and manage images in the Red Hat OpenStack Services on OpenShift (RHOSO) Image service (glance). Note To execute openstack client commands on the cloud, you must specify the name of the cloud detailed in your clouds.yaml file. You can specify the name of the cloud by using one of the following methods: Use the --os-cloud option with each command: Use this option if you access more than one cloud. Create an environment variable for the cloud name in your bashrc file: Prerequisites The administrator has created a project for you, and they have provided you with a clouds.yaml file for you to access the cloud. You have installed the python-openstackclient package. 4.1. Creating OS images To create OS images that you can manage in the Image service (glance), you can use Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM) instance images, or you can manually create RHOSO-compatible images in the QCOW2 format by using RHEL ISO files or Windows ISO files. 4.1.1. Virtual machine image formats A virtual machine (VM) image is a file that contains a virtual disk with a bootable OS installed. Red Hat OpenStack Services on OpenShift (RHOSO) supports VM images in different formats. The disk format of a VM image is the format of the underlying disk image. The container format indicates if the VM image is in a file format that also contains metadata about the VM. When you add an image to the Image service (glance), you can set the disk or container format for your image to any of the values in the following tables by using the --disk-format and --container-format command options with the openstack image create , glance image-create-via-import , and openstack image set commands. If you are not sure of the container format of your VM image, you can set it to bare . Table 4.1. Disk image formats Format Description aki Indicates an Amazon kernel image that is stored in the Image service. ami Indicates an Amazon machine image that is stored in the Image service. ari Indicates an Amazon ramdisk image that is stored in the Image service. iso Sector-by-sector copy of the data on a disk, stored in a binary file. Although an ISO file is not normally considered a VM image format, these files contain bootable file systems with an installed operating system, and you use them in the same way as other VM image files. ploop A disk format supported and used by Virtuozzo to run OS containers. qcow2 Supported by QEMU emulator. This format includes QCOW2v3 (sometimes referred to as QCOW3), which requires QEMU 1.1 or higher. raw Unstructured disk image format. vdi Supported by VirtualBox VM monitor and QEMU emulator. vhd Virtual Hard Disk. Used by VM monitors from VMware, VirtualBox, and others. vhdx Virtual Hard Disk v2. Disk image format with a larger storage capacity than VHD. vmdk Virtual Machine Disk. Disk image format that allows incremental backups of data changes from the time of the last backup. Table 4.2. Container image formats Format Description aki Indicates an Amazon kernel image that is stored in the Image service. ami Indicates an Amazon machine image that is stored in the Image service. ari Indicates an Amazon ramdisk image that is stored in the Image service. bare Indicates there is no container or metadata envelope for the image. docker Indicates a TAR archive of the file system of a Docker container that is stored in the Image service. 
ova Indicates an Open Virtual Appliance (OVA) TAR archive file that is stored in the Image service. This file is stored in the Open Virtualization Format (OVF) container file. ovf OVF container file format. Open standard for packaging and distributing virtual appliances or software to be run on virtual machines. 4.1.2. Creating RHEL KVM images Use Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM) instance images to create images that you can manage in the Red Hat OpenStack Services on OpenShift (RHOSO) Image service (glance). 4.1.2.1. Using a RHEL KVM instance image You can use the following Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM) instance image with Red Hat OpenStack Services on OpenShift (RHOSO): Red Hat Enterprise Linux 9 KVM Guest Image QCOW2 images are configured with cloud-init and must have EC2-compatible metadata services for provisioning Secure Shell (SSH) keys to function correctly. Ready Windows KVM instance images in QCOW2 format are not available. Note For KVM instance images: The root account in the image is deactivated, but sudo access is granted to a special user named cloud-user . There is no root password set for this image. The root password is locked in /etc/shadow by placing !! in the second field. For a RHOSO instance, generate an SSH keypair from the RHOSO dashboard or command line, and use that key combination to perform an SSH public authentication to the instance as root user. When you launch the instance, this public key is injected to it. You can then authenticate by using the private key that you download when you create the keypair. 4.1.2.2. Creating a RHEL-based root partition image for bare-metal instances To create a custom root partition image for bare-metal instances, download the base Red Hat Enterprise Linux KVM instance image, and then upload the image to the Image service (glance). Procedure Download the base Red Hat Enterprise Linux KVM instance image from the Customer Portal . Define DIB_LOCAL_IMAGE as the downloaded image: Replace <ver> with the RHEL version number of the image. Set your registration information depending on your method of registration: Red Hat Customer Portal: Red Hat Satellite: Replace values in angle brackets <> with the correct values for your Red Hat Customer Portal or Red Hat Satellite registration. Optional: If you have any offline repositories, you can define DIB_YUM_REPO_CONF as a local repository configuration: Replace <file-path> with the path to your local repository configuration file. Use the diskimage-builder tool to extract the kernel as rhel-image.vmlinuz and the initial RAM disk as rhel-image.initrd : Upload the images to the Image service: 4.1.2.3. Creating a RHEL-based whole-disk user image for bare-metal instances To create a whole-disk user image for bare-metal instances, download the base Red Hat Enterprise Linux KVM instance image, and then upload the image to the Image service (glance). Procedure Download the base Red Hat Enterprise Linux KVM instance image from the Customer Portal . Define DIB_LOCAL_IMAGE as the downloaded image: Replace <ver> with the RHEL version number of the image. Set your registration information depending on your method of registration: Red Hat Customer Portal: Red Hat Satellite: Replace values in angle brackets <> with the correct values for your Red Hat Customer Portal or Red Hat Satellite registration. 
Optional: If you have any offline repositories, you can define DIB_YUM_REPO_CONF as a local repository configuration: Replace <file-path> with the path to your local repository configuration file. Upload the image to the Image service: 4.1.3. Creating instance images with RHEL or Windows ISO files You can create custom Red Hat Enterprise Linux (RHEL) or Windows images in QCOW2 format from ISO files, and upload these images to the Red Hat OpenStack Services on OpenShift (RHOSO) Image service (glance) for use when creating instances. 4.1.3.1. Prerequisites A Linux host machine to create an image. This can be any machine on which you can install and run the Linux packages, except for the undercloud or the overcloud. The advanced-virt repository is enabled: The virt-manager application is installed to have all packages necessary to create a guest operating system: The libguestfs-tools package is installed to have a set of tools to access and modify virtual machine images: A RHEL 9 ISO file or a Windows ISO file. For more information about RHEL ISO files, see RHEL 9.0 Binary DVD . If you do not have a Windows ISO file, see the Microsoft Evaluation Center to download an evaluation image. A text editor, if you want to change the kickstart files (RHEL only). Important If you install the libguestfs-tools package on the undercloud, deactivate iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud: When you have the prerequisites in place, you can proceed to create a RHEL or Windows image: Create a Red Hat Enterprise Linux 9 image Create a Windows image 4.1.3.2. Creating a Red Hat Enterprise Linux 9 image You can create a Red Hat OpenStack Services on OpenShift (RHOSO) image in QCOW2 format by using a Red Hat Enterprise Linux (RHEL) 9 ISO file. Procedure Log on to your host machine as the root user. Start the installation by using virt-install : Replace the values in angle brackets <> with the correct values for your RHEL 9 image. This command launches an instance and starts the installation process. Note If the instance does not launch automatically, run the virt-viewer command to view the console: Configure the instance: At the initial Installer boot menu, select Install Red Hat Enterprise Linux 9 . Choose the appropriate Language and Keyboard options. When prompted about which type of devices your installation uses, select Auto-detected installation media . When prompted about which type of installation destination, select Local Standard Disks . For other storage options, select Automatically configure partitioning . In the Which type of installation would you like? window, choose the Basic Server install, which installs an SSH server. For network and host name, select eth0 for network and choose a host name for your device. The default host name is localhost.localdomain . Enter a password in the Root Password field and enter the same password again in the Confirm field. When the on-screen message confirms that the installation is complete, reboot the instance and log in as the root user. Update the /etc/sysconfig/network-scripts/ifcfg-eth0 file so that it contains only the following values: Reboot the machine. Register the machine with the Content Delivery Network. Replace pool-id with a valid pool ID. You can see a list of available pool IDs by running the subscription-manager list --available command. 
Update the system: Install the cloud-init packages: Edit the /etc/cloud/cloud.cfg configuration file and add the following content under cloud_init_modules : The resolv-conf option automatically configures the resolv.conf file when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain , and other options. Add the following line to /etc/sysconfig/network to avoid issues when accessing the EC2 metadata service: To ensure that the console messages appear in the Log tab on the dashboard and the nova console-log output, add the following boot option to the /etc/default/grub file: Run the grub2-mkconfig command: The output is as follows: Deregister the instance so that the resulting image does not contain the subscription details for this instance: Power off the instance: Reset and clean the image by using the virt-sysprep command so that it can be used to create instances without issues: Reduce the image size by converting any free space in the disk image back to free space in the host: This command creates a new <rhel9-cloud.qcow2> file in the location from where the command is run. Note You must manually resize the partitions of instances based on the image in accordance with the disk space in the flavor that is applied to the instance. The <rhel9-cloud.qcow2> image file is ready to be uploaded to the Image service. For more information about uploading this image to your RHOSO deployment, see Uploading images to the Image service . 4.1.3.3. Creating a Windows image You can create a Red Hat OpenStack Services on OpenShift (RHOSO) image in QCOW2 format by using a Windows ISO file. Procedure Log on to your host machine as the root user. Start the installation by using virt-install : Replace the values in angle brackets <> withe the correct values for your Windows image. Note The --os-type=windows parameter ensures that the clock is configured correctly for the Windows instance and enables its Hyper-V enlightenment features. You must also set os_type=windows in the image metadata before uploading the image to the Image service (glance). The virt-install command saves the instance image as /var/lib/libvirt/images/<windows-image>.qcow2 by default. If you want to keep the instance image elsewhere, change the parameter of the --disk option: Replace <file-name> with the name of the file that stores the instance image, and optionally its path. For example, path=win8.qcow2,size=8 creates an 8 GB file named win8.qcow2 in the current working directory. Note If the instance does not launch automatically, run the virt-viewer command to view the console: For more information about how to install Windows, see the Microsoft documentation. To allow the newly-installed Windows system to use the virtualized hardware, you might need to install VirtIO drivers. For more information, see Installing KVM paravirtualized drivers for Windows virtual machines in Configuring and managing virtualization . To complete the configuration, download and run Cloudbase-Init on the Windows system. At the end of the installation of Cloudbase-Init, select the Run Sysprep and Shutdown checkboxes. The Sysprep tool makes the instance unique by generating an OS ID, which is used by certain Microsoft services. Important Red Hat does not provide technical support for Cloudbase-Init. If you encounter an issue, see Contact Cloudbase Solutions . When the Windows system shuts down, the <windows-image.qcow2> image file is ready to be uploaded to the Image service. 
For more information about uploading this image to your RHOSO deployment, see Uploading images to the Image service . 4.1.4. Creating an image for UEFI Secure Boot If your Red Hat OpenStack Services on OpenShift (RHOSO) deployment contains UEFI Secure Boot Compute nodes, you can create a Secure Boot image that cloud users can use to launch Secure Boot instances. Procedure Create a new image for UEFI Secure Boot: Replace <base_image_file> with an image file that supports UEFI and the GUID Partition Table (GPT) standard, and includes an EFI system partition. Replace <container_format> with one of the following container formats: none, ami, ari, aki, bare, ovf, ova, docker Replace <disk_format> with one of the following disk formats: none, ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso, ploop. If the default machine type is not q35 , then set the machine type to q35 : Specify that the instance must be scheduled on a UEFI Secure Boot host: 4.1.5. Metadata properties for virtual hardware The Compute service (nova) has deprecated support for using libosinfo data to set default device models. Instead, use the following image metadata properties to configure the optimal virtual hardware for an instance: os_distro os_version hw_cdrom_bus hw_disk_bus hw_scsi_model hw_vif_model hw_video_model hypervisor_type 4.2. Uploading, importing, and managing images Manage images and the properties and formats of images that you upload, import, or store in the Red Hat OpenStack Services on OpenShift (RHOSO) Image service (glance). 4.2.1. Uploading images to the Image service You can upload an image to the OpenStack Image service (glance) by using the openstack image create command with the --property option. Procedure Use the openstack image create command with the property option to upload an image. For example: Replace <name> with a descriptive name for your image. Replace <disk-format> with one of the following disk formats: none, ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso, ploop. Replace <container-format> with one of the following container formats: none, ami, ari, aki, bare, ovf, ova, docker. Replace </path/to/image> with the file path to your image file. Replace <os_version> and <11.10> with the key-value pair of the property you want to associate to your image. You can use the --property option multiple times with different key-value pairs you want to associate to your image. 4.2.2. Image service image import methods You can import images to the Image service (glance) by using the following methods: Use the web-download (default) method to import images from a URI. Use the copy-image method to copy an existing image to other Image service back ends that are in your deployment. Use this import method only if multiple Image service back ends are enabled in your deployment. The web-download method is enabled by default, but the administrator configures other import methods. You can run the openstack image import info command to list available import options. 4.2.2.1. Importing an image from a remote URI You can use the web-download image import method to copy an image from a remote URI to the OpenStack Image service (glance). The Image service web-download method uses a two-stage process to perform the import: The web-download method creates an image record. The web-download method retrieves the image from the specified URI. The URI is subject to optional allowlist and blocklist filtering. 
If the Inject Image Metadata plugin is enabled in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, the plugin might inject metadata properties into the image. These metadata properties determine which Compute nodes the image instances are launched on. Procedure Create an image and specify the URI of the image to import: Replace <container_format> with one of the following container formats: none, ami, ari, aki, bare, ovf, ova, docker Replace <disk_format> with one of the following disk formats: none, ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso, ploop. Replace <name> with a descriptive name for your image. Replace <uri> with the URI of your image. Verification Check the availability of the image: Replace <image-id> with the image ID you provided during image creation. 4.2.2.2. Importing an image from a local volume The glance-direct image import method creates an image record, which generates an image ID. When you upload an image to the Image service (glance) from a local volume, the image is stored in a staging area and becomes active when it passes any configured checks. Note The glance-direct method requires a shared staging area when used in a highly available (HA) configuration. If you upload images by using the glance-direct import method, the upload can fail in an HA environment if a shared staging area is not present. In an HA active-active environment, API calls are distributed to the Image service controllers. The download API call can be sent to a different controller than the API call to upload the image. The glance-direct image import method uses three different calls to import an image: openstack image create openstack image stage openstack image import You can use the glance image-create-via-import command to perform all three of the glance-direct calls in one command. Procedure Use the glance image-create-via-import command to import a local image: Replace <container-format> with one of the following container formats: none, ami, ari, aki, bare, ovf, ova, docker Replace <disk-format> with one of the following disk formats: none, ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso, ploop. Replace <name> with a descriptive name for your image. Replace </path/to/image> with the file path to your image file. When the image moves from the staging area to the back-end storage location, the image is listed. However, it might take some time for the image to become active. Verification Check the availability of the image: Replace <image-id> with the image ID you provided during image creation. 4.2.3. Converting the format of an image When you import an image to the Image service (glance), you can convert the image to a different format if your administrator has configured the Image Conversion plugin with a preferred format for images in your Red Hat OpenStack Services on OpenShift (RHOSO) deployment. For example, if you import a QCOW2 image to the Image service and the Image Conversion plugin is configured to the preferred format of RAW, your QCOW2 image is converted to the RAW format when you import it. You can trigger image conversion only when you import an image. It does not run when you upload an image. When you import an image to the Image service, the bits of the image are stored in a particular format in a temporary location. When you activate the Image Conversion plugin, the image is converted to the target format and moved to a final storage destination. When the task is finished, the Image service deletes the temporary location. 
The Image service does not retain the format that you initially uploaded. Procedure Convert the format of an image by using the web-download or glance-direct import method: Convert the format by using the glance image-create-via-import command with web-download : Replace <disk-format> with one of the following disk formats: none, ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso, ploop. Replace <container-format> with one of the following container formats: none, ami, ari, aki, bare, ovf, ova, docker Replace <name> with a descriptive name for your image. Replace <http://server/image.qcow2> with the URI of your image. Convert the format by using the glance-direct image import method: Replace <local_file.qcow2> with your image file. 4.2.3.1. Converting an image to RAW format manually To launch instances from images that are stored in Red Hat Ceph Storage more efficiently, the image format must be RAW. If your administrator has enabled the Image Conversion plugin for your Red Hat OpenStack Services on OpenShift (RHOSO) deployment, your QCOW2 images are automatically converted to RAW format when you import them to the Image service. Alternatively, you can convert the image manually. Procedure When you convert an image to RAW format, the RAW image is larger in size than the original QCOW2 image file. Run the following command before the conversion to determine the final RAW image size: Replace <image_id> with the ID of your QCOW2 image. Convert the image from QCOW2 to RAW format: 4.2.3.2. Storing an image in RAW format With the GlanceImageImportPlugins parameter enabled, run the following command to store a previously created image in RAW format: Replace <name> with the name of the image; this is the name that will appear in openstack image list . Replace <http://server/image.qcow2> with the location and file name of the QCOW2 image. Note This command example creates the image record and imports it by using the web-download method. 4.2.4. Updating image properties You can update the properties of an image you have stored in the Image service (glance) by using the openstack image set command with the --property option. Procedure Use the openstack image set command with the --property option to update an image. For example: Replace <image-id> with the ID of the image you want to update. Replace <architecture> and <x86_64> with the key-value pair of the property you want to update for your image. You can use the --property option multiple times with different key-value pairs you want to associate to your image. 4.2.5. Hiding or unhiding images You can hide public images from normal listings presented to cloud users. For example, you can hide obsolete CentOS 7 images and show only the latest version to simplify the user experience. By default, project administrators and project members can delete images. Cloud users can discover and use hidden images. To create a hidden image, add the --hidden argument to the openstack image create command. Procedure Hide an image: Unhide an image: List hidden images: 4.2.6. Deleting images from the Image service Use the openstack image delete command to delete one or more images that you do not need to store in the Image service (glance). By default, project administrators and project members can delete images. Procedure Delete one or more images: Replace <image-id> with the ID of the image you want to delete. Warning The openstack image delete command permanently deletes the image and all copies of the image, as well as the image instance and metadata. 4.3. 
Importing and copying images to single or multiple stores When you configure the Image service (glance) to use Red Hat Ceph Storage as a back end, you can import image data from a local file system or a web server to multiple Ceph Storage clusters. You can import an image from a web server to multiple stores at once. If the image is not available on a web server, you can import the image from a local file system to the central store, and then copy it to other stores. Important Always store an image copy on the central site, even if there are no instances using the image at the central location. 4.3.1. Importing image data to a single store You can use the Image service (glance) to import image data to a single store. Procedure Import image data to a single store: Replace <image-name> with the name of the image you want to import. Replace <uri> with the URI of the image. Replace <store> with the name of the store to which you want to copy the image data. Note If you do not include the options of --stores , --all-stores , or --store in the command, the Image service creates the image in the central store. Verify that the image data was added to specific stores: Replace <image-id> with the ID of the original image. The output displays a comma-delimited list of stores. 4.3.2. Importing image data to multiple stores Because the default setting of the --allow-failure parameter is true , you do not need to include the parameter in the command if it is acceptable for some stores to fail to import the image data. Note This procedure does not require all stores to successfully import the image data. Procedure Import image data to multiple, specified stores: Replace <image-name> with the name of the image you want to import. Replace <uri> with the URI of the image. Replace <store-1> , <store-2> , and <store-3> with the names of the stores to which you want to import the image data. Alternatively, replace --stores with --all-stores true to upload the image to all the stores. 4.3.3. Importing image data to all stores without failure This procedure requires all stores to successfully import the image data. Procedure Import image data to multiple, specified stores: Replace <image-name> with the name of the image you want to import. Replace <uri> with the URI of the image. Replace <store-1> , <store-2> , and <store-3> with the names of stores to which you want to copy the image data. Alternatively, replace --stores with --all-stores true to upload the image to all the stores. Note With the --allow-failure parameter set to false , the Image service (glance) does not ignore stores that fail to import the image data. You can view the list of failed stores with the image property os_glance_failed_import . For more information, see Section 4.3.4, "Checking the progress of the image import operation" . Verification Verify that the image data was added to specific stores: Replace <image-id> with the ID of the original existing image. The output displays a comma-delimited list of stores. 4.3.4. Checking the progress of the image import operation The image import workflow sequentially imports image data into stores. The size of the image, the number of stores, and the network speed between the central site and the edge sites impact how long it takes for the image import operation to complete. 
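As a rough illustration of how you might automate this check, the following shell sketch polls an image until the import operation finishes and then prints the store-related properties. It assumes the openstack CLI is already configured for your cloud; <image-id> is a placeholder for the ID of the image you are importing, and the 10-second polling interval is an arbitrary choice.
# Poll the image status until it is no longer "importing" (illustrative helper loop).
while openstack image show <image-id> -f value -c status | grep -q importing; do
  sleep 10
done
# Show which stores succeeded or failed once the import completes.
openstack image show <image-id> | grep -E 'status|os_glance_importing_to_stores|os_glance_failed_import'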
You can follow the progress of the image import by looking at two image properties, which appear in notifications sent during the image import operation: The os_glance_importing_to_stores property lists the stores that have not imported the image data. At the beginning of the import, all requested stores show up in the list. Each time a store successfully imports the image data, the Image service removes the store from the list. The os_glance_failed_import property lists the stores that fail to import the image data. This list is empty at the beginning of the image import operation. Note In the following procedure, the environment has three Red Hat Ceph Storage clusters: the central store and two stores at the edge, dcn0 and dcn1 . Procedure Verify that the image data was added to specific stores: Replace <image-id> with the ID of the original image. The output displays a comma-delimited list of stores similar to the following example snippet: Monitor the status of the image import operation. When you precede a command with watch , the command output refreshes every two seconds. Replace <image-id> with the ID of the original image. The status of the operation changes as the image import operation progresses: Output that shows that an image failed to import resembles the following example: After the operation completes, the status changes to active: 4.3.5. Managing image import failures You can manage failures of the image import operation by using the --allow-failure parameter: If you set the value of the --allow-failure parameter to true , the image status becomes active after the first store successfully imports the data. This is the default setting. You can view a list of stores that failed to import the image data by using the os_glance_failed_import image property. If you set the value of the --allow-failure parameter to false , the image status only becomes active after all specified stores successfully import the data. Failure of any store to import the image data results in an image status of failed . The image is not imported into any of the specified stores. 4.3.6. Copying an image to specific stores Use the following procedure to copy image data to one or more specific stores. Procedure Copy image data to one or more specific stores. Copy image data to a single store: Replace <image_id> with the ID of the image you want to copy. Replace <store_id> with the name of the store to which you want to copy the image data. Copy image data to specific stores: Replace <store-1> and <store-2> with the names of the stores to which you want to copy the image data. Confirm that the image data successfully replicated to the specified stores: For information about how to check the status of the image import operation, see Section 4.3.4, "Checking the progress of the image import operation" . 4.3.7. Copying an image to multiple stores You can use the Image service (glance) to copy image data to multiple Red Hat Ceph Storage stores at the edge by using the image import workflow. Note The image must be present at the central site before you copy it to any edge sites. Only the image owner or project administrator can copy existing images to newly added stores. You can copy existing image data either by setting --all-stores to true or by specifying specific stores to receive the image data. The default setting for the --all-stores option is false . If --all-stores is false , you must specify which stores receive the image data by using --stores <store-1>,<store-2> . 
If the image data is already present in any of the specified stores, the request fails. If you set all-stores to true , and the image data already exists in some of the stores, then those stores are excluded from the list. After you specify which stores receive the image data, the Image service copies data from the central site to a staging area. Then, the Image service imports the image data by using the image import workflow. Important Avoid closely timed copy-image operations for the same image because they can cause race conditions and unexpected results. Existing image data remains as it is, but copying data to new stores fails. 4.3.8. Copying an image to all stores Use the following procedure to copy image data to all available stores. Procedure Copy image data to all available stores: Replace <image-id> with the ID of the image you want to copy. Confirm that the image data successfully replicated to all available stores: For information about how to check the status of the image import operation, see Section 4.3.4, "Checking the progress of the image import operation" . 4.3.9. Deleting an image from a specific store Delete an existing image copy on a specific store by using the Red Hat OpenStack Services on OpenShift (RHOSO) Image service (glance). Procedure Delete an image from a specific store: Replace <store-id> with the name of the store on which the image copy should be deleted. Replace <image-id> with the ID of the image you want to delete. Warning The openstack image delete --store <store-id> command permanently deletes the image across all the sites. All image copies are deleted, as well as the image instance and metadata. 4.3.10. Listing image locations and location properties Although an image can be present on multiple sites, there is only a single Universal Unique Identifier (UUID) for a given image. The image metadata contains the locations of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site and the two edge sites. Procedure Show the sites on which a copy of the image exists: In the example, the image is present on the central site, the default_backend , and on the two edge sites dcn1 and dcn2 . Alternatively, you can run the openstack image list command with the --include-stores option to see the sites where the images exist: List the image location properties to show the details of each location: The image properties show the different Ceph RBD URIs for the location of each image. In the example, the central image location URI is: The URI is composed of the following data: 79b70c32-df46-4741-93c0-8118ae2ae284 corresponds to the central Ceph FSID. Each Ceph cluster has a unique FSID. The default value for all sites is images , which corresponds to the Ceph pool on which the images are stored. 2bd882e7-1da0-4078-97fe-f1bb81f61b00 corresponds to the image UUID. The UUID is the same for a given image regardless of its location. The metadata shows the glance store to which this location maps. In this example, it maps to the default_backend , which is the central hub site. 4.3.11. Adding an Image service API Administrators can add a new Image service API ( glanceAPI ) to a Red Hat OpenStack Services on OpenShift (RHOSO) deployment to support multiple workloads or to maintain the lifecycle of an existing glanceAPI and its back-end services. 
For example, if your deployment has a back end with a split layout, such as Red Hat Ceph Storage, and a back end with a single layout, such as NFS, you cannot make changes to the single or split layout because they impact configuration elements like PersistentVolumeClaims (PVCs). Instead, you can add a new glanceAPI to switch between the back ends. Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml , and add the parameters to the glance template to configure a new glanceAPI . In the following example, there is an existing default API that uses the Object Storage service (swift) as a back end, and you update the OpenStackControlPlane to deploy a new default1 API: Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 4.3.12. Decommissioning an Image service API To decommission an existing glanceAPI , administrators must do the following: Delete the glanceAPI CR and its associated objects, for example, pods and StatefulSets . Update the keystoneEndpoint to point to an active glanceAPI . You cannot delete a glanceAPI if it is the only glanceAPI in the OpenStackControlPlane , and you cannot point the keystoneEndpoint parameter in your OpenStackControlPlane CR file to a non-existent glanceAPI . When you remove a glanceAPI , PersistentVolumeClaims (PVCs) that are associated to the API are preserved so that you can re-add the API with its settings if required. Procedure Verify that more than one glanceAPI is deployed in the OpenStackControlPlane : Identify the current glanceAPI that is registered in the Keystone catalog: Verify that the new glanceAPI has a Keystone endpoint: If the glanceAPI that you are removing is the API registered in the Keystone catalog, open your OpenStackControlPlane CR file, openstack_control_plane.yaml , to decommission the API and update the keystoneEndpoint parameter. In the following example, you remove the glanceAPI that is named default and update the keystoneEndpoint parameter to default1 : Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 4.4. Image service command options and properties You can use optional arguments, properties, and property keys with the openstack image create , glance image-create-via-import , and openstack image set commands. 4.4.1. Image service command options You can use the following optional arguments with the openstack image create , glance image-create-via-import , and openstack image set commands. Table 4.3. Command options Specific to Option Description All --architecture <ARCHITECTURE> Operating system architecture as specified in https://docs.openstack.org/glance/latest/user/common-image-properties.html#architecture All --protected [True_False] If true, image will not be deletable. All --name <NAME> Descriptive name for the image All --instance-uuid <INSTANCE_UUID> Metadata that can be used to record which instance this image is associated with. (Informational only, does not create an instance snapshot.) 
All --min-disk <MIN_DISK> Amount of disk space (in GB) required to boot image. All --visibility <VISIBILITY> Scope of image accessibility. Valid values: public, private, community, shared All --kernel-id <KERNEL_ID> ID of image stored in the Image service (glance) that should be used as the kernel when booting an AMI-style image. All --os-version <OS_VERSION> Operating system version as specified by the distributor All --disk-format <DISK_FORMAT> Format of the disk. Valid values: none, ami, ari, aki, vhd, vhdx, vmdk, raw, qcow2, vdi, iso, ploop All --os-distro <OS_DISTRO> Common name of operating system distribution as specified in https://docs.openstack.org/glance/latest/user/common-image-properties.html#os-distro All --owner <OWNER> Owner of the image All --ramdisk-id <RAMDISK_ID> ID of image stored in the Image service that should be used as the ramdisk when booting an AMI-style image. All --min-ram <MIN_RAM> Amount of RAM (in MB) required to boot image. All --container-format <CONTAINER_FORMAT> Format of the container. Valid values: none, ami, ari, aki, bare, ovf, ova, docker All --property <key=value> Arbitrary property to associate with image. May be used multiple times. openstack image create --tags <TAGS> [<TAGS> ...] List of strings related to the image openstack image create --id <ID> An identifier for the image openstack image set --remove-property Key name of arbitrary property to remove from the image. 4.4.2. Image properties and property keys You can use the following keys with the property option for with the openstack image create , glance image-create-via-import , and openstack image set commands. Table 4.4. Property keys Specific to Key Description Supported values All architecture The CPU architecture that must be supported by the hypervisor. For example, x86_64 , arm , or ppc64 . Run uname -m to get the architecture of a machine. aarch - ARM 64-bit alpha - DEC 64-bit RISC armv7l - ARM Cortex-A7 MPCore cris - Ethernet, Token Ring, AXis-Code Reduced Instruction Set i686 - Intel sixth-generation x86 (P6 micro architecture) ia64 - Itanium lm32 - Lattice Micro32 m68k - Motorola 68000 microblaze - Xilinx 32-bit FPGA (Big Endian) microblazeel - Xilinx 32-bit FPGA (Little Endian) mips - MIPS 32-bit RISC (Big Endian) mipsel - MIPS 32-bit RISC (Little Endian) mips64 - MIPS 64-bit RISC (Big Endian) mips64el - MIPS 64-bit RISC (Little Endian) openrisc - OpenCores RISC parisc - HP Precision Architecture RISC parisc64 - HP Precision Architecture 64-bit RISC ppc - PowerPC 32-bit ppc64 - PowerPC 64-bit ppcemb - PowerPC (Embedded 32-bit) s390 - IBM Enterprise Systems Architecture/390 s390x - S/390 64-bit sh4 - SuperH SH-4 (Little Endian) sh4eb - SuperH SH-4 (Big Endian) sparc - Scalable Processor Architecture, 32-bit sparc64 - Scalable Processor Architecture, 64-bit unicore32 - Microprocessor Research and Development Center RISC Unicore32 x86_64 - 64-bit extension of IA-32 xtensa - Tensilica Xtensa configurable microprocessor core xtensaeb - Tensilica Xtensa configurable microprocessor core (Big Endian) All hypervisor_type The hypervisor type. kvm , vmware All instance_uuid For snapshot images, this is the UUID of the server used to create this image. Valid server UUID All kernel_id The ID of an image stored in the Image Service that should be used as the kernel when booting an AMI-style image. Valid image ID All os_distro The common name of the operating system distribution in lowercase. arch - Arch Linux. Do not use archlinux or org.archlinux . 
centos - Community Enterprise Operating System. Do not use org.centos or CentOS . debian - Debian. Do not use Debian or org.debian . fedora - Fedora. Do not use Fedora , org.fedora , or org.fedoraproject . freebsd - FreeBSD. Do not use org.freebsd , freeBSD , or FreeBSD . gentoo - Gentoo Linux. Do not use Gentoo or org.gentoo . mandrake - Mandrakelinux (MandrakeSoft) distribution. Do not use mandrakelinux or MandrakeLinux . mandriva - Mandriva Linux. Do not use mandrivalinux . mes - Mandriva Enterprise Server. Do not use mandrivaent or mandrivaES . msdos - Microsoft Disc Operating System. Do not use ms-dos . netbsd - NetBSD. Do not use NetBSD or org.netbsd . netware - Novell NetWare. Do not use novell or NetWare . openbsd - OpenBSD. Do not use OpenBSD or org.openbsd . opensolaris - OpenSolaris. Do not use OpenSolaris or org.opensolaris . opensuse - openSUSE. Do not use suse , SuSE , or org.opensuse . rhel - Red Hat Enterprise Linux. Do not use redhat , RedHat , or com.redhat . sled - SUSE Linux Enterprise Desktop. Do not use com.suse . ubuntu - Ubuntu. Do not use Ubuntu , com.ubuntu , org.ubuntu , or canonical . windows - Microsoft Windows. Do not use com.microsoft.server . All os_version The operating system version as specified by the distributor. Version number (for example, "11.10") All ramdisk_id The ID of image stored in the Image Service that should be used as the ramdisk when booting an AMI-style image. Valid image ID All vm_mode The virtual machine mode. This represents the host/guest ABI (application binary interface) used for the virtual machine. hvm -Fully virtualized. This is the mode used by QEMU and KVM. libvirt API driver hw_cdrom_bus Specifies the type of disk controller to attach CD-ROM devices to. scsi , virtio , ide , or usb . If you specify iscsi , you must set the hw_scsi_model parameter to virtio-scsi . libvirt API driver hw_disk_bus Specifies the type of disk controller to attach disk devices to. scsi , virtio , ide , or usb . Note that if using iscsi , the hw_scsi_model needs to be set to virtio-scsi . libvirt API driver hw_firmware_type Specifies the type of firmware to use to boot the instance. Set to one of the following valid values: bios uefi libvirt API driver hw_machine_type Enables booting an ARM system using the specified machine type. If an ARM image is used and its machine type is not explicitly specified, then Compute uses the virt machine type as the default for ARMv7 and AArch64. Valid types can be viewed by using the virsh capabilities command. The machine types are displayed in the machine tag. libvirt API driver hw_numa_nodes Number of NUMA nodes to expose to the instance (does not override flavor definition). Integer. libvirt API driver hw_numa_cpus.0 Mapping of vCPUs N-M to NUMA node 0 (does not override flavor definition). Comma-separated list of integers. libvirt API driver hw_numa_cpus.1 Mapping of vCPUs N-M to NUMA node 1 (does not override flavor definition). Comma-separated list of integers. libvirt API driver hw_numa_mem.0 Mapping N MB of RAM to NUMA node 0 (does not override flavor definition). Integer libvirt API driver hw_numa_mem.1 Mapping N MB of RAM to NUMA node 1 (does not override flavor definition). Integer libvirt API driver hw_pci_numa_affinity_policy Specifies the NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces. 
Set to one of the following valid values: required : The Compute service creates an instance that requests a PCI device only when at least one of the NUMA nodes of the instance has affinity with the PCI device. This option provides the best performance. preferred : The Compute service attempts a best effort selection of PCI devices based on NUMA affinity. If affinity is not possible, then the Compute service schedules the instance on a NUMA node that has no affinity with the PCI device. legacy : (Default) The Compute service creates instances that request a PCI device in one of the following cases: The PCI device has affinity with at least one of the NUMA nodes. The PCI devices do not provide information about their NUMA affinities. libvirt API driver hw_qemu_guest_agent Guest agent support. If set to yes , and if qemu-ga is also installed, file systems can be quiesced (frozen) and snapshots created automatically. yes / no libvirt API driver hw_rng_model Adds a random number generator (RNG) device to instances launched with this image. The instance flavor enables the RNG device by default. To disable the RNG device, the administrator must set hw_rng:allowed to False on the flavor. The default entropy source is /dev/random . To specify a hardware RNG device, set rng_dev_path to /dev/hwrng in your Compute environment file. virtio , or other supported device. libvirt API driver hw_scsi_model Enables the use of VirtIO SCSI (virtio-scsi) to provide block device access for compute instances; by default, instances use VirtIO Block (virtio-blk). VirtIO SCSI is a para-virtualized SCSI controller device that provides improved scalability and performance, and supports advanced SCSI hardware. virtio-scsi libvirt API driver hw_tpm_model Set to the model of TPM device to use. Ignored if hw:tpm_version is not configured. tpm-tis : (Default) TPM Interface Specification. tpm-crb : Command-Response Buffer. Compatible only with TPM version 2.0. libvirt API driver hw_tpm_version Set to the version of TPM to use. TPM version 2.0 is the only supported version. 2.0 libvirt API driver hw_video_model The video device driver for the display device to use in virtual machine instances. Set to one of the following values to specify the supported driver to use: virtio - (Default) Recommended Driver for the virtual machine display device, supported by most architectures. The VirtIO GPU driver is included in RHEL-7 and later, and Linux kernel versions 4.4 and later. If an instance kernel has the VirtIO GPU driver, then the instance can use all the VirtIO GPU features. If an instance kernel does not have the VirtIO GPU driver, the VirtIO GPU device gracefully falls back to VGA compatibility mode, which provides a working display for the instance. qxl - Deprecated Driver for Spice or noVNC environments that is no longer maintained. cirrus - Legacy driver, supported only for backward compatibility. Do not use for new instances. vga - Use this driver for IBM Power environments. bochs - Use this driver for instances that boot with UEFI. In some cases, you can use this driver for instances that boot with BIOS, such as when the instance does not depend on direct VGA hardware access. gop - Not supported for QEMU/KVM environments. xen - Not supported for KVM environments. vmvga - Legacy driver, do not use. none - Use this value to disable emulated graphics or video in virtual GPU (vGPU) instances where the driver is configured separately. libvirt API driver hw_video_ram Maximum RAM for the video image. 
Used only if a hw_video:ram_max_mb value has been set in the flavor's extra_specs and that value is higher than the value set in hw_video_ram . Integer in MB (for example, 64 ) libvirt API driver hw_watchdog_action Enables a virtual hardware watchdog device that carries out the specified action if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled. disabled-The device is not attached. Allows the user to disable the watchdog for the image, even if it has been enabled using the image's flavor. The default value for this parameter is disabled. reset-Forcefully reset the guest. poweroff-Forcefully power off the guest. pause-Pause the guest. none-Only enable the watchdog; do nothing if the server hangs. libvirt API driver os_command_line The kernel command line to be used by the libvirt driver, instead of the default. For Linux Containers(LXC), the value is used as arguments for initialization. This key is valid only for Amazon kernel, ramdisk, or machine images (aki, ari, or ami). libvirt API driver os_secure_boot Use to create an instance that is protected with UEFI Secure Boot. Set to one of the following valid values: required : Enables Secure Boot for instances launched with this image. The instance is only launched if the Compute service locates a host that can support Secure Boot. If no host is found, the Compute service returns a "No valid host" error. disabled : Disables Secure Boot for instances launched with this image. Disabled by default. optional : Enables Secure Boot for instances launched with this image only when the Compute service determines that the host can support Secure Boot. libvirt API driver and VMware API driver hw_vif_model Specifies the model of virtual network interface device to use. The valid options depend on the configured hypervisor. KVM and QEMU: e1000, ne2k_pci, pcnet, rtl8139, and virtio. VMware: e1000, e1000e, VirtualE1000, VirtualE1000e, VirtualPCNet32, VirtualSriovEthernetCard, and VirtualVmxnet. Xen: e1000, netfront, ne2k_pci, pcnet, and rtl8139. VMware API driver vmware_adaptertype The virtual SCSI or IDE controller used by the hypervisor. lsiLogic , busLogic , or ide VMware API driver vmware_ostype A VMware GuestID which describes the operating system installed in the image. This value is passed to the hypervisor when creating a virtual machine. If not specified, the key defaults to otherGuest . For more information, see Images with VMware vSphere . VMware API driver vmware_image_version Currently unused. 1 XenAPI driver auto_disk_config If true, the root partition on the disk is automatically resized before the instance boots. This value is only taken into account by the Compute service when using a Xen-based hypervisor with the XenAPI driver. The Compute service will only attempt to resize if there is a single partition on the image, and only if the partition is in ext3 or ext4 format. true / false libvirt API driver and XenAPI driver os_type The operating system installed on the image. The XenAPI driver contains logic that takes different actions depending on the value of the os_type parameter of the image. For example, for os_type=windows images, it creates a FAT32-based swap partition instead of a Linux swap partition, and it limits the injected host name to less than 16 characters. linux or windows
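As an illustration of how these property keys are applied in practice, the following shell sketch tags an existing image with a few of the properties described above and then displays the stored metadata. It assumes the openstack CLI is configured for your cloud; <image-id> is a placeholder, and the property values shown are examples only, not recommendations for any particular workload.
# Apply common architecture, OS, and virtual hardware properties to an existing image.
openstack image set <image-id> \
  --property architecture=x86_64 \
  --property os_distro=rhel \
  --property os_version=9.0 \
  --property hw_disk_bus=scsi \
  --property hw_scsi_model=virtio-scsi
# Confirm that the properties were stored with the image.
openstack image show <image-id> | grep properties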
[ "openstack flavor list --os-cloud <cloud_name>", "`export OS_CLOUD=<cloud_name>`", "export DIB_LOCAL_IMAGE=rhel-<ver>-x86_64-kvm.qcow2", "export REG_USER='<username>' export REG_PASSWORD='<password>' export REG_AUTO_ATTACH=true export REG_METHOD=portal export https_proxy='<IP_address:port>' (if applicable) export http_proxy='<IP_address:port>' (if applicable)", "export REG_USER='<username>' export REG_PASSWORD='<password>' export REG_SAT_URL='<satellite-url>' export REG_ORG='<satellite-org>' export REG_ENV='<satellite-env>' export REG_METHOD=<method>", "export DIB_YUM_REPO_CONF=<file-path>", "export DIB_RELEASE=<ver> disk-image-create rhel baremetal -o rhel-image", "KERNEL_ID=USD(openstack image create --file rhel-image.vmlinuz --public --container-format aki --disk-format aki -f value -c id rhel-image.vmlinuz) RAMDISK_ID=USD(openstack image create --file rhel-image.initrd --public --container-format ari --disk-format ari -f value -c id rhel-image.initrd) openstack image create --file rhel-image.qcow2 --public --container-format bare --disk-format qcow2 --property kernel_id=USDKERNEL_ID --property ramdisk_id=USDRAMDISK_ID rhel-root-partition-bare-metal-image", "export DIB_LOCAL_IMAGE=rhel-<ver>-x86_64-kvm.qcow2", "export REG_USER='<username>' export REG_PASSWORD='<password>' export REG_AUTO_ATTACH=true export REG_METHOD=portal export https_proxy='<IP_address:port>' (if applicable) export http_proxy='<IP_address:port>' (if applicable)", "export REG_USER='<username>' export REG_PASSWORD='<password>' export REG_SAT_URL='<satellite-url>' export REG_ORG='<satellite-org>' export REG_ENV='<satellite-env>' export REG_METHOD=<method>", "export DIB_YUM_REPO_CONF=<file-path>", "openstack image create --file rhel-image.qcow2 --public --container-format bare --disk-format qcow2 rhel-whole-disk-bare-metal-image", "sudo subscription-manager repos --enable=advanced-virt-for-rhel-<ver>-x86_64-rpms", "sudo dnf module install -y virt", "sudo dnf install -y libguestfs-tools-c", "sudo systemctl disable --now iscsid.socket", "virt-install --virt-type kvm --name <rhel9-cloud-image> --ram <2048> --cdrom </var/lib/libvirt/images/rhel-9.0-x86_64-dvd.iso> --disk <rhel9.qcow2>,format=qcow2,size=<10> --network=bridge:virbr0 --graphics vnc,listen=127.0.0.1 --noautoconsole --os-variant=<rhel9.0>", "virt-viewer <rhel9-cloud-image>", "TYPE=Ethernet DEVICE=eth0 ONBOOT=yes BOOTPROTO=dhcp NM_CONTROLLED=no", "sudo subscription-manager register sudo subscription-manager attach --pool=<pool-id> sudo subscription-manager repos --enable rhel-9-for-x86_64-baseos-rpms --enable rhel-9-for-x86_64-appstream-rpms", "dnf -y update", "dnf install -y cloud-utils-growpart cloud-init", "- resolv-conf", "NOZEROCONF=yes", "GRUB_CMDLINE_LINUX_DEFAULT=\"console=tty0 console=ttyS0,115200n8\"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "Generating grub configuration file Found linux image: /boot/vmlinuz-3.10.0-229.9.2.el9.x86_64 Found initrd image: /boot/initramfs-3.10.0-229.9.2.el9.x86_64.img Found linux image: /boot/vmlinuz-3.10.0-121.el9.x86_64 Found initrd image: /boot/initramfs-3.10.0-121.el9.x86_64.img Found linux image: /boot/vmlinuz-0-rescue-b82a3044fb384a3f9aeacf883474428b Found initrd image: /boot/initramfs-0-rescue-b82a3044fb384a3f9aeacf883474428b.img done", "subscription-manager repos --disable=* subscription-manager unregister dnf clean all", "poweroff", "virt-sysprep -d <rhel9-cloud-image>", "virt-sparsify --compress <rhel9.qcow2> <rhel9-cloud.qcow2>", "virt-install --name=<windows-image> --disk size=<size> 
--cdrom=<file-path-to-windows-iso-file> --os-type=windows --network=bridge:virbr0 --graphics spice --ram=<ram>", "--disk path=<file-name>,size=<size>", "virt-viewer <windows-image>", "openstack image create --file <base_image_file> --container-format <container_format> --disk-format <disk_format> uefi_secure_boot_image", "openstack image set --property hw_machine_type=q35 uefi_secure_boot_image", "openstack image set --property hw_firmware_type=uefi --property os_secure_boot=required uefi_secure_boot_image", "openstack image create --name <name> --is-public true --disk-format <qcow2> --container-format <bare> --file </path/to/image> --property <os_version>=<11.10>", "glance image-create-via-import --container-format <container_format> --disk-format <disk_format> --name <name> --import-method web-download --uri <uri>", "openstack image show <image-id>", "glance image-create-via-import --container-format <container-format> --disk-format <disk-format> --name <name> --file </path/to/image>", "openstack image show <image-id>", "glance image-create-via-import --disk-format <qcow2> --container-format <bare> --name <name> --visibility public --import-method web-download --uri __<http://server/image.qcow2>__", "glance image-create-via-import --disk-format <qcow2> --container-format <bare> --name <name> --visibility public --file <local_file.qcow2>", "qemu-img info <image_id>.qcow2", "qemu-img convert -p -f qcow2 -O raw <image_id>.qcow2 <image_id>.raw", "glance image-create-via-import --disk-format qcow2 --container-format bare --name <name> --visibility public --import-method web-download --uri <http://server/image.qcow2>", "openstack image set <image-id> --property <architecture>=<x86_64>", "openstack image set <image_id> --hidden 'true'", "openstack image set <image_id> --hidden 'false'", "openstack image list --hidden 'true'", "openstack image delete <image-id> [<image-id> ...]", "glance image-create-via-import --container-format bare --name <image-name> --import-method web-download --uri <uri> --store <store>", "openstack image show <image-id> | grep stores", "glance image-create-via-import --container-format bare --name <image-name> --import-method web-download --uri <uri> --stores <store-1>,<store-2>,<store-3>", "glance image-create-via-import --container-format bare --name <image-name> --import-method web-download --uri <uri> --stores <store-1>,<store-2>,<store-3>", "openstack image show <image-id> | grep stores", "openstack image show <image-id>", "| os_glance_failed_import | | os_glance_importing_to_stores | central,dcn0,dcn1 | status | importing", "watch openstack image show <image-id>", "| os_glance_failed_import | | os_glance_importing_to_stores | dcn0,dcn1 | status | importing", "| os_glance_failed_import | dcn0 | os_glance_importing_to_stores | dcn1 | status | importing", "| os_glance_failed_import | dcn0 | os_glance_importing_to_stores | | status | active", "openstack image import <image_id> --store <store_id> --import-method copy-image", "openstack image import <image-id> --stores <store-1>,<store-2> --import-method copy-image", "openstack image list --include-stores", "openstack image import <image-id> --all-stores true --import-method copy-image", "openstack image list --include-stores", "openstack image delete --store <store-id> <image-id>", "openstack image show ID | grep \"stores\" | stores | default_backend,dcn1,dcn2", "openstack image list --include-stores | ID | Name | Stores | 2bd882e7-1da0-4078-97fe-f1bb81f61b00 | cirros | default_backend,dcn1,dcn2", "openstack image show 
ID -c properties | properties | (--- cut ---) locations='[{'url': 'rbd://79b70c32-df46-4741-93c0-8118ae2ae284/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://63df2767-8ddb-4e06-8186-8c155334f487/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'dcn1'}}, {'url': 'rbd://1b324138-2ef9-4ef9-bd9e-aa7e6d6ead78/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'dcn2'}}]', (--- cut --)", "rbd://79b70c32-df46-4741-93c0-8118ae2ae284/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'default_backend'}}", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: glance: template: databaseInstance: openstack keystoneEndpoint: default glanceAPIs: default: customServiceConfig: | [DEFAULT] enabled_backends = default_backend:swift [glance_store] default_backend = default_backend [default_backend] swift_store_create_container_on_put = True swift_store_auth_version = 3 swift_store_auth_address = {{ .KeystoneInternalURL }} swift_store_endpoint_type = internalURL swift_store_user = service:glance swift_store_key = {{ .ServicePassword }} preserveJobs: false replicas: 3 default1: type: single replicas: 1 storage: storageRequest: 10G", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack", "oc -n openstack get oscp USD(oc get oscp -o custom-columns=NAME:.metadata.name --no-headers) -o jsonpath='{.spec.glance.template.glanceAPIs}' | jq", "oc -n openstack get oscp USD(oc get oscp -o custom-columns=NAME:.metadata.name --no-headers) -o jsonpath='{.spec.glance.template.keystoneEndpoint}'", "oc exec -it openstackclient bash -- openstack endpoint list | grep image", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: glance: template: keystoneEndpoint: default1 glanceAPIs: default1: type: single replicas: 1 storage: storageRequest: 10G", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/performing_storage_operations/assembly_glance-performing-operations-with-the-image-service_using-backup-service
Chapter 71. Kubernetes Secrets
Chapter 71. Kubernetes Secrets Since Camel 2.17 Only producer is supported The Kubernetes Secrets component is one of the Kubernetes Components which provides a producer to execute Kubernetes Secrets operations. 71.1. Dependencies When using kubernetes-secrets with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 71.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 71.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 71.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 71.3. Component Options The Kubernetes Secrets component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 71.4. Endpoint Options The Kubernetes Secrets endpoint is configured using URI syntax: with the following path and query parameters: 71.4.1. 
Path Parameters (1 parameters) Name Description Default Type masterUrl (producer) Required Kubernetes Master url. String 71.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 71.5. Message Headers The Kubernetes Secrets component supports 5 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesSecretsLabels (producer) Constant: KUBERNETES_SECRETS_LABELS The secret labels. Map CamelKubernetesSecretName (producer) Constant: KUBERNETES_SECRET_NAME The secret name. String CamelKubernetesSecret (producer) Constant: KUBERNETES_SECRET A secret object. Secret 71.6. Supported producer operation listSecrets listSecretsByLabels getSecret createSecret updateSecret deleteSecret 71.7. Kubernetes Secrets Producer Examples listSecrets: this operation list the secrets on a kubernetes cluster. from("direct:list"). toF("kubernetes-secrets:///?kubernetesClient=#kubernetesClient&operation=listSecrets"). to("mock:result"); This operation returns a List of secrets from your cluster. listSecretsByLabels: this operation list the Secrets by labels on a kubernetes cluster. 
from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SECRETS_LABELS, labels); } }); toF("kubernetes-secrets:///?kubernetesClient=#kubernetesClient&operation=listSecretsByLabels"). to("mock:result"); This operation returns a List of Secrets from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 71.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. 
Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
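The worked examples for this component (Section 71.7) cover only the list operations. As a complement, the following is a minimal sketch of a createSecret producer route, built from the operation names in Section 71.6 and the headers in Section 71.5. The route URI, namespace, secret name, and secret data are illustrative assumptions only, and the fragment assumes the usual Camel RouteBuilder context with the Fabric8 Secret and SecretBuilder model classes on the classpath.

from("direct:createSecret")
    .process(new Processor() {
        @Override
        public void process(Exchange exchange) throws Exception {
            // Example values only: the secret name, namespace, and data are assumptions for illustration.
            Secret secret = new SecretBuilder()
                .withNewMetadata().withName("my-secret").endMetadata()
                .addToStringData("password", "changeit")
                .build();
            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default");
            exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SECRET, secret);
        }
    })
    .toF("kubernetes-secrets:///?kubernetesClient=#kubernetesClient&operation=createSecret")
    .to("mock:result");

For the other single-secret operations such as getSecret or deleteSecret, the CamelKubernetesSecretName and CamelKubernetesNamespaceName headers from Section 71.5 would be the natural headers to set instead of a full Secret object.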
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-secrets:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-secrets:///?kubernetesClient=#kubernetesClient&operation=listSecrets\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SECRETS_LABELS, labels); } }); toF(\"kubernetes-secrets:///?kubernetesClient=#kubernetesClient&operation=listSecretsByLabels\"). to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-secrets-component-starter
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ Streams is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the AMQ Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ Streams product. The Software Downloads page opens. Click the Download link for your component. Revised on 2022-11-22 11:34:30 UTC
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/getting_started_with_amq_streams_on_openshift/using_your_subscription
Chapter 6. Setting up metrics and dashboards for AMQ Streams
Chapter 6. Setting up metrics and dashboards for AMQ Streams You can monitor your AMQ Streams deployment by viewing key metrics on dashboards and setting up alerts that trigger under certain conditions. Metrics are available for Kafka, ZooKeeper, and the other components of AMQ Streams. To provide metrics information, AMQ Streams uses Prometheus rules and Grafana dashboards. When configured with a set of rules for each component of AMQ Streams, Prometheus consumes key metrics from the pods that are running in your cluster. Grafana then visualizes those metrics on dashboards. AMQ Streams includes example Grafana dashboards that you can customize to suit your deployment. On OpenShift Container Platform 4.x, AMQ Streams employs monitoring for user-defined projects (an OpenShift feature) to simplify the Prometheus setup process. On OpenShift Container Platform 3.11, you need to deploy the Prometheus and Alertmanager components to your cluster separately. Regardless of your OpenShift Container Platform version, you have to start by deploying the Prometheus metrics configuration for AMQ Streams. Next, follow the instructions for your OpenShift Container Platform version: Section 6.3, "Viewing Kafka metrics and dashboards in OpenShift 4" Section 6.4, "Viewing Kafka metrics and dashboards in OpenShift 3.11" With Prometheus and Grafana set up, you can use the example Grafana dashboards and alerting rules to monitor your Kafka cluster. Additional monitoring options Kafka Exporter is an optional component that provides additional monitoring related to consumer lag. If you want to use Kafka Exporter with AMQ Streams, see Configure the Kafka resource to deploy Kafka Exporter with your Kafka cluster. You can also configure your deployment to track messages end-to-end by setting up distributed tracing. For more information, see Distributed tracing in the Using AMQ Streams on OpenShift guide. Additional resources Prometheus documentation Grafana documentation Apache Kafka Monitoring in the Kafka documentation describes JMX metrics exposed by Apache Kafka ZooKeeper JMX in the ZooKeeper documentation describes JMX metrics exposed by Apache ZooKeeper 6.1. Example metrics files You can find example Grafana dashboards and other metrics configuration files in the examples/metrics directory. As indicated in the following list, some files are only used with OpenShift Container Platform 3.11, and not with OpenShift Container Platform 4.x. Example metrics files provided with AMQ Streams 1 Example Grafana dashboards. 2 Installation file for the Grafana image. 3 OPENSHIFT 3.11 ONLY: Additional Prometheus configuration to scrape metrics for CPU, memory, and disk volume usage, which comes directly from the OpenShift cAdvisor agent and kubelet on the nodes. 4 Hook definitions for sending notifications through Alertmanager. 5 OPENSHIFT 3.11 ONLY: Resources for deploying and configuring Alertmanager. 6 Alerting rules examples for use with Prometheus Alertmanager. 7 OPENSHIFT 3.11 ONLY: Installation resource file for the Prometheus image. 8 PodMonitor definitions translated by the Prometheus Operator into jobs for the Prometheus server to be able to scrape metrics data directly from pods. 9 Kafka Bridge resource with metrics enabled. 10 Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka Connect. 11 Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Cruise Control. 
12 Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka and ZooKeeper. 13 Metrics configuration that defines Prometheus JMX Exporter relabeling rules for Kafka Mirror Maker 2.0. 6.1.1. Example Grafana dashboards Example Grafana dashboards are provided for monitoring the following resources: AMQ Streams Kafka Shows metrics for: Brokers online count Active controllers in the cluster count Unclean leader election rate Replicas that are online Under-replicated partitions count Partitions which are at their minimum in sync replica count Partitions which are under their minimum in sync replica count Partitions that do not have an active leader and are hence not writable or readable Kafka broker pods memory usage Aggregated Kafka broker pods CPU usage Kafka broker pods disk usage JVM memory used JVM garbage collection time JVM garbage collection count Total incoming byte rate Total outgoing byte rate Incoming messages rate Total produce request rate Byte rate Produce request rate Fetch request rate Network processor average time idle percentage Request handler average time idle percentage Log size AMQ Streams ZooKeeper Shows metrics for: Quorum Size of Zookeeper ensemble Number of alive connections Queued requests in the server count Watchers count ZooKeeper pods memory usage Aggregated ZooKeeper pods CPU usage ZooKeeper pods disk usage JVM memory used JVM garbage collection time JVM garbage collection count Amount of time it takes for the server to respond to a client request (maximum, minimum and average) AMQ Streams Kafka Connect Shows metrics for: Total incoming byte rate Total outgoing byte rate Disk usage JVM memory used JVM garbage collection time AMQ Streams Kafka MirrorMaker 2 Shows metrics for: Number of connectors Number of tasks Total incoming byte rate Total outgoing byte rate Disk usage JVM memory used JVM garbage collection time AMQ Streams Operators Shows metrics for: Custom resources Successful custom resource reconciliations per hour Failed custom resource reconciliations per hour Reconciliations without locks per hour Reconciliations started hour Periodical reconciliations per hour Maximum reconciliation time Average reconciliation time JVM memory used JVM garbage collection time JVM garbage collection count Dashboards are also provided for the Kafka Bridge and Cruise Control components of AMQ Streams. All the dashboards provide JVM metrics, as well as metrics that are specific to each component. For example, the Operators dashboard provides information on the number of reconciliations or custom resources that are being processed. 6.1.2. Example Prometheus metrics configuration AMQ Streams uses the Prometheus JMX Exporter to expose JMX metrics using an HTTP endpoint, which is then scraped by Prometheus. Grafana dashboards are dependent on Prometheus JMX Exporter relabeling rules, which are defined for AMQ Streams components as custom resource configuration. A label is a name-value pair. Relabeling is the process of writing a label dynamically. For example, the value of a label might be derived from the name of a Kafka server and client ID. AMQ Streams provides example custom resource configuration YAML files with the relabeling rules already defined. When deploying Prometheus metrics configuration, you can deploy the example custom resources or copy the metrics configuration to your own custom resource definitions. Table 6.1. 
Example custom resources with metrics configuration Component Custom resource Example YAML file Kafka and ZooKeeper Kafka kafka-metrics.yaml Kafka Connect KafkaConnect and KafkaConnectS2I kafka-connect-metrics.yaml Kafka MirrorMaker 2.0 KafkaMirrorMaker2 kafka-mirror-maker-2-metrics.yaml Kafka Bridge KafkaBridge kafka-bridge-metrics.yaml Cruise Control Kafka kafka-cruise-control-metrics.yaml Additional resources Section 6.2, "Deploying Prometheus metrics configuration" For more information on the use of relabeling, see Configuration in the Prometheus documentation. 6.2. Deploying Prometheus metrics configuration AMQ Streams provides example custom resource configuration YAML files with relabeling rules. To apply metrics configuration of relabeling rules, do one of the following: Copy the example configuration to your own custom resource definition Deploy the custom resource with the metrics configuration 6.2.1. Copying Prometheus metrics configuration to a custom resource To use Grafana dashboards for monitoring, copy the example metrics configuration to a custom resource . In this procedure, the Kafka resource is updated, but the procedure is the same for all components that support monitoring. Procedure Perform the following steps for each Kafka resource in your deployment. Update the Kafka resource in an editor. oc edit kafka KAFKA-CONFIG-FILE Copy the example configuration in kafka-metrics.yaml to your own Kafka resource definition. Save the file, and wait for the updated resource to be reconciled. 6.2.2. Deploying a Kafka cluster with Prometheus metrics configuration To use Grafana dashboards for monitoring, you can deploy an example Kafka cluster with metrics configuration . In this procedure, The kafka-metrics.yaml file is used for the Kafka resource. Procedure Deploy the Kafka cluster with the example metrics configuration . oc apply -f kafka-metrics.yaml 6.3. Viewing Kafka metrics and dashboards in OpenShift 4 When AMQ Streams is deployed to OpenShift Container Platform 4.x, metrics are provided through monitoring for user-defined projects . This OpenShift feature gives developers access to a separate Prometheus instance for monitoring their own projects (for example, a Kafka project). If monitoring for user-defined projects is enabled, the openshift-user-workload-monitoring project contains the following components: A Prometheus Operator A Prometheus instance (automatically deployed by the Prometheus Operator) A Thanos Ruler instance AMQ Streams uses these components to consume metrics. A cluster administrator must enable monitoring for user-defined projects and then grant developers and other users permission to monitor applications within their own projects. Grafana deployment You can deploy a Grafana instance to the project containing your Kafka cluster. The example Grafana dashboards can then be used to visualize Prometheus metrics for AMQ Streams in the Grafana user interface. Important The openshift-monitoring project provides monitoring for core platform components. Do not use the Prometheus and Grafana components in this project to configure monitoring for AMQ Streams on OpenShift Container Platform 4.x. Grafana version 6.3 is the minimum supported version. Prerequisites You have deployed the Prometheus metrics configuration using the example YAML files. Monitoring for user-defined projects is enabled. A cluster administrator must have created the cluster-monitoring-config ConfigMap in your OpenShift Container Platform cluster. 
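As a rough sketch of that last prerequisite, a cluster administrator might create the cluster-monitoring-config ConfigMap along the following lines. The enableUserWorkload key shown here reflects OpenShift Container Platform 4.6 and the mechanism differs in 4.5, so treat this as an assumption and follow the OpenShift monitoring documentation for your version.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring  # example values based on the OpenShift 4.6 monitoring documentation
data:
  config.yaml: |
    enableUserWorkload: true

Once monitoring for user-defined projects is enabled, the Prometheus Operator, Prometheus, and Thanos Ruler components described above should appear in the openshift-user-workload-monitoring project.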
For more information, see the following resources: Enabling monitoring for user-defined projects in OpenShift Container Platform 4.6. Enabling monitoring of your own services in OpenShift Container Platform 4.5. To monitor user-defined projects, you must have been assigned the monitoring-rules-edit or monitoring-edit role by a cluster administrator. See: Granting users permission to monitor user-defined projects in OpenShift Container Platform 4.6. Granting user permissions using web console in OpenShift Container Platform 4.5. Procedure outline To set up AMQ Streams monitoring in OpenShift Container Platform 4.x, follow these procedures in order: Prerequisite: Deploy the Prometheus metrics configuration Deploy the Prometheus resources Create a Service Account for Grafana Deploy Grafana with a Prometheus datasource Create a Route to the Grafana Service Import the example Grafana dashboards 6.3.1. Deploying the Prometheus resources Note Use this procedure when running AMQ Streams on OpenShift Container Platform 4.x. To enable Prometheus to consume Kafka metrics, you configure and deploy the PodMonitor resources in the example metrics files. The PodMonitors scrape data directly from pods for Apache Kafka, ZooKeeper, Operators, the Kafka Bridge, and Cruise Control. Then, you deploy the example alerting rules for Alertmanager. Prerequisites A running Kafka cluster. Check the example alerting rules provided with AMQ Streams. Procedure Check that monitoring for user-defined projects is enabled: oc get pods -n openshift-user-workload-monitoring If enabled, pods for the monitoring components are returned. For example: NAME READY STATUS RESTARTS AGE prometheus-operator-5cc59f9bc6-kgcq8 1/1 Running 0 25s prometheus-user-workload-0 5/5 Running 1 14s prometheus-user-workload-1 5/5 Running 1 14s thanos-ruler-user-workload-0 3/3 Running 0 14s thanos-ruler-user-workload-1 3/3 Running 0 14s If no pods are returned, monitoring for user-defined projects is disabled. See the Prerequisites in Section 6.3, "Viewing Kafka metrics and dashboards in OpenShift 4" . Multiple PodMonitor resources are defined in examples/metrics/prometheus-install/strimzi-pod-monitor.yaml . For each PodMonitor resource, edit the spec.namespaceSelector.matchNames property: apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: cluster-operator-metrics labels: app: strimzi spec: selector: matchLabels: strimzi.io/kind: cluster-operator namespaceSelector: matchNames: - PROJECT-NAME 1 podMetricsEndpoints: - path: /metrics port: http # ... 1 The project where the pods to scrape the metrics from are running, for example, Kafka . Deploy the strimzi-pod-monitor.yaml file to the project where your Kafka cluster is running: oc apply -f strimzi-pod-monitor.yaml -n MY-PROJECT Deploy the example Prometheus rules to the same project: oc apply -f prometheus-rules.yaml -n MY-PROJECT Additional resources The Monitoring guide for OpenShift Container Platform 4.6 Section 6.4.3.3, "Alerting rule examples" 6.3.2. Creating a Service Account for Grafana Note Use this procedure when running AMQ Streams on OpenShift Container Platform 4.x. Your Grafana instance for AMQ Streams needs to run with a Service Account that is assigned the cluster-monitoring-view role. Prerequisites Deploy the Prometheus resources Procedure Create a ServiceAccount for Grafana. Here the resource is named grafana-serviceaccount . 
apiVersion: v1 kind: ServiceAccount metadata: name: grafana-serviceaccount labels: app: strimzi Deploy the ServiceAccount to the project containing your Kafka cluster: oc apply -f GRAFANA-SERVICEACCOUNT -n MY-PROJECT Create a ClusterRoleBinding resource that assigns the cluster-monitoring-view role to the Grafana ServiceAccount . Here the resource is named grafana-cluster-monitoring-binding . apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: grafana-cluster-monitoring-binding labels: app: strimzi subjects: - kind: ServiceAccount name: grafana-serviceaccount namespace: MY-PROJECT 1 roleRef: kind: ClusterRole name: cluster-monitoring-view apiGroup: rbac.authorization.k8s.io 1 Name of your project. Deploy the ClusterRoleBinding to the project containing your Kafka cluster: oc apply -f GRAFANA-CLUSTER-MONITORING-BINDING -n MY-PROJECT Additional resources Section 6.3, "Viewing Kafka metrics and dashboards in OpenShift 4" 6.3.3. Deploying Grafana with a Prometheus datasource Note Use this procedure when running AMQ Streams on OpenShift Container Platform 4.x. This procedure describes how to deploy a Grafana application that is configured for the OpenShift Container Platform 4.x monitoring stack. OpenShift Container Platform 4.x includes a Thanos Querier instance in the openshift-monitoring project. Thanos Querier is used to aggregate platform metrics. To consume the required platform metrics, your Grafana instance requires a Prometheus data source that can connect to Thanos Querier. To configure this connection, you create a Config Map that authenticates, by using a token, to the oauth-proxy sidecar that runs alongside Thanos Querier. A datasource.yaml file is used as the source of the Config Map. Finally, you deploy the Grafana application with the Config Map mounted as a volume to the project containing your Kafka cluster. Prerequisites Deploy the Prometheus resources Create a Service Account for Grafana Procedure Get the access token of the Grafana ServiceAccount : oc serviceaccounts get-token grafana-serviceaccount -n MY-PROJECT Copy the access token to use in the step. Create a datasource.yaml file containing the Thanos Querier configuration for Grafana. Paste the access token into the httpHeaderValue1 property as indicated. apiVersion: 1 datasources: - name: Prometheus type: prometheus url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 access: proxy basicAuth: false withCredentials: false isDefault: true jsonData: timeInterval: 5s tlsSkipVerify: true httpHeaderName1: "Authorization" secureJsonData: httpHeaderValue1: "Bearer USD{ GRAFANA-ACCESS-TOKEN }" 1 editable: true 1 GRAFANA-ACCESS-TOKEN : The value of the access token for the Grafana ServiceAccount . Create a Config Map named grafana-config from the datasource.yaml file: oc create configmap grafana-config --from-file=datasource.yaml -n MY-PROJECT Create a Grafana application consisting of a Deployment and a Service . The grafana-config Config Map is mounted as a volume for the datasource configuration. 
apiVersion: apps/v1 kind: Deployment metadata: name: grafana labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: grafana template: metadata: labels: name: grafana spec: serviceAccountName: grafana-serviceaccount containers: - name: grafana image: grafana/grafana:6.3.0 ports: - name: grafana containerPort: 3000 protocol: TCP volumeMounts: - name: grafana-data mountPath: /var/lib/grafana - name: grafana-logs mountPath: /var/log/grafana - name: grafana-config mountPath: /etc/grafana/provisioning/datasources/datasource.yaml readOnly: true subPath: datasource.yaml readinessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 5 periodSeconds: 10 livenessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 15 periodSeconds: 20 volumes: - name: grafana-data emptyDir: {} - name: grafana-logs emptyDir: {} - name: grafana-config configMap: name: grafana-config --- apiVersion: v1 kind: Service metadata: name: grafana labels: app: strimzi spec: ports: - name: grafana port: 3000 targetPort: 3000 protocol: TCP selector: name: grafana type: ClusterIP Deploy the Grafana application to the project containing your Kafka cluster: oc apply -f GRAFANA-APPLICATION -n MY-PROJECT Additional resources Section 6.3, "Viewing Kafka metrics and dashboards in OpenShift 4" The Monitoring guide for OpenShift Container Platform 4.6 6.3.4. Creating a Route to the Grafana Service Note Use this procedure when running AMQ Streams on OpenShift Container Platform 4.x. You can access the Grafana user interface through a Route that exposes the Grafana Service. Prerequisites Deploy the Prometheus resources Create a Service Account for Grafana Deploy Grafana with a Prometheus datasource Procedure Create an edge route to the grafana service: oc create route edge MY-GRAFANA-ROUTE --service=grafana --namespace= KAFKA-NAMESPACE Additional resources Section 6.3, "Viewing Kafka metrics and dashboards in OpenShift 4" 6.3.5. Importing the example Grafana dashboards Note Use this procedure when running AMQ Streams on OpenShift Container Platform 4.x. Import the example Grafana dashboards using the Grafana user interface. Prerequisites Deploy the Prometheus resources Create a Service Account for Grafana Deploy Grafana with a Prometheus datasource Create a Route to the Grafana Service Procedure Get the details of the Route to the Grafana Service. For example: oc get routes NAME HOST/PORT PATH SERVICES MY-GRAFANA-ROUTE MY-GRAFANA-ROUTE-amq-streams.net grafana In a web browser, access the Grafana login screen using the URL for the Route host and port. Enter your user name and password, and then click Log In . The default Grafana user name and password are both admin . After logging in for the first time, you can change the password. In Configuration > Data Sources , check that the Prometheus data source was created. The data source was created in Section 6.3.3, "Deploying Grafana with a Prometheus datasource" . Click Dashboards > Manage , and then click Import . In examples/metrics/grafana-dashboards , copy the JSON of the dashboard to import. Paste the JSON into the text box, and then click Load . Repeat steps 1 -7 for the other example Grafana dashboards. The imported Grafana dashboards are available to view from the Dashboards home page. Additional resources Section 6.3.4, "Creating a Route to the Grafana Service" Section 6.3, "Viewing Kafka metrics and dashboards in OpenShift 4" 6.4. 
Viewing Kafka metrics and dashboards in OpenShift 3.11 When AMQ Streams is deployed to OpenShift Container Platform 3.11, you can use Prometheus to provide monitoring data for the example Grafana dashboards provided with AMQ Streams. You need to manually deploy the Prometheus components to your cluster. In order to run the example Grafana dashboards, you must: Add metrics configuration to your Kafka cluster resource Deploy Prometheus and Prometheus Alertmanager Deploy Grafana Note The resources referenced in this section are intended as a starting point for setting up monitoring, but they are provided as examples only. If you require further support on configuring and running Prometheus or Grafana in production, try reaching out to their respective communities. 6.4.1. Prometheus support The Prometheus server is not supported when AMQ Streams is deployed to OpenShift Container Platform 3.11. However, the Prometheus endpoint and the Prometheus JMX Exporter used to expose the metrics are supported. For your convenience, we supply detailed instructions and example metrics configuration files should you wish to use Prometheus for monitoring. 6.4.2. Setting up Prometheus Note Use these procedures when running AMQ Streams on OpenShift Container Platform 3.11. Prometheus provides an open source set of components for systems monitoring and alert notification. Here we describe how to use the provided Prometheus image and configuration files to run and manage a Prometheus server when AMQ Streams is deployed to OpenShift Container Platform 3.11. Prerequisites You have deployed compatible versions of Prometheus and Grafana to your OpenShift Container Platform 3.11 cluster. The service account used for running the Prometheus server pod has access to the OpenShift API server. This allows the service account to retrieve the list of pods in the cluster from which it gets metrics. For more information, see Discovering services . 6.4.2.1. Prometheus configuration AMQ Streams provides example configuration files for the Prometheus server . A Prometheus image is provided for deployment: prometheus.yaml Additional Prometheus-related configuration is also provided in the following files: prometheus-additional.yaml prometheus-rules.yaml strimzi-pod-monitor.yaml For Prometheus to obtain monitoring data, you must have deployed a compatible version of Prometheus to your OpenShift Container Platform 3.11 cluster. Then, use the configuration files to Deploy Prometheus . 6.4.2.2. Prometheus resources When you apply the Prometheus configuration, the following resources are created in your OpenShift cluster and managed by the Prometheus Operator: A ClusterRole that grants permissions to Prometheus to read the health endpoints exposed by the Kafka and ZooKeeper pods, cAdvisor and the kubelet for container metrics. A ServiceAccount for the Prometheus pods to run under. A ClusterRoleBinding which binds the ClusterRole to the ServiceAccount . A Deployment to manage the Prometheus Operator pod. A PodMonitor to manage the configuration of the Prometheus pod. A Prometheus to manage the configuration of the Prometheus pod. A PrometheusRule to manage alerting rules for the Prometheus pod. A Secret to manage additional Prometheus settings. A Service to allow applications running in the cluster to connect to Prometheus (for example, Grafana using Prometheus as datasource). 6.4.2.3. 
Deploying Prometheus To obtain monitoring data in your Kafka cluster, you can use your own Prometheus deployment or deploy Prometheus by applying the example installation resource file for the Prometheus docker image and the YAML files for Prometheus-related resources . The deployment process creates a ClusterRoleBinding and discovers an Alertmanager instance in the namespace specified for the deployment. Prerequisites Check the example alerting rules provided Procedure Modify the Prometheus installation file ( prometheus.yaml ) according to the namespace Prometheus is going to be installed into: On Linux, use: sed -i 's/namespace: .*/namespace: my-namespace /' prometheus.yaml On macOS, use: sed -i '' 's/namespace: .*/namespace: my-namespace /' prometheus.yaml Edit the PodMonitor resource in strimzi-pod-monitor.yaml to define Prometheus jobs that will scrape the metrics data from pods. Update the namespaceSelector.matchNames property with the namespace where the pods to scrape the metrics from are running. PodMonitor is used to scrape data directly from pods for Apache Kafka, ZooKeeper, Operators, the Kafka Bridge and Cruise Control. Edit the prometheus.yaml installation file to include additional configuration for scraping metrics directly from nodes. The Grafana dashboards provided show metrics for CPU, memory and disk volume usage, which come directly from the OpenShift cAdvisor agent and kubelet on the nodes. Create a Secret resource from the configuration file ( prometheus-additional.yaml in the examples/metrics/prometheus-additional-properties directory): oc apply -f prometheus-additional.yaml Edit the additionalScrapeConfigs property in the prometheus.yaml file to include the name of the Secret and the prometheus-additional.yaml file. Deploy the Prometheus resources: oc apply -f strimzi-pod-monitor.yaml oc apply -f prometheus-rules.yaml oc apply -f prometheus.yaml 6.4.3. Setting up Prometheus Alertmanager Prometheus Alertmanager is a plugin for handling alerts and routing them to a notification service. Alertmanager supports an essential aspect of monitoring, which is to be notified of conditions that indicate potential issues based on alerting rules. 6.4.3.1. Alertmanager configuration AMQ Streams provides example configuration files for Prometheus Alertmanager . A configuration file defines the resources for deploying Alertmanager: alert-manager.yaml An additional configuration file provides the hook definitions for sending notifications from your Kafka cluster. alert-manager-config.yaml For Alertmanager to handle Prometheus alerts, use the configuration files to: Deploy Alertmanager 6.4.3.2. Alerting rules Alerting rules provide notifications about specific conditions observed in the metrics. Rules are declared on the Prometheus server, but Prometheus Alertmanager is responsible for alert notifications. Prometheus alerting rules describe conditions using PromQL expressions that are continuously evaluated. When an alert expression becomes true, the condition is met and the Prometheus server sends alert data to the Alertmanager. Alertmanager then sends out a notification using the communication method configured for its deployment. Alertmanager can be configured to use email, chat messages or other notification methods. Additional resources For more information about setting up alerting rules, see Configuration in the Prometheus documentation. 6.4.3.3.
Alerting rule examples Example alerting rules for Kafka and ZooKeeper metrics are provided with AMQ Streams for use in a Prometheus deployment . General points about the alerting rule definitions: A for property is used with the rules to determine the period of time a condition must persist before an alert is triggered. A tick is a basic ZooKeeper time unit, which is measured in milliseconds and configured using the tickTime parameter of Kafka.spec.zookeeper.config . For example, if ZooKeeper tickTime=3000 , 3 ticks (3 x 3000) equals 9000 milliseconds. The availability of the ZookeeperRunningOutOfSpace metric and alert is dependent on the OpenShift configuration and storage implementation used. Storage implementations for certain platforms may not be able to supply the information on available space required for the metric to provide an alert. Kafka alerting rules UnderReplicatedPartitions Gives the number of partitions for which the current broker is the lead replica but which have fewer replicas than the min.insync.replicas configured for their topic. This metric provides insights about brokers that host the follower replicas. Those followers are not keeping up with the leader. Reasons for this could include being (or having been) offline, and over-throttled interbroker replication. An alert is raised when this value is greater than zero, providing information on the under-replicated partitions for each broker. AbnormalControllerState Indicates whether the current broker is the controller for the cluster. The metric can be 0 or 1. During the life of a cluster, only one broker should be the controller and the cluster always needs to have an active controller. Having two or more brokers saying that they are controllers indicates a problem. If the condition persists, an alert is raised when the sum of all the values for this metric on all brokers is not equal to 1, meaning that there is no active controller (the sum is 0) or more than one controller (the sum is greater than 1). UnderMinIsrPartitionCount Indicates that the minimum number of in-sync replicas (ISRs) for a lead Kafka broker, specified using min.insync.replicas , that must acknowledge a write operation has not been reached. The metric defines the number of partitions that the broker leads for which the in-sync replicas count is less than the minimum in-sync. An alert is raised when this value is greater than zero, providing information on the partition count for each broker that did not achieve the minimum number of acknowledgments. OfflineLogDirectoryCount Indicates the number of log directories which are offline (for example, due to a hardware failure) so that the broker cannot store incoming messages anymore. An alert is raised when this value is greater than zero, providing information on the number of offline log directories for each broker. KafkaRunningOutOfSpace Indicates the remaining amount of disk space that can be used for writing data. An alert is raised when this value is lower than 5GiB, providing information on the disk that is running out of space for each persistent volume claim. The threshold value may be changed in prometheus-rules.yaml . ZooKeeper alerting rules AvgRequestLatency Indicates the amount of time it takes for the server to respond to a client request. An alert is raised when this value is greater than 10 (ticks), providing the actual value of the average request latency for each server. OutstandingRequests Indicates the number of queued requests in the server. 
This value goes up when the server receives more requests than it can process. An alert is raised when this value is greater than 10, providing the actual number of outstanding requests for each server. ZookeeperRunningOutOfSpace Indicates the remaining amount of disk space that can be used for writing data to ZooKeeper. An alert is raised when this value is lower than 5GiB, providing information on the disk that is running out of space for each persistent volume claim. 6.4.3.4. Deploying Alertmanager To deploy Alertmanager, apply the example configuration files . The sample configuration provided with AMQ Streams configures the Alertmanager to send notifications to a Slack channel. The following resources are defined on deployment: An Alertmanager to manage the Alertmanager pod. A Secret to manage the configuration of the Alertmanager. A Service to provide an easy-to-reference hostname for other services to connect to Alertmanager (such as Prometheus). Prerequisites Metrics are configured for the Kafka cluster resource Prometheus is deployed Procedure Create a Secret resource from the Alertmanager configuration file ( alert-manager-config.yaml ): oc create secret generic alertmanager-alertmanager --from-file=alertmanager.yaml=alert-manager-config.yaml Update the alert-manager-config.yaml file to replace the: slack_api_url property with the actual value of the Slack API URL related to the application for the Slack workspace channel property with the actual Slack channel on which to send notifications Deploy Alertmanager: oc apply -f alert-manager.yaml 6.4.4. Setting up Grafana Grafana provides visualizations of Prometheus metrics. You can deploy and enable the example Grafana dashboards provided with AMQ Streams. 6.4.4.1. Deploying Grafana To provide visualizations of Prometheus metrics, you can use your own Grafana installation or deploy Grafana by applying the grafana.yaml file provided in the examples/metrics directory. Prerequisites Metrics are configured for the Kafka cluster resource Prometheus and Prometheus Alertmanager are deployed Procedure Deploy Grafana: oc apply -f grafana.yaml Enable the Grafana dashboards . 6.4.4.2. Enabling the example Grafana dashboards AMQ Streams provides example dashboard configuration files for Grafana . Example dashboards are provided in the examples/metrics directory as JSON files: strimzi-kafka.json strimzi-zookeeper.json strimzi-kafka-connect.json strimzi-kafka-mirror-maker-2.json strimzi-operators.json strimzi-kafka-bridge.json strimzi-cruise-control.json The example dashboards are a good starting point for monitoring key metrics, but they do not represent all available metrics. You can modify the example dashboards or add other metrics, depending on your infrastructure. After setting up Prometheus and Grafana, you can visualize the AMQ Streams data on the Grafana dashboards. Note No alert notification rules are defined. When accessing a dashboard, you can use the port-forward command to forward traffic from the Grafana pod to the host. Note The name of the Grafana pod is different for each user. Procedure Get the details of the Grafana service: oc get service grafana For example: NAME TYPE CLUSTER-IP PORT(S) grafana ClusterIP 172.30.123.40 3000/TCP Note the port number for port forwarding. Use port-forward to redirect the Grafana user interface to localhost:3000 : oc port-forward svc/grafana 3000:3000 Point a web browser to http://localhost:3000 . The Grafana Log In page appears. Enter your user name and password, and then click Log In.
The default Grafana user name and password are both admin . After logging in for the first time, you can change the password. Add Prometheus as a data source . Specify a name Add Prometheus as the type Specify a Prometheus server URL ( http://prometheus-operated:9090 ) Save and test the connection when you have added the details. From Dashboards Import , upload the example dashboards or paste the JSON directly. On the top header, click the dashboard drop-down menu, and then select the dashboard you want to view. When the Prometheus server has been collecting metrics for an AMQ Streams cluster for some time, the dashboards are populated. Figure 6.1. Dashboard selection options AMQ Streams Kafka Shows metrics for: Brokers online count Active controllers in the cluster count Unclean leader election rate Replicas that are online Under-replicated partitions count Partitions which are at their minimum in sync replica count Partitions which are under their minimum in sync replica count Partitions that do not have an active leader and are hence not writable or readable Kafka broker pods memory usage Aggregated Kafka broker pods CPU usage Kafka broker pods disk usage JVM memory used JVM garbage collection time JVM garbage collection count Total incoming byte rate Total outgoing byte rate Incoming messages rate Total produce request rate Byte rate Produce request rate Fetch request rate Network processor average time idle percentage Request handler average time idle percentage Log size Figure 6.2. AMQ Streams Kafka dashboard AMQ Streams ZooKeeper Shows metrics for: Quorum Size of Zookeeper ensemble Number of alive connections Queued requests in the server count Watchers count ZooKeeper pods memory usage Aggregated ZooKeeper pods CPU usage ZooKeeper pods disk usage JVM memory used JVM garbage collection time JVM garbage collection count Amount of time it takes for the server to respond to a client request (maximum, minimum and average) AMQ Streams Kafka Connect Shows metrics for: Total incoming byte rate Total outgoing byte rate Disk usage JVM memory used JVM garbage collection time AMQ Streams Kafka MirrorMaker 2 Shows metrics for: Number of connectors Number of tasks Total incoming byte rate Total outgoing byte rate Disk usage JVM memory used JVM garbage collection time AMQ Streams Operators Shows metrics for: Custom resources Successful custom resource reconciliations per hour Failed custom resource reconciliations per hour Reconciliations without locks per hour Reconciliations started per hour Periodical reconciliations per hour Maximum reconciliation time Average reconciliation time JVM memory used JVM garbage collection time JVM garbage collection count 6.5. Add Kafka Exporter Kafka Exporter is an open source project to enhance monitoring of Apache Kafka brokers and clients. Kafka Exporter is provided with AMQ Streams for deployment with a Kafka cluster to extract additional metrics data from Kafka brokers related to offsets, consumer groups, consumer lag, and topics. The metrics data is used, for example, to help identify slow consumers. Lag data is exposed as Prometheus metrics, which can then be presented in Grafana for analysis. If you are already using Prometheus and Grafana for monitoring of built-in Kafka metrics, you can configure Prometheus to also scrape the Kafka Exporter Prometheus endpoint. 6.5.1. Monitoring Consumer lag Consumer lag indicates the difference in the rate of production and consumption of messages.
Specifically, consumer lag for a given consumer group indicates the delay between the last message in the partition and the message being currently picked up by that consumer. The lag reflects the position of the consumer offset in relation to the end of the partition log. Consumer lag between the producer and consumer offset This difference is sometimes referred to as the delta between the producer offset and consumer offset: the read and write positions in the Kafka broker topic partitions. Suppose a topic streams 100 messages a second. A lag of 1000 messages between the producer offset (the topic partition head) and the last offset the consumer has read means a 10-second delay. The importance of monitoring consumer lag For applications that rely on the processing of (near) real-time data, it is critical to monitor consumer lag to check that it does not become too big. The greater the lag becomes, the further the process moves from the real-time processing objective. Consumer lag, for example, might be a result of consuming too much old data that has not been purged, or through unplanned shutdowns. Reducing consumer lag Typical actions to reduce lag include: Scaling-up consumer groups by adding new consumers Increasing the retention time for a message to remain in a topic Adding more disk capacity to increase the message buffer Actions to reduce consumer lag depend on the underlying infrastructure and the use cases AMQ Streams is supporting. For instance, a lagging consumer is less likely to benefit from the broker being able to service a fetch request from its disk cache. And in certain cases, it might be acceptable to automatically drop messages until a consumer has caught up. 6.5.2. Example Kafka Exporter alerting rules If you performed the steps to introduce metrics to your deployment, you will already have your Kafka cluster configured to use the alert notification rules that support Kafka Exporter. The rules for Kafka Exporter are defined in prometheus-rules.yaml , and are deployed with Prometheus. For more information, see Prometheus . The sample alert notification rules specific to Kafka Exporter are as follows: UnderReplicatedPartition An alert to warn that a topic is under-replicated and the broker is not replicating to enough partitions. The default configuration is for an alert if there are one or more under-replicated partitions for a topic. The alert might signify that a Kafka instance is down or the Kafka cluster is overloaded. A planned restart of the Kafka broker may be required to restart the replication process. TooLargeConsumerGroupLag An alert to warn that the lag on a consumer group is too large for a specific topic partition. The default configuration is 1000 records. A large lag might indicate that consumers are too slow and are falling behind the producers. NoMessageForTooLong An alert to warn that a topic has not received messages for a period of time. The default configuration for the time period is 10 minutes. The delay might be a result of a configuration issue preventing a producer from publishing messages to the topic. Adapt the default configuration of these rules according to your specific needs. Additional resources Chapter 6, Setting up metrics and dashboards for AMQ Streams Section 6.1, "Example metrics files" Section 6.4.3.2, "Alerting rules" 6.5.3. Exposing Kafka Exporter metrics Lag information is exposed by Kafka Exporter as Prometheus metrics for presentation in Grafana. Kafka Exporter exposes metrics data for brokers, topics and consumer groups. 
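For example, once the exporter metrics are being scraped, you can inspect consumer lag directly in the Prometheus expression browser or in a Grafana panel. The following query is only a sketch: the consumer group and topic names are placeholders, and the label names (consumergroup, topic, partition) are those used by the upstream Kafka Exporter project, so verify them against the metrics your deployment actually exposes.

sum(kafka_consumergroup_lag{consumergroup="my-group", topic="my-topic"})

Summing in this way aggregates the per-partition lag for one consumer group on one topic. It is based on the same kafka_consumergroup_lag metric that the TooLargeConsumerGroupLag rule described above evaluates for a specific topic partition.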
The data extracted is described here. Table 6.2. Broker metrics output Name Information kafka_brokers Number of brokers in the Kafka cluster Table 6.3. Topic metrics output Name Information kafka_topic_partitions Number of partitions for a topic kafka_topic_partition_current_offset Current topic partition offset for a broker kafka_topic_partition_oldest_offset Oldest topic partition offset for a broker kafka_topic_partition_in_sync_replica Number of in-sync replicas for a topic partition kafka_topic_partition_leader Leader broker ID of a topic partition kafka_topic_partition_leader_is_preferred Shows 1 if a topic partition is using the preferred broker kafka_topic_partition_replicas Number of replicas for this topic partition kafka_topic_partition_under_replicated_partition Shows 1 if a topic partition is under-replicated Table 6.4. Consumer group metrics output Name Information kafka_consumergroup_current_offset Current topic partition offset for a consumer group kafka_consumergroup_lag Current approximate lag for a consumer group at a topic partition 6.5.4. Configuring Kafka Exporter This procedure shows how to configure Kafka Exporter in the Kafka resource through KafkaExporter properties. For more information about configuring the Kafka resource, see the sample Kafka YAML configuration in the Using AMQ Streams on OpenShift guide. The properties relevant to the Kafka Exporter configuration are shown in this procedure. You can configure these properties as part of a deployment or redeployment of the Kafka cluster. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the KafkaExporter properties for the Kafka resource. The properties you can configure are shown in this example configuration: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # ... kafkaExporter: image: my-org/my-image:latest 1 groupRegex: ".*" 2 topicRegex: ".*" 3 resources: 4 requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logging: debug 5 enableSaramaLogging: true 6 template: 7 pod: metadata: labels: label1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 8 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 9 initialDelaySeconds: 15 timeoutSeconds: 5 # ... 1 ADVANCED OPTION: Container image configuration, which is recommended only in special situations . 2 A regular expression to specify the consumer groups to include in the metrics. 3 A regular expression to specify the topics to include in the metrics. 4 CPU and memory resources to reserve . 5 Logging configuration, to log messages with a given severity (debug, info, warn, error, fatal) or above. 6 Boolean to enable Sarama logging, a Go client library used by Kafka Exporter. 7 Customization of deployment templates and pods . 8 Healthcheck readiness probes . 9 Healthcheck liveness probes . Create or update the resource: oc apply -f kafka.yaml What to do After configuring and deploying Kafka Exporter, you can enable Grafana to present the Kafka Exporter dashboards . Additional resources KafkaExporterTemplate schema reference . 6.5.5. Enabling the Kafka Exporter Grafana dashboard AMQ Streams provides example dashboard configuration files for Grafana . 
The Kafka Exporter dashboard is provided in the examples/metrics directory as a JSON file: strimzi-kafka-exporter.json If you deployed Kafka Exporter with your Kafka cluster, you can visualize the metrics data it exposes on the Grafana dashboard. Prerequisites Kafka is deployed with Kafka Exporter metrics configuration Prometheus and Prometheus Alertmanager are deployed to the Kafka cluster Grafana is deployed to the Kafka cluster This procedure assumes you already have access to the Grafana user interface and Prometheus has been added as a data source. If you are accessing the user interface for the first time, see Grafana . Procedure Access the Grafana user interface . Select the Strimzi Kafka Exporter dashboard. When metrics data has been collected for some time, the Kafka Exporter charts are populated. AMQ Streams Kafka Exporter Shows metrics for: Topic count Partition count Replicas count In-sync replicas count Under-replicated partitions count Partitions which are at their minimum in sync replica count Partitions which are under their minimum in sync replica count Partitions not on a preferred node Messages in per second from topics Messages consumed per second from topics Messages consumed per minute by consumer groups Lag by consumer group Number of partitions Latest offsets Oldest offsets Use the Grafana charts to analyze lag and to check if actions to reduce lag are having an impact on an affected consumer group. If, for example, Kafka brokers are adjusted to reduce lag, the dashboard will show the Lag by consumer group chart going down and the Messages consumed per minute chart going up. 6.6. Monitor Kafka Bridge If you are already using Prometheus and Grafana for monitoring of built-in Kafka metrics, you can configure Prometheus to also scrape the Kafka Bridge Prometheus endpoint. The example Grafana dashboard for the Kafka Bridge provides: Information about HTTP connections and related requests to the different endpoints Information about the Kafka consumers and producers used by the bridge JVM metrics from the bridge itself 6.6.1. Configuring Kafka Bridge You can enable the Kafka Bridge metrics in the KafkaBridge resource using the enableMetrics property. You can configure this property as part of a deployment or redeployment of the Kafka Bridge. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaBridge metadata: name: my-bridge spec: # ... bootstrapServers: my-cluster-kafka:9092 http: # ... enableMetrics: true # ... 6.6.2. Enabling the Kafka Bridge Grafana dashboard If you deployed Kafka Bridge with your Kafka cluster, you can enable Grafana to present the metrics data it exposes. A Kafka Bridge dashboard is provided in the examples/metrics directory as a JSON file: strimzi-kafka-bridge.json When metrics data has been collected for some time, the Kafka Bridge charts are populated. 
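If the charts stay empty, it can help to first confirm that the bridge itself is serving Prometheus metrics before looking at the Prometheus scrape configuration. The following spot check is only a sketch: it assumes the KafkaBridge resource is named my-bridge, that the resulting deployment is called my-bridge-bridge, and that metrics are exposed on the bridge HTTP port (8080 by default) under the /metrics path. Run the port-forward in one terminal and the curl command in another.

oc port-forward deployment/my-bridge-bridge 8080:8080
curl -s http://localhost:8080/metrics | head

A response listing metric names indicates that enableMetrics is active; an empty response or a connection error usually means the KafkaBridge resource has not been updated or the bridge pod has not yet rolled out.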
Kafka Bridge Shows metrics for: HTTP connections to the Kafka Bridge count HTTP requests being processed count Requests processed per second grouped by HTTP method The total request rate grouped by response codes (2XX, 4XX, 5XX) Bytes received and sent per second Requests for each Kafka Bridge endpoint Number of Kafka consumers, producers, and related opened connections used by the Kafka Bridge itself Kafka producer: The average number of records sent per second (grouped by topic) The number of outgoing bytes sent to all brokers per second (grouped by topic) The average number of records per second that resulted in errors (grouped by topic) Kafka consumer: The average number of records consumed per second (grouped by clientId-topic) The average number of bytes consumed per second (grouped by clientId-topic) Partitions assigned (grouped by clientId) JVM memory used JVM garbage collection time JVM garbage collection count 6.7. Monitor Cruise Control If you are already using Prometheus and Grafana for monitoring of built-in Kafka metrics, you can configure Prometheus to also scrape the Cruise Control Prometheus endpoint. The example Grafana dashboard for Cruise Control provides: Information about optimization proposals computation, goals violation, cluster balancedness, and more Information about REST API calls for rebalance proposals and actual rebalance operations JVM metrics from Cruise Control itself 6.7.1. Configuring Cruise Control You can enable the Cruise Control metrics in the Kafka resource using the cruiseControl.metrics property that contains the JMX exporter configuration about the metrics to expose. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # ... kafka: # ... zookeeper: # ... cruiseControl: metrics: lowercaseOutputName: true rules: - pattern: kafka.cruisecontrol<name=(.+)><>(\w+) name: kafka_cruisecontrol_USD1_USD2 type: GAUGE 6.7.2. Enabling the Cruise Control Grafana dashboard If you deployed Cruise Control with your Kafka cluster with the metrics enabled, you can enable Grafana to present the metrics data it exposes. A Cruise Control dashboard is provided in the examples/metrics directory as a JSON file: strimzi-cruise-control.json When metrics data has been collected for some time, the Cruise Control charts are populated. Cruise Control Shows metrics for: Number of snapshot windows that are monitored by Cruise Control Number of time windows considered valid because they contain enough samples to compute an optimization proposal Number of ongoing executions running for proposals or rebalances Current balancedness score of the Kafka cluster as calculated by the anomaly detector component of Cruise Control (every 5 minutes by default) Percentage of monitored partitions Number of goal violations reported by the anomaly detector (every 5 minutes by default) How often a disk read failure happens on the brokers Rate of metric sample fetch failures Time needed to compute an optimization proposal Time needed to create the cluster model How often a proposal request or an actual rebalance request is made through the Cruise Control REST API How often the overall cluster state and the user tasks state are requested through the Cruise Control REST API JVM memory used JVM garbage collection time JVM garbage collection count
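Because the metrics configuration shown above renames every Cruise Control JMX metric to the kafka_cruisecontrol_<name>_<attribute> form, one quick way to confirm that the dashboard has data to draw on is to query Prometheus for that prefix. This is a sketch rather than part of the shipped examples; adjust it to your environment.

{__name__=~"kafka_cruisecontrol_.*"}

Run the query in the Prometheus expression browser or in a Grafana Explore panel. If no series are returned, check that the cruiseControl.metrics configuration was applied to the Kafka resource and that the PodMonitor used for scraping covers the namespace where Cruise Control is running.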
[ "metrics ├── grafana-dashboards 1 │ ├── strimzi-cruise-control.json │ ├── strimzi-kafka-bridge.json │ ├── strimzi-kafka-connect.json │ ├── strimzi-kafka-exporter.json │ ├── strimzi-kafka-mirror-maker-2.json │ ├── strimzi-kafka.json │ ├── strimzi-operators.json │ └── strimzi-zookeeper.json ├── grafana-install │ └── grafana.yaml 2 ├── prometheus-additional-properties │ └── prometheus-additional.yaml - OPENSHIFT 3.11 ONLY 3 ├── prometheus-alertmanager-config │ └── alert-manager-config.yaml 4 ├── prometheus-install │ ├── alert-manager.yaml - OPENSHIFT 3.11 ONLY 5 │ ├── prometheus-rules.yaml 6 │ ├── prometheus.yaml - OPENSHIFT 3.11 ONLY 7 │ ├── strimzi-pod-monitor.yaml 8 ├── kafka-bridge-metrics.yaml 9 ├── kafka-connect-metrics.yaml 10 ├── kafka-cruise-control-metrics.yaml 11 ├── kafka-metrics.yaml 12 └── kafka-mirror-maker-2-metrics.yaml 13", "edit kafka KAFKA-CONFIG-FILE", "apply -f kafka-metrics.yaml", "get pods -n openshift-user-workload-monitoring", "NAME READY STATUS RESTARTS AGE prometheus-operator-5cc59f9bc6-kgcq8 1/1 Running 0 25s prometheus-user-workload-0 5/5 Running 1 14s prometheus-user-workload-1 5/5 Running 1 14s thanos-ruler-user-workload-0 3/3 Running 0 14s thanos-ruler-user-workload-1 3/3 Running 0 14s", "apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: cluster-operator-metrics labels: app: strimzi spec: selector: matchLabels: strimzi.io/kind: cluster-operator namespaceSelector: matchNames: - PROJECT-NAME 1 podMetricsEndpoints: - path: /metrics port: http", "apply -f strimzi-pod-monitor.yaml -n MY-PROJECT", "apply -f prometheus-rules.yaml -n MY-PROJECT", "apiVersion: v1 kind: ServiceAccount metadata: name: grafana-serviceaccount labels: app: strimzi", "apply -f GRAFANA-SERVICEACCOUNT -n MY-PROJECT", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: grafana-cluster-monitoring-binding labels: app: strimzi subjects: - kind: ServiceAccount name: grafana-serviceaccount namespace: MY-PROJECT 1 roleRef: kind: ClusterRole name: cluster-monitoring-view apiGroup: rbac.authorization.k8s.io", "apply -f GRAFANA-CLUSTER-MONITORING-BINDING -n MY-PROJECT", "serviceaccounts get-token grafana-serviceaccount -n MY-PROJECT", "apiVersion: 1 datasources: - name: Prometheus type: prometheus url: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 access: proxy basicAuth: false withCredentials: false isDefault: true jsonData: timeInterval: 5s tlsSkipVerify: true httpHeaderName1: \"Authorization\" secureJsonData: httpHeaderValue1: \"Bearer USD{ GRAFANA-ACCESS-TOKEN }\" 1 editable: true", "create configmap grafana-config --from-file=datasource.yaml -n MY-PROJECT", "apiVersion: apps/v1 kind: Deployment metadata: name: grafana labels: app: strimzi spec: replicas: 1 selector: matchLabels: name: grafana template: metadata: labels: name: grafana spec: serviceAccountName: grafana-serviceaccount containers: - name: grafana image: grafana/grafana:6.3.0 ports: - name: grafana containerPort: 3000 protocol: TCP volumeMounts: - name: grafana-data mountPath: /var/lib/grafana - name: grafana-logs mountPath: /var/log/grafana - name: grafana-config mountPath: /etc/grafana/provisioning/datasources/datasource.yaml readOnly: true subPath: datasource.yaml readinessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 5 periodSeconds: 10 livenessProbe: httpGet: path: /api/health port: 3000 initialDelaySeconds: 15 periodSeconds: 20 volumes: - name: grafana-data emptyDir: {} - name: grafana-logs emptyDir: {} - name: grafana-config configMap: 
name: grafana-config --- apiVersion: v1 kind: Service metadata: name: grafana labels: app: strimzi spec: ports: - name: grafana port: 3000 targetPort: 3000 protocol: TCP selector: name: grafana type: ClusterIP", "apply -f GRAFANA-APPLICATION -n MY-PROJECT", "create route edge MY-GRAFANA-ROUTE --service=grafana --namespace= KAFKA-NAMESPACE", "get routes NAME HOST/PORT PATH SERVICES MY-GRAFANA-ROUTE MY-GRAFANA-ROUTE-amq-streams.net grafana", "sed -i 's/namespace: .*/namespace: my-namespace /' prometheus.yaml", "sed -i '' 's/namespace: .*/namespace: my-namespace /' prometheus.yaml", "apply -f prometheus-additional.yaml", "apply -f strimzi-pod-monitor.yaml apply -f prometheus-rules.yaml apply -f prometheus.yaml", "create secret generic alertmanager-alertmanager --from-file=alertmanager.yaml=alert-manager-config.yaml", "apply -f alert-manager.yaml", "apply -f grafana.yaml", "get service grafana", "port-forward svc/grafana 3000:3000", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # kafkaExporter: image: my-org/my-image:latest 1 groupRegex: \".*\" 2 topicRegex: \".*\" 3 resources: 4 requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logging: debug 5 enableSaramaLogging: true 6 template: 7 pod: metadata: labels: label1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120 readinessProbe: 8 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 9 initialDelaySeconds: 15 timeoutSeconds: 5", "apply -f kafka.yaml", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaBridge metadata: name: my-bridge spec: # bootstrapServers: my-cluster-kafka:9092 http: # enableMetrics: true #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: # kafka: # zookeeper: # cruiseControl: metrics: lowercaseOutputName: true rules: - pattern: kafka.cruisecontrol<name=(.+)><>(\\w+) name: kafka_cruisecontrol_USD1_USD2 type: GAUGE" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_and_upgrading_amq_streams_on_openshift/assembly-metrics-setup-str
Chapter 7. Setting up RHACS Cloud Service with Red Hat OpenShift secured clusters
Chapter 7. Setting up RHACS Cloud Service with Red Hat OpenShift secured clusters 7.1. Creating a RHACS Cloud instance on Red Hat Cloud Access Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) by selecting an instance in the Red Hat Hybrid Cloud Console. An ACS instance contains the RHACS Cloud Service management interface and services that Red Hat configures and manages for you. The management interface connects to your secured clusters, which contain the services that scan and collect information about vulnerabilities. One instance can connect to and monitor many clusters. 7.1.1. Creating an instance in the console In the Red Hat Hybrid Cloud Console, create an ACS instance to connect to your secured clusters. Procedure To create an ACS instance : Log in to the Red Hat Hybrid Cloud Console. From the navigation menu, select Advanced Cluster Security ACS Instances . Select Create ACS instance and enter information into the displayed fields or select the appropriate option from the drop-down list: Name : Enter the name of your ACS instance . An ACS instance contains the RHACS Central component, also referred to as "Central", which includes the RHACS Cloud Service management interface and services that are configured and managed by Red Hat. You manage your secured clusters that communicate with Central. You can connect many secured clusters to one instance. Cloud provider : The cloud provider where Central is located. Select AWS . Cloud region : The region for your cloud provider where Central is located. Select one of the following regions: US-East, N. Virginia Europe, Ireland Availability zones : Use the default value ( Multi ). Click Create instance . 7.1.2. steps On each Red Hat OpenShift cluster you want to secure, create a project named stackrox . This project will contain the resources for RHACS Cloud Service secured clusters. 7.2. Creating a project on your Red Hat OpenShift secured cluster Create a project on each Red Hat OpenShift cluster that you want to secure. You then use this project to install RHACS Cloud Service resources by using the Operator or Helm charts. 7.2.1. Creating a project on your cluster Procedure In your OpenShift Container Platform cluster, go to Home Projects and create a project for RHACS Cloud Service. Use stackrox as the project Name . 7.2.2. steps In the ACS Console, create an init bundle or cluster registration secret (CRS). The init bundle contains secrets that allow communication between RHACS Cloud Service secured clusters and Central. The CRS can also be used to set up this initial communication and is more flexible and secure. 7.3. Generating an init bundle or cluster registration secret for secured clusters Before you set up a secured cluster, you must create an init bundle or cluster registration secret (CRS). The secured cluster then uses this bundle or CRS to authenticate with the Central instance, also called Central. You can create an init bundle or CRS by using either the RHACS portal or the roxctl CLI. You then apply the init bundle or CRS by using it to create resources. Note You must have the Admin user role to create an init bundle. RHACS uses a special artifact during installation that allows the RHACS Central component to communicate securely with secured clusters that you are adding. Before the 4.7 release, RHACS used init bundles exclusively for initiating the secure communication channel. Beginning with 4.7, RHACS provides an alternative to init bundles called cluster registration secrets (CRSes). 
Important Cluster registration secrets is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Cluster registration secrets (CRSes) offer improved security and are easier to use. CRSes contain a single token that can be used when installing RHACS by using both Operator and Helm installation methods. CRSes provide better security because they are only used for registering a new secured cluster. If leaked, the certificates and keys in an init bundle can be used to impersonate services running on a secured cluster. By contrast, the certificate and key in a CRS can only be used for registering a new cluster. After the cluster is set up by using the CRS, service-specific certificates are issued by Central and sent to the new secured cluster. These service certificates are used for communication between Central and secured clusters. Therefore, a CRS can be revoked after the cluster is registered without disconnecting secured clusters. You can use either an init bundle or a cluster registration secret (CRS) during installation of a secured cluster. However, RHACS does not yet provide a way to create a CRS by using the portal. Therefore, you must create the CRS by using the roxctl CLI. Before you set up a secured cluster, you must create an init bundle or CRS. The secured cluster then uses this bundle or CRS to authenticate with Central. You can create an init bundle by using either the RHACS portal or the roxctl CLI. If you are using a CRS, you must use the roxctl CLI to create it. You can then apply the init bundle or the CRS by using the OpenShift Container Platform web console or by using the oc or kubectl CLI. If you install RHACS by using Helm, you provide the init bundle or CRS when you run the helm install command. 7.3.1. Generating an init bundle 7.3.1.1. Generating an init bundle by using the RHACS portal You can create an init bundle containing secrets by using the RHACS portal. Note You must have the Admin user role to create an init bundle. Procedure Find the address of the RHACS portal as described in "Verifying Central installation using the Operator method". Log in to the RHACS portal. If you do not have secured clusters, the Platform Configuration Clusters page appears. Click Create init bundle . Enter a name for the cluster init bundle. Select your platform. Select the installation method you will use for your secured clusters: Operator or Helm chart . Click Download to generate and download the init bundle, which is created in the form of a YAML file. You can use one init bundle and its corresponding YAML file for all secured clusters if you are using the same installation method. Important Store this bundle securely because it contains secrets. Apply the init bundle by using it to create resources on the secured cluster. Install secured cluster services on each cluster. 7.3.1.2. Generating an init bundle by using the roxctl CLI You can create an init bundle with secrets by using the roxctl CLI. Note You must have the Admin user role to create init bundles. 
Prerequisites You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables: Set the ROX_API_TOKEN by running the following command: USD export ROX_API_TOKEN=<api_token> Set the ROX_CENTRAL_ADDRESS environment variable by running the following command: USD export ROX_CENTRAL_ADDRESS=<address>:<port_number> Important In RHACS Cloud Service, when using roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com . Procedure To generate a cluster init bundle containing secrets for Helm installations, run the following command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" \ central init-bundles generate <cluster_init_bundle_name> --output \ cluster_init_bundle.yaml To generate a cluster init bundle containing secrets for Operator installations, run the following command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" \ central init-bundles generate <cluster_init_bundle_name> --output-secrets \ cluster_init_bundle.yaml Important Ensure that you store this bundle securely because it contains secrets. You can use the same bundle to set up multiple secured clusters. 7.3.2. Generating a CRS 7.3.2.1. Generating a CRS by using the roxctl CLI You can create a cluster registration secret by using the roxctl CLI. Note You must have the Admin user role to create a CRS. Prerequisites You have configured the ROX_API_TOKEN and the ROX_CENTRAL_ADDRESS environment variables: Set the ROX_API_TOKEN by running the following command: USD export ROX_API_TOKEN=<api_token> Set the ROX_CENTRAL_ADDRESS environment variable by running the following command: USD export ROX_CENTRAL_ADDRESS=<address>:<port_number> Important In RHACS Cloud Service, when using roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com . Procedure To generate a CRS, run the following command: USD roxctl -e "USDROX_CENTRAL_ADDRESS" \ central crs generate <crs_name> \ 1 --output <file_name> 2 1 Enter an identifier or name for the CRS. 2 Enter a file name or use - for standard output. Important Ensure that you store this file securely because it contains secrets. You can use the same file to set up multiple secured clusters. You cannot retrieve a previously-generated CRS. Depending on the output that you select, the command might return some INFO messages about the CRS and the YAML file. Sample output INFO: Successfully generated new CRS INFO: INFO: Name: test-crs INFO: Created at: 2025-02-26T19:07:21Z INFO: Expires at: 2026-02-26T19:07:00Z INFO: Created By: sample-token INFO: ID: 9214a63f-7e0e-485a-baae-0757b0860ac9 # This is a StackRox Cluster Registration Secret (CRS). # It is used for setting up StackRox secured clusters. # NOTE: This file contains secret data that allows connecting new secured clusters to central, # and needs to be handled and stored accordingly. 
apiVersion: v1 data: crs: EXAMPLEZXlKMlpYSnphVzl1SWpveExDSkRRWE1pT2xzaUxTMHRMUzFDUlVkSlRpQkRSVkpVU1VaSlEwREXAMPLE= kind: Secret metadata: annotations: crs.platform.stackrox.io/created-at: "2025-02-26T19:07:21.800414339Z" crs.platform.stackrox.io/expires-at: "2026-02-26T19:07:00Z" crs.platform.stackrox.io/id: 9214a63f-7e0e-485a-baae-0757b0860ac9 crs.platform.stackrox.io/name: test-crs creationTimestamp: null name: cluster-registration-secret INFO: Then CRS needs to be stored securely, since it contains secrets. INFO: It is not possible to retrieve previously generated CRSs. 7.3.3. steps Applying an init bundle or cluster registration secret for secured clusters 7.4. Applying an init bundle or cluster registration secret for secured clusters Apply the init bundle or cluster registration secret (CRS) by using it to create resources. Note You must have the Admin user role to apply an init bundle or CRS. 7.4.1. Applying the init bundle on the secured cluster Before you configure a secured cluster, you must apply the init bundle by using it to create the required resources on the secured cluster. Applying the init bundle allows the services on the secured cluster to communicate with RHACS Cloud Service. Note If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; See "Installing RHACS on secured clusters by using Helm charts" in the additional resources section. Prerequisites You must have generated an init bundle containing secrets. You must have created the stackrox project, or namespace, on the cluster where secured cluster services will be installed. Using stackrox for the project is not required, but ensures that vulnerabilities for RHACS processes are not reported when scanning your clusters. Procedure To create resources, perform only one of the following steps: Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, make sure that you are in the stackrox namespace. In the top menu, click + to open the Import YAML page. You can drag the init bundle file or copy and paste its contents into the editor, and then click Create . When the command is complete, the display shows that the collector-tls , sensor-tls , and admission-control-tls resources were created. Create resources using the Red Hat OpenShift CLI: Using the Red Hat OpenShift CLI, run the following command to create the resources: USD oc create -f <init_bundle.yaml> \ 1 -n <stackrox> 2 1 Specify the file name of the init bundle containing the secrets. 2 Specify the name of the project where Central services are installed. Verification Restart Sensor to pick up the new certificates. For more information about how to restart Sensor, see "Restarting the Sensor container" in the "Additional resources" section. 7.4.2. Applying the cluster registration secret (CRS) on the secured cluster Before you configure a secured cluster, you must apply the CRS to the secured cluster. After you have applied the CRS, the services on the secured cluster can communicate securely with RHACS Cloud Service. Note If you are installing by using Helm charts, do not perform this step. Complete the installation by using Helm; See "Installing RHACS on secured clusters by using Helm charts" in the additional resources section. Prerequisites You must have generated a CRS. 
Procedure To create resources, perform only one of the following steps: Create resources using the OpenShift Container Platform web console: In the OpenShift Container Platform web console, go to the stackrox project or the project where you want to install the secured cluster services. In the top menu, click + to open the Import YAML page. You can drag the CRS file or copy and paste its contents into the editor, and then click Create . When the command is complete, the display shows that the secret named cluster-registration-secret was created. Create resources using the Red Hat OpenShift CLI: Using the Red Hat OpenShift CLI, run the following command to create the resources: USD oc create -f <file_name.yaml> \ 1 -n <stackrox> 2 1 Specify the file name of the CRS. 2 Specify the name of the project where secured cluster services are installed. Verification Restart Sensor to pick up the new certificates. For more information about how to restart Sensor, see "Restarting the Sensor container" in the "Additional resources" section. 7.4.3. steps On each Red Hat OpenShift cluster, install the RHACS Operator . Install RHACS secured cluster services in all clusters that you want to monitor. 7.4.4. Additional resources Restarting the Sensor container 7.5. Installing the Operator Install the RHACS Operator on your secured clusters. 7.5.1. Installing the RHACS Operator for RHACS Cloud Service Using the OperatorHub provided with OpenShift Container Platform is the easiest way to install the RHACS Operator. Prerequisites You have access to an OpenShift Container Platform cluster using an account with Operator installation permissions. You must be using OpenShift Container Platform 4.12 or later. For information about supported platforms and architecture, see the Red Hat Advanced Cluster Security for Kubernetes Support Matrix . Procedure In the web console, go to the Operators OperatorHub page. If Red Hat Advanced Cluster Security for Kubernetes is not displayed, enter Advanced Cluster Security into the Filter by keyword box to find the Red Hat Advanced Cluster Security for Kubernetes Operator. Select the Red Hat Advanced Cluster Security for Kubernetes Operator to view the details page. Read the information about the Operator, and then click Install . On the Install Operator page: Keep the default value for Installation mode as All namespaces on the cluster . Select a specific namespace in which to install the Operator for the Installed namespace field. Install the Red Hat Advanced Cluster Security for Kubernetes Operator in the rhacs-operator namespace. Select automatic or manual updates for Update approval . If you select automatic updates, when a new version of the Operator is available, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator. If you select manual updates, when a newer version of the Operator is available, OLM creates an update request. As a cluster administrator, you must manually approve the update request to update the Operator to the latest version. Red Hat recommends enabling automatic upgrades for Operator in RHACS Cloud Service. See the Red Hat Advanced Cluster Security for Kubernetes Support Matrix for more information. Click Install . Verification After the installation completes, go to Operators Installed Operators to verify that the Red Hat Advanced Cluster Security for Kubernetes Operator is listed with the status of Succeeded . 7.5.2. 
steps On each Red Hat OpenShift cluster, install secured cluster resources in the stackrox project . 7.6. Installing secured cluster resources from RHACS Cloud Service You can install RHACS Cloud Service on your secured clusters by using the Operator or Helm charts. You can also use the roxctl CLI to install it, but do not use this method unless you have a specific installation need that requires using it. Prerequisites During RHACS installation, you noted the Central instance address. You can view this information by choosing Advanced Cluster Security ACS Instances from the cloud console navigation menu, and then clicking the ACS instance you created. If you are installing by using the Operator, you created your Red Hat OpenShift cluster that you want to secure and installed the Operator on it. You created and downloaded the init bundle or cluster registration secret (CRS) by using the ACS Console or by using the roxctl CLI. You applied the init bundle or CRS on the cluster that you want to secure, unless you are installing by using a Helm chart. 7.6.1. Installing RHACS on secured clusters by using the Operator 7.6.1.1. Installing secured cluster services You can install Secured Cluster services on your clusters by using the Operator, which creates the SecuredCluster custom resource. You must install the Secured Cluster services on every cluster in your environment that you want to monitor. Important When you install Red Hat Advanced Cluster Security for Kubernetes: If you are installing RHACS for the first time, you must first install the Central custom resource because the SecuredCluster custom resource installation is dependent on certificates that Central generates. Do not install SecuredCluster in projects whose names start with kube , openshift , or redhat , or in the istio-system project. If you are installing RHACS SecuredCluster custom resource on a cluster that also hosts Central, ensure that you install it in the same namespace as Central. If you are installing Red Hat Advanced Cluster Security for Kubernetes SecuredCluster custom resource on a cluster that does not host Central, Red Hat recommends that you install the Red Hat Advanced Cluster Security for Kubernetes SecuredCluster custom resource in its own project and not in the project in which you have installed the Red Hat Advanced Cluster Security for Kubernetes Operator. Prerequisites If you are using OpenShift Container Platform, you must install version 4.12 or later. You have installed the RHACS Operator on the cluster that you want to secure, called the secured cluster. You have generated an init bundle or cluster registration secret (CRS) and applied it to the cluster in the recommended stackrox namespace. Procedure On the OpenShift Container Platform web console for the secured cluster, go to the Operators Installed Operators page. Click the RHACS Operator. If you have installed the Operator in the recommended namespace, OpenShift Container Platform lists the project as rhacs-operator . Select Project: rhacs-operator Create project . Note If you installed the Operator in a different namespace, OpenShift Container Platform lists the name of that namespace instead of rhacs-operator . Click Installed Operators . You should have created the stackrox namespace when you applied the init bundle or the CRS. Make sure that you are in this namespace by verifying that Project:stackrox is selected in the menu. In Provided APIs , click Secured Cluster . Click Create SecuredCluster . 
Select one of the following options in the Configure via field: Form view : Use this option if you want to use the on-screen fields to configure the secured cluster and do not need to change any other fields. YAML view : Use this view to set up the secured cluster by using the YAML file. The YAML file is displayed in the window and you can edit fields in it. If you select this option, when you are finished editing the file, click Create . If you are using Form view , enter the new project name by accepting or editing the default name. The default value is stackrox-secured-cluster-services . Optional: Add any labels for the cluster. Enter a unique name for your SecuredCluster custom resource. For Central Endpoint , enter the address of your Central instance. For example, if Central is available at https://central.example.com , then specify the central endpoint as central.example.com . For RHACS Cloud Service use the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created. Use the default value of central.stackrox.svc:443 only if you are installing secured cluster services in the same cluster where Central is installed. Do not use the default value when you are configuring multiple clusters. Instead, use the hostname when configuring the Central Endpoint value for each cluster. For the remaining fields, accept the default values or configure custom values if needed. For example, you might need to configure TLS if you are using custom certificates or untrusted CAs. See "Configuring Secured Cluster services options for RHACS using the Operator" for more information. Click Create . After a brief pause, the SecuredClusters page displays the status of stackrox-secured-cluster-services . You might see the following conditions: Conditions: Deployed, Initialized : The secured cluster services have been installed and the secured cluster is communicating with Central. Conditions: Initialized, Irreconcilable : The secured cluster is not communicating with Central. Make sure that you applied the init bundle you created in the RHACS web portal to the secured cluster. steps Configure additional secured cluster settings (optional). Verify installation. 7.6.2. Installing RHACS Cloud Service on secured clusters by using Helm charts You can install RHACS on secured clusters by using Helm charts with no customization, using the default values, or with customizations of configuration parameters. First, ensure that you add the Helm chart repository. 7.6.2.1. Adding the Helm chart repository Procedure Add the RHACS charts repository. USD helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/ The Helm repository for Red Hat Advanced Cluster Security for Kubernetes includes Helm charts for installing different components, including: Central services Helm chart ( central-services ) for installing the centralized components (Central and Scanner). Note You deploy centralized components only once and you can monitor multiple separate clusters by using the same installation. Secured Cluster Services Helm chart ( secured-cluster-services ) for installing the per-cluster and per-node components (Sensor, Admission Controller, Collector, and Scanner-slim). Note Deploy the per-cluster components into each cluster that you want to monitor and deploy the per-node components in all nodes that you want to monitor. 
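If you intend to customize either chart, it can also help to review the configuration options it accepts before installing anything. The command below is a standard Helm command rather than an RHACS-specific step; it prints the default values of the secured-cluster-services chart:

helm show values rhacs/secured-cluster-services

You can redirect the output to a file and use it as a reference when preparing the values-public.yaml and values-private.yaml files described later in this section.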
Verification Run the following command to verify the added chart repository: USD helm search repo -l rhacs/ 7.6.2.2. Installing RHACS Cloud Service on secured clusters by using Helm charts without customizations 7.6.2.2.1. Installing the secured-cluster-services Helm chart without customization Use the following instructions to install the secured-cluster-services Helm chart to deploy the per-cluster and per-node components (Sensor, Admission controller, Collector, and Scanner-slim). Prerequisites You must have generated an RHACS init bundle or CRS for your cluster. You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . You must have the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the ACS instance you created. Procedure Run one of the following commands on your Kubernetes-based clusters: If you are using an init bundle, run the following command: USD helm install -n stackrox --create-namespace \ stackrox-secured-cluster-services rhacs/secured-cluster-services \ -f <path_to_cluster_init_bundle.yaml> \ 1 -f <path_to_pull_secret.yaml> \ 2 --set clusterName=<name_of_the_secured_cluster> \ --set centralEndpoint=<endpoint_of_central_service> \ 3 --set imagePullSecrets.username=<your redhat.com username> \ 4 --set imagePullSecrets.password=<your redhat.com password> 5 1 Use the -f option to specify the path for the init bundle. 2 Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication. 3 Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created. 4 Include the user name for your pull secret for Red Hat Container Registry authentication. 5 Include the password for your pull secret for Red Hat Container Registry authentication. Procedure Run one of the following commands on an OpenShift Container Platform cluster: If you are using an init bundle, run the following command: USD helm install -n stackrox --create-namespace \ stackrox-secured-cluster-services rhacs/secured-cluster-services \ -f <path_to_cluster_init_bundle.yaml> \ 1 -f <path_to_pull_secret.yaml> \ 2 --set clusterName=<name_of_the_secured_cluster> \ --set centralEndpoint=<endpoint_of_central_service> \ 3 --set scanner.disable=false 4 1 Use the -f option to specify the path for the init bundle. 2 Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication. 3 Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created. 4 Set the value of the scanner.disable parameter to false , which means that Scanner-slim will be enabled during the installation. In Kubernetes, the secured cluster services now include Scanner-slim. 
If you are using a CRS, run the following command: USD helm install -n stackrox --create-namespace \ stackrox-secured-cluster-services rhacs/secured-cluster-services \ --set-file crs.file=<crs_file_name.yaml> \ 1 -f <path_to_pull_secret.yaml> \ 2 --set clusterName=<name_of_the_secured_cluster> \ --set centralEndpoint=<endpoint_of_central_service> \ 3 --set scanner.disable=false 4 1 Use the name of the file in which the generated CRS has been stored. 2 Use the -f option to specify the path for the pull secret for Red Hat Container Registry authentication. 3 Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created. 4 Set the value of the scanner.disable parameter to false , which means that Scanner-slim will be enabled during the installation. In Kubernetes, the secured cluster services now include Scanner-slim. Additional resources Generating an init bundle for secured clusters Applying an init bundle for secured clusters 7.6.2.3. Configuring the secured-cluster-services Helm chart with customizations You can use Helm chart configuration parameters with the helm install and helm upgrade commands. Specify these parameters by using the --set option or by creating YAML configuration files. Create the following files for configuring the Helm chart for installing Red Hat Advanced Cluster Security for Kubernetes: Public configuration file values-public.yaml : Use this file to save all non-sensitive configuration options. Private configuration file values-private.yaml : Use this file to save all sensitive configuration options. Ensure that you store this file securely. Important When using the secured-cluster-services Helm chart, do not change the values.yaml file that is part of the chart. 7.6.2.3.1. Configuration parameters Parameter Description clusterName Name of your cluster. centralEndpoint Address of the Central endpoint. If you are using a non-gRPC capable load balancer, use the WebSocket protocol by prefixing the endpoint address with wss:// . When configuring multiple clusters, use the hostname for the address. For example, central.example.com . sensor.endpoint Address of the Sensor endpoint including port number. sensor.imagePullPolicy Image pull policy for the Sensor container. sensor.serviceTLS.cert The internal service-to-service TLS certificate that Sensor uses. sensor.serviceTLS.key The internal service-to-service TLS certificate key that Sensor uses. sensor.resources.requests.memory The memory request for the Sensor container. Use this parameter to override the default value. sensor.resources.requests.cpu The CPU request for the Sensor container. Use this parameter to override the default value. sensor.resources.limits.memory The memory limit for the Sensor container. Use this parameter to override the default value. sensor.resources.limits.cpu The CPU limit for the Sensor container. Use this parameter to override the default value. sensor.nodeSelector Specify a node selector label as label-key: label-value to force Sensor to only schedule on nodes with the specified label. sensor.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Sensor. This parameter is mainly used for infrastructure nodes. image.main.name The name of the main image. image.collector.name The name of the Collector image. 
image.main.registry The address of the registry you are using for the main image. image.collector.registry The address of the registry you are using for the Collector image. image.scanner.registry The address of the registry you are using for the Scanner image. image.scannerDb.registry The address of the registry you are using for the Scanner DB image. image.scannerV4.registry The address of the registry you are using for the Scanner V4 image. image.scannerV4DB.registry The address of the registry you are using for the Scanner V4 DB image. image.main.pullPolicy Image pull policy for main images. image.collector.pullPolicy Image pull policy for the Collector images. image.main.tag Tag of main image to use. image.collector.tag Tag of collector image to use. collector.collectionMethod Either CORE_BPF or NO_COLLECTION . collector.imagePullPolicy Image pull policy for the Collector container. collector.complianceImagePullPolicy Image pull policy for the Compliance container. collector.disableTaintTolerations If you specify false , tolerations are applied to Collector, and the collector pods can schedule onto all nodes with taints. If you specify it as true , no tolerations are applied, and the collector pods are not scheduled onto nodes with taints. collector.resources.requests.memory The memory request for the Collector container. Use this parameter to override the default value. collector.resources.requests.cpu The CPU request for the Collector container. Use this parameter to override the default value. collector.resources.limits.memory The memory limit for the Collector container. Use this parameter to override the default value. collector.resources.limits.cpu The CPU limit for the Collector container. Use this parameter to override the default value. collector.complianceResources.requests.memory The memory request for the Compliance container. Use this parameter to override the default value. collector.complianceResources.requests.cpu The CPU request for the Compliance container. Use this parameter to override the default value. collector.complianceResources.limits.memory The memory limit for the Compliance container. Use this parameter to override the default value. collector.complianceResources.limits.cpu The CPU limit for the Compliance container. Use this parameter to override the default value. collector.serviceTLS.cert The internal service-to-service TLS certificate that Collector uses. collector.serviceTLS.key The internal service-to-service TLS certificate key that Collector uses. admissionControl.listenOnCreates This setting controls whether Kubernetes is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for workload creation events. admissionControl.listenOnUpdates When you set this parameter as false , Red Hat Advanced Cluster Security for Kubernetes creates the ValidatingWebhookConfiguration in a way that causes the Kubernetes API server not to send object update events. Since the volume of object updates is usually higher than the object creates, leaving this as false limits the load on the admission control service and decreases the chances of a malfunctioning admission control service. admissionControl.listenOnEvents This setting controls whether the cluster is configured to contact Red Hat Advanced Cluster Security for Kubernetes with AdmissionReview requests for Kubernetes exec and portforward events. RHACS does not support this feature on OpenShift Container Platform 3.11. 
admissionControl.dynamic.enforceOnCreates This setting controls whether Red Hat Advanced Cluster Security for Kubernetes evaluates policies; if it is disabled, all AdmissionReview requests are automatically accepted. admissionControl.dynamic.enforceOnUpdates This setting controls the behavior of the admission control service. You must specify listenOnUpdates as true for this to work. admissionControl.dynamic.scanInline If you set this option to true , the admission control service requests an image scan before making an admission decision. Since image scans take several seconds, enable this option only if you can ensure that all images used in your cluster are scanned before deployment (for example, by a CI integration during image build). This option corresponds to the Contact image scanners option in the RHACS portal. admissionControl.dynamic.disableBypass Set it to true to disable bypassing the Admission controller. admissionControl.dynamic.timeout Use this parameter to specify the maximum number of seconds RHACS must wait for an admission review before marking it as fail open. If the admission webhook does not receive information that it is requesting before the end of the timeout period, it fails, but in fail open status, it still allows the operation to succeed. For example, the admission controller would allow a deployment to be created even if a scan had timed out and RHACS could not determine if the deployment violated a policy. Beginning in release 4.5, Red Hat reduced the default timeout setting for the RHACS admission controller webhooks from 20 seconds to 10 seconds, resulting in an effective timeout of 12 seconds within the ValidatingWebhookConfiguration . This change does not negatively affect OpenShift Container Platform users because OpenShift Container Platform caps the timeout at 13 seconds. admissionControl.resources.requests.memory The memory request for the Admission Control container. Use this parameter to override the default value. admissionControl.resources.requests.cpu The CPU request for the Admission Control container. Use this parameter to override the default value. admissionControl.resources.limits.memory The memory limit for the Admission Control container. Use this parameter to override the default value. admissionControl.resources.limits.cpu The CPU limit for the Admission Control container. Use this parameter to override the default value. admissionControl.nodeSelector Specify a node selector label as label-key: label-value to force Admission Control to only schedule on nodes with the specified label. admissionControl.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Admission Control. This parameter is mainly used for infrastructure nodes. admissionControl.namespaceSelector If the admission controller webhook needs a specific namespaceSelector , you can specify the corresponding selector here. Use this parameter to override the default, which avoids a few system namespaces. admissionControl.serviceTLS.cert The internal service-to-service TLS certificate that Admission Control uses. admissionControl.serviceTLS.key The internal service-to-service TLS certificate key that Admission Control uses. registryOverride Use this parameter to override the default docker.io registry. Specify the name of your registry if you are using some other registry. 
createUpgraderServiceAccount Specify true to create the sensor-upgrader account. By default, Red Hat Advanced Cluster Security for Kubernetes creates a service account called sensor-upgrader in each secured cluster. This account is highly privileged but is only used during upgrades. If you do not create this account, you must complete future upgrades manually if the Sensor does not have enough permissions. createSecrets Specify false to skip the orchestrator secret creation for the Sensor, Collector, and Admission controller. collector.slimMode Deprecated. Specify true if you want to use a slim Collector image for deploying Collector. sensor.resources Resource specification for Sensor. admissionControl.resources Resource specification for Admission controller. collector.resources Resource specification for Collector. collector.complianceResources Resource specification for Collector's Compliance container. exposeMonitoring If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes exposes Prometheus metrics endpoints on port number 9090 for the Sensor, Collector, and the Admission controller. auditLogs.disableCollection If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes disables the audit log detection features used to detect access and modifications to configuration maps and secrets. scanner.disable If you set this option to false, Red Hat Advanced Cluster Security for Kubernetes deploys a Scanner-slim and Scanner DB in the secured cluster to allow scanning images on the integrated OpenShift image registry. Enabling Scanner-slim is supported on OpenShift Container Platform and Kubernetes secured clusters. Defaults to true. scanner.replicas The number of replicas to create for the Scanner deployment when autoscaling is disabled. scanner.logLevel Setting this parameter allows you to modify the Scanner log level. Use this option only for troubleshooting purposes. scanner.autoscaling.disable If you set this option to true, Red Hat Advanced Cluster Security for Kubernetes disables autoscaling on the Scanner deployment. scanner.autoscaling.minReplicas The minimum number of replicas for autoscaling. Defaults to 2. scanner.autoscaling.maxReplicas The maximum number of replicas for autoscaling. Defaults to 5. scanner.nodeSelector Specify a node selector label as label-key: label-value to force Scanner to only schedule on nodes with the specified label. scanner.tolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner. scanner.dbNodeSelector Specify a node selector label as label-key: label-value to force Scanner DB to only schedule on nodes with the specified label. scanner.dbTolerations If the node selector selects tainted nodes, use this parameter to specify a taint toleration key, value, and effect for Scanner DB. scanner.resources.requests.memory The memory request for the Scanner container. Use this parameter to override the default value. scanner.resources.requests.cpu The CPU request for the Scanner container.
Use this parameter to override the default value. scanner.resources.limits.memory The memory limit for the Scanner container. Use this parameter to override the default value. scanner.resources.limits.cpu The CPU limit for the Scanner container. Use this parameter to override the default value. scanner.dbResources.requests.memory The memory request for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.requests.cpu The CPU request for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.limits.memory The memory limit for the Scanner DB container. Use this parameter to override the default value. scanner.dbResources.limits.cpu The CPU limit for the Scanner DB container. Use this parameter to override the default value. monitoring.openshift.enabled If you set this option to false , Red Hat Advanced Cluster Security for Kubernetes will not set up Red Hat OpenShift monitoring. Defaults to true on Red Hat OpenShift 4. network.enableNetworkPolicies To provide security at the network level, RHACS creates default NetworkPolicy resources in the namespace where secured cluster resources are installed. These network policies allow ingress to specific components on specific ports. If you do not want RHACS to create these policies, set this parameter to False . This is a Boolean value. The default value is True , which means the default policies are automatically created. Warning Disabling creation of default network policies can break communication between RHACS components. If you disable creation of default policies, you must create your own network policies to allow this communication. 7.6.2.3.1.1. Environment variables You can specify environment variables for Sensor and Admission controller in the following format: customize: envVars: ENV_VAR1: "value1" ENV_VAR2: "value2" The customize setting allows you to specify custom Kubernetes metadata (labels and annotations) for all objects created by this Helm chart and additional pod labels, pod annotations, and container environment variables for workloads. The configuration is hierarchical, in the sense that metadata defined at a more generic scope (for example, for all objects) can be overridden by metadata defined at a narrower scope (for example, only for the Sensor deployment). 7.6.2.3.2. Installing the secured-cluster-services Helm chart with customizations After you configure the values-public.yaml and values-private.yaml files, install the secured-cluster-services Helm chart to deploy the following per-cluster and per-node components: Sensor Admission controller Collector Scanner: optional for secured clusters when the StackRox Scanner is installed Scanner DB: optional for secured clusters when the StackRox Scanner is installed Scanner V4 Indexer and Scanner V4 DB: optional for secured clusters when Scanner V4 is installed Prerequisites You must have generated an RHACS init bundle for your cluster. You must have access to the Red Hat Container Registry and a pull secret for authentication. For information about downloading images from registry.redhat.io , see Red Hat Container Registry Authentication . You must have the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created. 
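To make the parameter reference above concrete, a minimal values-public.yaml might look like the following sketch. The cluster name, endpoint, and resource figures are illustrative placeholders, and sensitive settings such as the serviceTLS certificates and keys belong in values-private.yaml instead:

clusterName: production-cluster-1                       # name shown for this secured cluster in the RHACS portal
centralEndpoint: acs-data-abc123.acs.rhcloud.com:443    # Central API Endpoint from the Red Hat Hybrid Cloud Console
sensor:
  resources:
    requests:
      memory: "2Gi"
      cpu: "1"
    limits:
      memory: "4Gi"
      cpu: "2"
collector:
  collectionMethod: CORE_BPF                            # or NO_COLLECTION to disable runtime collection
scanner:
  disable: false                                        # deploy Scanner-slim and Scanner DB in the secured cluster
exposeMonitoring: true                                  # expose Prometheus metrics on port 9090

Pass both configuration files to the helm install or helm upgrade command with the -f option, as shown in the procedure that follows.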
Procedure Run the following command: USD helm install -n stackrox \ --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services \ -f <name_of_cluster_init_bundle.yaml> \ -f <path_to_values_public.yaml> \ 1 -f <path_to_values_private.yaml> \ 2 --set imagePullSecrets.username=<username> \ 3 --set imagePullSecrets.password=<password> 4 1 Use the -f option to specify the paths for your public YAML configuration file. 2 Use the -f option to specify the paths for your private YAML configuration file. 3 Include the user name for your pull secret for Red Hat Container Registry authentication. 4 Include the password for your pull secret for Red Hat Container Registry authentication. Note To deploy secured-cluster-services Helm chart by using a continuous integration (CI) system, pass the init bundle YAML file as an environment variable to the helm install command: USD helm install ... -f <(echo "USDINIT_BUNDLE_YAML_SECRET") 1 1 If you are using base64 encoded variables, use the helm install ... -f <(echo "USDINIT_BUNDLE_YAML_SECRET" | base64 --decode) command instead. Additional resources Generating an init bundle for secured clusters Applying an init bundle for secured clusters 7.6.2.4. Changing configuration options after deploying the secured-cluster-services Helm chart You can make changes to any configuration options after you have deployed the secured-cluster-services Helm chart. When using the helm upgrade command to make changes, the following guidelines and requirements apply: You can also specify configuration values using the --set or --set-file parameters. However, these options are not saved, and you must manually specify all the options again whenever you make changes. Some changes, such as enabling a new component like Scanner V4, require new certificates to be issued for the component. Therefore, you must provide a CA when making these changes. If the CA was generated by the Helm chart during the initial installation, you must retrieve these automatically generated values from the cluster and provide them to the helm upgrade command. The post-installation notes of the central-services Helm chart include a command for retrieving the automatically generated values. If the CA was generated outside of the Helm chart and provided during the installation of the central-services chart, then you must perform that action again when using the helm upgrade command, for example, by using the --reuse-values flag with the helm upgrade command. Procedure Update the values-public.yaml and values-private.yaml configuration files with new values. Run the helm upgrade command and specify the configuration files using the -f option: USD helm upgrade -n stackrox \ stackrox-secured-cluster-services rhacs/secured-cluster-services \ --reuse-values \ 1 -f <path_to_values_public.yaml> \ -f <path_to_values_private.yaml> 1 If you have modified values that are not included in the values_public.yaml and values_private.yaml files, include the --reuse-values parameter. 7.6.3. Installing RHACS on secured clusters by using the roxctl CLI To install RHACS on secured clusters by using the CLI, perform the following steps: Install the roxctl CLI. Install Sensor. 7.6.3.1. Installing the roxctl CLI You must first download the binary. You can install roxctl on Linux, Windows, or macOS. 7.6.3.1.1. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. 
Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 7.6.3.1.2. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 7.6.3.1.3. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 7.6.3.2. Installing Sensor To monitor a cluster, you must deploy Sensor. You must deploy Sensor into each cluster that you want to monitor. This installation method is also called the manifest installation method. To perform an installation by using the manifest installation method, follow only one of the following procedures: Use the RHACS web portal to download the cluster bundle, and then extract and run the sensor script. Use the roxctl CLI to generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance. Prerequisites You must have already installed Central services, or you can access Central services by selecting your ACS instance on Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service). 7.6.3.2.1. Manifest installation method by using the web portal Procedure On your secured cluster, in the RHACS portal, go to Platform Configuration Clusters . Select Secure a cluster Legacy installation method . Specify a name for the cluster. Provide appropriate values for the fields based on where you are deploying the Sensor. Enter the Central API Endpoint address. You can view this information by choosing Advanced Cluster Security ACS Instances from the Red Hat Hybrid Cloud Console navigation menu, then clicking the RHACS instance you created. Click to continue with the Sensor setup. Click Download YAML File and Keys to download the cluster bundle (zip archive). Important The cluster bundle zip archive includes unique configurations and keys for each cluster. Do not reuse the same files in another cluster. 
From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle: USD unzip -d sensor sensor-<cluster_name>.zip USD ./sensor/sensor.sh If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help. After Sensor is deployed, it contacts Central and provides cluster information. 7.6.3.2.2. Manifest installation by using the roxctl CLI Procedure Generate the required sensor configuration for your OpenShift Container Platform cluster and associate it with your Central instance by running the following command: USD roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central "USDROX_ENDPOINT" 1 1 For the --openshift-version option, specify the major OpenShift Container Platform version number for your cluster. For example, specify 3 for OpenShift Container Platform version 3.x and specify 4 for OpenShift Container Platform version 4.x . From a system that has access to the monitored cluster, extract and run the sensor script from the cluster bundle: USD unzip -d sensor sensor-<cluster_name>.zip USD ./sensor/sensor.sh If you get a warning that you do not have the required permissions to deploy Sensor, follow the on-screen instructions, or contact your cluster administrator for help. After Sensor is deployed, it contacts Central and provides cluster information. Verification Return to the RHACS portal and check if the deployment is successful. If successful, when viewing your list of clusters in Platform Configuration Clusters , the cluster status displays a green checkmark and a Healthy status. If you do not see a green checkmark, use the following command to check for problems: On OpenShift Container Platform, enter the following command: USD oc get pod -n stackrox -w On Kubernetes, enter the following command: USD kubectl get pod -n stackrox -w Click Finish to close the window. After installation, Sensor starts reporting security information to RHACS and the RHACS portal dashboard begins showing deployments, images, and policy violations from the cluster on which you have installed the Sensor. 7.6.4. steps Verify installation by ensuring that your secured clusters can communicate with the ACS instance. 7.7. Configuring the proxy for secured cluster services in RHACS Cloud Service You must configure the proxy settings for secured cluster services within the Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service) environment to establish a connection between the Secured Cluster and the specified proxy server. This ensures reliable data collection and transmission. 7.7.1. Specifying the environment variables in the SecuredCluster CR To configure an egress proxy, you can either use the cluster-wide Red Hat OpenShift proxy or specify the HTTP_PROXY , HTTPS_PROXY , and NO_PROXY environment variables within the SecuredCluster Custom Resource (CR) configuration file to ensure proper use of the proxy and bypass for internal requests within the specified domain. The proxy configuration applies to all running services: Sensor, Collector, Admission Controller and Scanner. 
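If you prefer to rely on the cluster-wide Red Hat OpenShift proxy instead of explicit variables, you can first confirm what it already defines. A quick check on an OpenShift Container Platform 4 cluster (a sketch, assuming cluster-admin access; the relevant fields are typically spec.httpProxy, spec.httpsProxy, and spec.noProxy):

oc get proxy/cluster -o yaml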
Procedure Specify the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables under the customize specification in the SecuredCluster CR configuration file: For example: # proxy collector customize: envVars: - name: HTTP_PROXY value: http://egress-proxy.stackrox.svc:xxxx 1 - name: HTTPS_PROXY value: http://egress-proxy.stackrox.svc:xxxx 2 - name: NO_PROXY value: .stackrox.svc 3 1 The variable HTTP_PROXY is set to the value http://egress-proxy.stackrox.svc:xxxx . This is the proxy server used for HTTP connections. 2 The variable HTTPS_PROXY is set to the value http://egress-proxy.stackrox.svc:xxxx . This is the proxy server used for HTTPS connections. 3 The variable NO_PROXY is set to .stackrox.svc . This variable defines the hostnames or IP addresses that are not accessed through the proxy server. 7.8. Verifying installation of secured clusters After installing RHACS Cloud Service, perform the following steps to verify that the installation was successful. To verify installation, access your ACS Console from the Red Hat Hybrid Cloud Console. The Dashboard displays the number of clusters that RHACS Cloud Service is monitoring, along with information about nodes, deployments, images, and violations. If no data appears in the ACS Console: Ensure that at least one secured cluster is connected to your RHACS Cloud Service instance. For more information, see Installing secured cluster resources from RHACS Cloud Service . Examine your Sensor pod logs to ensure that the connection to your RHACS Cloud Service instance is successful. In the Red Hat OpenShift cluster, go to Platform Configuration Clusters to verify that the components are healthy and view additional operational information. Examine the values in the SecuredCluster API in the Operator on your local cluster to ensure that the Central API Endpoint has been entered correctly. This value must match the value shown in the ACS instance details in the Red Hat Hybrid Cloud Console.
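For reference, the following sketch shows where the customize specification from the procedure above sits inside the SecuredCluster custom resource. The resource name, namespace, cluster name, endpoint, and proxy address and port are placeholders; placing customize at the spec level applies the variables to all secured cluster services:

apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
spec:
  clusterName: production-cluster-1
  centralEndpoint: acs-data-abc123.acs.rhcloud.com:443   # Central API Endpoint from the Red Hat Hybrid Cloud Console
  customize:
    envVars:
      - name: HTTP_PROXY
        value: http://egress-proxy.stackrox.svc:3128     # placeholder proxy address and port
      - name: HTTPS_PROXY
        value: http://egress-proxy.stackrox.svc:3128
      - name: NO_PROXY
        value: .stackrox.svc                             # internal services bypass the proxy

For the verification steps above, one quick way to inspect the Sensor logs for connection problems with Central (assuming the default stackrox namespace) is:

oc logs -n stackrox deployment/sensor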
[ "export ROX_API_TOKEN=<api_token>", "export ROX_CENTRAL_ADDRESS=<address>:<port_number>", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate <cluster_init_bundle_name> --output cluster_init_bundle.yaml", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central init-bundles generate <cluster_init_bundle_name> --output-secrets cluster_init_bundle.yaml", "export ROX_API_TOKEN=<api_token>", "export ROX_CENTRAL_ADDRESS=<address>:<port_number>", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central crs generate <crs_name> \\ 1 --output <file_name> 2", "INFO: Successfully generated new CRS INFO: INFO: Name: test-crs INFO: Created at: 2025-02-26T19:07:21Z INFO: Expires at: 2026-02-26T19:07:00Z INFO: Created By: sample-token INFO: ID: 9214a63f-7e0e-485a-baae-0757b0860ac9 This is a StackRox Cluster Registration Secret (CRS). It is used for setting up StackRox secured clusters. NOTE: This file contains secret data that allows connecting new secured clusters to central, and needs to be handled and stored accordingly. apiVersion: v1 data: crs: EXAMPLEZXlKMlpYSnphVzl1SWpveExDSkRRWE1pT2xzaUxTMHRMUzFDUlVkSlRpQkRSVkpVU1VaSlEwREXAMPLE= kind: Secret metadata: annotations: crs.platform.stackrox.io/created-at: \"2025-02-26T19:07:21.800414339Z\" crs.platform.stackrox.io/expires-at: \"2026-02-26T19:07:00Z\" crs.platform.stackrox.io/id: 9214a63f-7e0e-485a-baae-0757b0860ac9 crs.platform.stackrox.io/name: test-crs creationTimestamp: null name: cluster-registration-secret INFO: Then CRS needs to be stored securely, since it contains secrets. INFO: It is not possible to retrieve previously generated CRSs.", "oc create -f <init_bundle.yaml> \\ 1 -n <stackrox> 2", "oc create -f <file_name.yaml> \\ 1 -n <stackrox> 2", "helm repo add rhacs https://mirror.openshift.com/pub/rhacs/charts/", "helm search repo -l rhacs/", "helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <path_to_cluster_init_bundle.yaml> \\ 1 -f <path_to_pull_secret.yaml> \\ 2 --set clusterName=<name_of_the_secured_cluster> --set centralEndpoint=<endpoint_of_central_service> \\ 3 --set imagePullSecrets.username=<your redhat.com username> \\ 4 --set imagePullSecrets.password=<your redhat.com password> 5", "helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <path_to_cluster_init_bundle.yaml> \\ 1 -f <path_to_pull_secret.yaml> \\ 2 --set clusterName=<name_of_the_secured_cluster> --set centralEndpoint=<endpoint_of_central_service> \\ 3 --set scanner.disable=false 4", "helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services --set-file crs.file=<crs_file_name.yaml> \\ 1 -f <path_to_pull_secret.yaml> \\ 2 --set clusterName=<name_of_the_secured_cluster> --set centralEndpoint=<endpoint_of_central_service> \\ 3 --set scanner.disable=false 4", "customize: envVars: ENV_VAR1: \"value1\" ENV_VAR2: \"value2\"", "helm install -n stackrox --create-namespace stackrox-secured-cluster-services rhacs/secured-cluster-services -f <name_of_cluster_init_bundle.yaml> -f <path_to_values_public.yaml> \\ 1 -f <path_to_values_private.yaml> \\ 2 --set imagePullSecrets.username=<username> \\ 3 --set imagePullSecrets.password=<password> 4", "helm install ... 
-f <(echo \"USDINIT_BUNDLE_YAML_SECRET\") 1", "helm upgrade -n stackrox stackrox-secured-cluster-services rhacs/secured-cluster-services --reuse-values \\ 1 -f <path_to_values_public.yaml> -f <path_to_values_private.yaml>", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Linux/roxctlUSD{arch}\"", "chmod +x roxctl", "echo USDPATH", "roxctl version", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Darwin/roxctlUSD{arch}\"", "xattr -c roxctl", "chmod +x roxctl", "echo USDPATH", "roxctl version", "curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.7.0/bin/Windows/roxctl.exe", "roxctl version", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/sensor.sh", "roxctl sensor generate openshift --openshift-version <ocp_version> --name <cluster_name> --central \"USDROX_ENDPOINT\" 1", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/sensor.sh", "oc get pod -n stackrox -w", "kubectl get pod -n stackrox -w", "proxy collector customize: envVars: - name: HTTP_PROXY value: http://egress-proxy.stackrox.svc:xxxx 1 - name: HTTPS_PROXY value: http://egress-proxy.stackrox.svc:xxxx 2 - name: NO_PROXY value: .stackrox.svc 3" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/rhacs_cloud_service/setting-up-rhacs-cloud-service-with-red-hat-openshift-secured-clusters
4.23. bltk
4.23. bltk 4.23.1. RHBA-2011:1227 - bltk bug fix update An updated bltk package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The bltk (Battery Life Tool Kit) package includes binaries and scripts to test battery life. Bug Fixes BZ# 618308 Prior to this update, the bltk tree was corrupted. As a result, the bltk_report script failed. This update modifies the settings of the bltk root path. Now, the report script works as expected. BZ# 679028 Prior to this update, bltk could be installed without requiring the gnuplot binary. As a result, the bltk_plot script exited with an error message when the gnuplot package was not installed and the charts were shown from measured data. This update requires the gnuplot package for its installation. Now, the bltk_plot script no longer exits with an error. All bltk users are advised to upgrade to this updated package, which fixes these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/bltk
Chapter 2. Downloading log files and diagnostic information using must-gather
Chapter 2. Downloading log files and diagnostic information using must-gather If Red Hat OpenShift Data Foundation is unable to automatically resolve a problem, use the must-gather tool to collect log files and diagnostic information so that you or Red Hat support can review the problem and determine a solution. Important When Red Hat OpenShift Data Foundation is deployed in external mode, must-gather only collects logs from the OpenShift Data Foundation cluster and does not collect debug data and logs from the external Red Hat Ceph Storage cluster. To collect debug logs from the external Red Hat Ceph Storage cluster, see Red Hat Ceph Storage Troubleshooting guide and contact your Red Hat Ceph Storage Administrator. Prerequisites Optional: If OpenShift Data Foundation is deployed in a disconnected environment, ensure that you mirror the individual must-gather image to the mirror registry available from the disconnected environment. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. <path-to-the-registry-config> Is the path to your registry credentials, by default it is ~/.docker/config.json . --insecure Add this flag only if the mirror registry is insecure. For more information, see the Red Hat Knowledgebase solutions: How to mirror images between Redhat Openshift registries Failed to mirror OpenShift image repository when private registry is insecure Procedure Run the must-gather command from the client connected to the OpenShift Data Foundation cluster: <directory-name> Is the name of the directory where you want to write the data to. Important For a disconnected environment deployment, replace the image in --image parameter with the mirrored must-gather image. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. This collects the following information in the specified directory: All Red Hat OpenShift Data Foundation cluster related Custom Resources (CRs) with their namespaces. Pod logs of all the Red Hat OpenShift Data Foundation related pods. Output of some standard Ceph commands like Status, Cluster health, and others. 2.1. Variations of must-gather-commands If one or more master nodes are not in the Ready state, use --node-name to provide a master node that is Ready so that the must-gather pod can be safely scheduled. If you want to gather information from a specific time: To specify a relative time period for logs gathered, such as within 5 seconds or 2 days, add /usr/bin/gather since=<duration> : To specify a specific time to gather logs after, add /usr/bin/gather since-time=<rfc3339-timestamp> : Replace the example values in these commands as follows: <node-name> If one or more master nodes are not in the Ready state, use this parameter to provide the name of a master node that is still in the Ready state. This avoids scheduling errors by ensuring that the must-gather pod is not scheduled on a master node that is not ready. <directory-name> The directory to store information collected by must-gather . <duration> Specify the period of time to collect information from as a relative duration, for example, 5h (starting from 5 hours ago). <rfc3339-timestamp> Specify the period of time to collect information from as an RFC 3339 timestamp, for example, 2020-11-10T04:00:00+00:00 (starting from 4 am UTC on 11 Nov 2020). 2.2. Running must-gather in modular mode Red Hat OpenShift Data Foundation must-gather can take a long time to run in some environments. 
To avoid this, run must-gather in modular mode and collect only the resources you require using the following command: Replace < -arg> with one or more of the following arguments to specify the resources for which the must-gather logs is required. -o , --odf ODF logs (includes Ceph resources, namespaced resources, clusterscoped resources and Ceph logs) -d , --dr DR logs -n , --noobaa Noobaa logs -c , --ceph Ceph commands and pod logs -cl , --ceph-logs Ceph daemon, kernel, journal logs, and crash reports -ns , --namespaced namespaced resources -cs , --clusterscoped clusterscoped resources -pc , --provider openshift-storage-client logs from a provider/consumer cluster (includes all the logs under operator namespace, pods, deployments, secrets, configmap, and other resources) -h , --help Print help message
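For example, to collect only the Ceph command output and the Noobaa logs, you might run something like the following sketch; replace the image tag with the version that matches your installation:

oc adm must-gather \
  --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.16 \
  -- /usr/bin/gather -c -n

Combining flags in this way limits collection to just those resource groups and can significantly shorten the run time.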
[ "oc image mirror registry.redhat.io/odf4/odf-must-gather-rhel9:v4.16 <local-registry> /odf4/odf-must-gather-rhel9:v4.16 [--registry-config= <path-to-the-registry-config> ] [--insecure=true]", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.16 --dest-dir= <directory-name>", "oc adm must-gather --image=<local-registry>/odf4/odf-must-gather-rhel9:v4.16 --dest-dir= <directory-name>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.16 --dest-dir=_<directory-name>_ --node-name=_<node-name>_", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.16 --dest-dir=_<directory-name>_ /usr/bin/gather since=<duration>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.16 --dest-dir=_<directory-name>_ /usr/bin/gather since-time=<rfc3339-timestamp>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.16 -- /usr/bin/gather <-arg>" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/troubleshooting_openshift_data_foundation/downloading-log-files-and-diagnostic-information_rhodf
B.10.2. RHBA-2012:0735 - corosync bug fix update
B.10.2. RHBA-2012:0735 - corosync bug fix update Updated corosync packages that fix one bug are now available for Red Hat Enterprise Linux 6 Extended Update Support. The corosync packages provide the Corosync Cluster Engine and C Application Programming Interfaces (APIs) for Red Hat Enterprise Linux cluster software. Bug Fix BZ# 828430 Previously, it was not possible to activate or deactivate debug logs at runtime due to memory corruption in the objdb structure. With this update, the debug logging can now be activated or deactivated on runtime, for example with the command "corosync-objctl -w logging.debug=off". All users of corosync are advised to upgrade to these updated packages, which fix this bug.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhba-2012-0735