title | content | commands | url
---|---|---|---|
Chapter 1. Overview | Chapter 1. Overview 1.1. Introduction Cost-Optimized deployments of SAP S/4HANA systems play an important role in S/4HANA migration scenarios, especially in saving costs related to additional nodes. It is also critical if such systems are to be made highly available, in which case, the constraints need to be correctly configured. A typical Cost-Optimized setup for SAP S/4HANA High-Availability consists of 2 distinct components: SAP S/4HANA ASCS/ERS cluster resources Database: SAP HANA with System Replication Please refer to Automating SAP HANA Scale-Up System Replication using the RHEL HA Add-On , for more details. This article focuses on the setup of SAP S/4HANA HA environment where both SAP HANA System Replication and the ASCS and ERS instances are managed by a single cluster. This is done using the RHEL HA add-on and the corresponding HA solutions for SAP, available as part of RHEL for SAP Solutions. Note : Below is the architecture diagram of the example installation of a 2-node cluster setup which this article focuses on, with a separate section on the design and configuration of the additional SAP HANA instances with System Replication. Note that ASCS or SAP HANA primary instances can failover to the other node independently of each other. 1.2. Audience This document is intended for SAP and Red Hat certified or trained administrators and consultants who already have experience setting up highly available solutions using the Red Hat Enterprise Linux (RHEL) HA add-on or other clustering solutions. Access to both SAP Service Marketplace and Red Hat Customer Portal is required to be able to download software and additional documentation. Red Hat Professional Services is highly recommended to set up the cluster and customize the solution to meet the customer's data center requirements, which may be different than the solution presented in this document. 1.3. Concepts This document describes how to set up a Cost-Optimized, two-node cluster solution that conforms to the high availability guidelines established by SAP and Red Hat. It is based on Standalone Enqueue Server 2 (ENSA2), now the default installation in SAP S/4HANA 1809 or newer, on top of RHEL 8 for SAP Solutions or above, and highlights a scale-up SAP HANA instance that supports fully automated failover using SAP HANA System Replication. According to SAP, ENSA2 is the successor to Standalone Enqueue Server 1 (ENSA1). It is a component of the SAP lock concept and manages the lock table. This principle ensures the consistency of data in an ABAP system. During a failover with ENSA1, the ASCS instance is required to "follow" the Enqueue Replication Server (ERS). That is, the HA software had to start the ASCS instance on the host where the ERS instance is currently running. In contrast to ENSA1, the newer ENSA2 model and Enqueue Replicator 2 no longer have these restrictions. For more information on ENSA2, please refer to SAP OSS Note 2630416 - Support for Standalone Enqueue Server 2 . Additionally, the document will also highlight the SAP HANA Scale-Up instance, with fully automated failover using SAP HANA System Replication, where the SAP HANA promotable clone resources will run on each node as per set constraints. This article does NOT cover preparation of the RHEL system for SAP HANA installation, nor the SAP HANA installation procedure. For fast and error-free preparation of the systems for SAP S/4HANA and SAP HANA, we recommend using RHEL System Roles for SAP . 
Configuration of both is considered as a Cost-Optimized SAP S/4HANA with an Automated SAP HANA Scale-Up System Replication Environment. 1.4. Support Policies Please refer to Support Policies for RHEL High Availability Clusters - Management of SAP S/4HANA and Support Policies for RHEL High Availability Clusters - Management of SAP HANA in a Cluster for more details. This solution is supported subject to fulfilling the above policies. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_a_cost-optimized_sap_s4hana_ha_cluster_hana_system_replication_ensa2_using_the_rhel_ha_add-on/asmb_cco_overview_configuring-cost-optimized-sap |
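As a rough illustration of the kind of constraint configuration this overview refers to, the following pcs sketch keeps the ASCS and ERS instance groups on different nodes and orders their movement during a failover. The group names (S4H_ASCS20_group, S4H_ERS29_group) are placeholders rather than values taken from this document; the actual resources and scores for a cost-optimized deployment are defined in the later configuration chapters.

```
# Hypothetical group names; substitute the groups created for your SID and instance numbers.
# Negative colocation score keeps ASCS and ERS apart whenever possible.
pcs constraint colocation add S4H_ERS29_group with S4H_ASCS20_group -5000

# During failover, start ASCS before stopping the old ERS instance.
pcs constraint order start S4H_ASCS20_group then stop S4H_ERS29_group symmetrical=false kind=Optional
```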
Chapter 3. Deploying Kafka components using the AMQ Streams operator | Chapter 3. Deploying Kafka components using the AMQ Streams operator When installed on OpenShift, the AMQ Streams operator makes Kafka components available for installation from the user interface. The following Kafka components are available for installation: Kafka Kafka Connect Kafka MirrorMaker Kafka MirrorMaker 2 Kafka Topic Kafka User Kafka Bridge Kafka Connector Kafka Rebalance You select the component and create an instance. As a minimum, you create a Kafka instance. This procedure describes how to create a Kafka instance using the default settings. You can configure the default installation specification before you perform the installation. The process is the same for creating instances of other Kafka components. Prerequisites The AMQ Streams operator is installed on the OpenShift cluster . Procedure Navigate in the web console to the Operators > Installed Operators page and click AMQ Streams to display the operator details. From Provided APIs , you can create instances of Kafka components. Click Create instance under Kafka to create a Kafka instance. By default, you'll create a Kafka cluster called my-cluster with three Kafka broker nodes and three ZooKeeper nodes. The cluster uses ephemeral storage. Click Create to start the installation of Kafka. Wait until the status changes to Ready . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/getting_started_with_amq_streams_on_openshift/proc-deploying-cluster-operator-kafka-str |
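For readers who prefer to confirm the result from the command line instead of watching the web console, a check along these lines verifies that the default my-cluster instance has reached the Ready condition; the namespace placeholder is an assumption.

```
# Wait for the default Kafka cluster to report Ready (namespace is an assumption).
oc wait kafka/my-cluster --for=condition=Ready --timeout=300s -n <project_namespace>

# Or read the readiness condition directly from the resource status.
oc get kafka my-cluster -n <project_namespace> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
```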
3.3. Defining Key Defaults in Profiles | 3.3. Defining Key Defaults in Profiles When creating certificate profiles, the Key Default must be added before the Subject Key Identifier Default . Certificate System processes the key constraints in the Key Default before creating or applying the Subject Key Identifier Default, so if the key has not been processed yet, setting the key in the subject name fails. For example, an object-signing profile may define both defaults: In the policyset list, then, the Key Default ( p11 ) must be listed before the Subject Key Identifier Default ( p3 ). | [
"policyset.set1.p3.constraint.class_id=noConstraintImpl policyset.set1.p3.constraint.name=No Constraint policyset.set1.p3.default.class_id=subjectKeyIdentifierExtDefaultImpl policyset.set1.p3.default.name=Subject Key Identifier Default policyset.set1.p11.constraint.class_id=keyConstraintImpl policyset.set1.p11.constraint.name=Key Constraint policyset.set1.p11.constraint.params.keyType=RSA policyset.set1.p11.constraint.params.keyParameters=1024,2048,3072,4096 policyset.set1.p11.default.class_id=userKeyDefaultImpl policyset.set1.p11.default.name=Key Default",
"policyset.set1.list=p1,p2, p11,p3 ,p4,p5,p6,p7,p8,p9,p10"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/key-policies-in-profiles |
Chapter 50. Next steps | Chapter 50. Next steps Getting started with decision services Designing a decision service using guided decision tables | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/next_steps_2 |
2.9. Additional Resources | 2.9. Additional Resources This chapter is only intended as an introduction to GRUB. Consult the following resources to discover more about how GRUB works. 2.9.1. Installed Documentation /usr/share/doc/grub- <version-number> / - This directory contains good information about using and configuring GRUB, where <version-number> corresponds to the version of the GRUB package installed. info grub - The GRUB info page contains a tutorial, a user reference manual, a programmer reference manual, and a FAQ document about GRUB and its usage. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-grub-additional-resources |
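A quick way to reach the resources listed above from a shell (the version number in the documentation path varies by installed package):

```
# Browse the installed GRUB documentation directory; the version number differs per system.
ls /usr/share/doc/grub-*/

# Open the GRUB info page with its tutorial, reference manuals, and FAQ.
info grub
```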
14.6. The (Non-Transactional) CarMart Quickstart in Remote Client-Server Mode (JBoss Enterprise Web Server) | 14.6. The (Non-Transactional) CarMart Quickstart in Remote Client-Server Mode (JBoss Enterprise Web Server) The CarMart (non-transactional) quickstart is supported for JBoss Data Grid's Remote Client-Server Mode with the JBoss Enterprise Web Server container. 14.6.1. Build and Deploy the CarMart Quickstart in Remote Client-Server Mode This quickstart accesses Red Hat JBoss Data Grid via Hot Rod. This feature is not available for the Transactional CarMart quickstart. Important This quickstart deploys to JBoss Enterprise Web Server or Tomcat. The application cannot be deployed to JBoss Data Grid because it does not support application deployment. Prerequisites Prerequisites for this procedure are as follows: Obtain the most recent supported JBoss Data Grid Remote Client-Server Mode distribution files from Red Hat . Ensure that the JBoss Data Grid and JBoss Enterprise Application Platform Maven repositories are installed and configured. For details, see Chapter 3, Install and Use the Maven Repositories Add a server element to the Maven settings.xml file. In the id elements within server , add the appropriate tomcat credentials. Procedure 14.10. Build and Deploy the CarMart Quickstart in Remote Client-Server Mode Configure the Standalone File Add the following configuration to the standalone.xml file located in the $JDG_HOME/standalone/configuration/ directory. Add the following configuration within the infinispan subsystem tags: Note If the carcache element already exists in your configuration, replace it with the provided configuration. Start the Container Start the JBoss server instance where your application will be deployed. For Linux: For Windows: Build the Application Use the following command to build your application in the relevant directory: Deploy the Application Use the following command to deploy the application in the relevant directory: 14.6.2. View the CarMart Quickstart in Remote Client-Server Mode The following procedure outlines how to view the CarMart quickstart in Red Hat JBoss Data Grid's Remote Client-Server Mode: Prerequisite The CarMart quickstart must be built and deployed before it can be viewed. Procedure 14.11. View the CarMart Quickstart in Remote Client-Server Mode Visit the following link in a browser window to view the application: 14.6.3. Remove the CarMart Quickstart in Remote Client-Server Mode The following procedure provides directions to remove an already deployed application in Red Hat JBoss Data Grid's Remote Client-Server mode. Procedure 14.12. Remove an Application in Remote Client-Server Mode To remove an application, use the following command from the root directory of this quickstart: | [
"<server> <id>tomcat</id> <username>admin</username> <password>admin</password> </server>",
"<local-cache name=\"carcache\" start=\"EAGER\" batching=\"false\" statistics=\"true\"> <eviction strategy=\"LIRS\" max-entries=\"4\"/> </local-cache>",
"USDJBOSS_EWS_HOME/tomcat7/bin/catalina.sh run",
"USDJBOSS_EWS_HOME\\tomcat7\\bin\\catalina.bat run",
"mvn clean package -Premote-tomcat",
"mvn tomcat:deploy -Premote-tomcat",
"http://localhost:8080/jboss-carmart",
"mvn tomcat:undeploy -Premote-tomcat"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-the_non-transactional_carmart_quickstart_in_remote_client-server_mode_jboss_enterprise_web_server |
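As a quick sanity check after the deploy step, a request such as the following confirms that the application answers at the URL given in the procedure; it assumes the container is running on the local host with the default port.

```
# Check that the deployed quickstart responds; the URL is taken from the procedure above.
curl -I http://localhost:8080/jboss-carmart
```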
Chapter 7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure | Chapter 7. Uninstalling a cluster on vSphere that uses installer-provisioned infrastructure You can remove a cluster that you deployed in your VMware vSphere instance by using installer-provisioned infrastructure. Note When you run the openshift-install destroy cluster command to uninstall OpenShift Container Platform, vSphere volumes are not automatically deleted. The cluster administrator must manually find the vSphere volumes and delete them. 7.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_vsphere/uninstalling-cluster-vsphere-installer-provisioned |
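Because the installation program needs the metadata.json file from the installation directory, a small pre-flight check before destroying the cluster can save a failed run; the debug log level shown is one of the alternatives listed in the procedure.

```
# Confirm the cluster definition files are still present before destroying the cluster.
ls <installation_directory>/metadata.json

# Run the destroy with more verbose output than the default info level.
./openshift-install destroy cluster --dir <installation_directory> --log-level debug
```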
Chapter 2. Distributed tracing architecture | Chapter 2. Distributed tracing architecture 2.1. Distributed tracing architecture Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. Red Hat OpenShift distributed tracing platform lets you perform distributed tracing, which records the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together - usually executed in different processes or hosts - to understand a whole chain of events in a distributed transaction. Developers can visualize call flows in large microservice architectures with distributed tracing. It is valuable for understanding serialization, parallelism, and sources of latency. Red Hat OpenShift distributed tracing platform records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace is comprised of one or more spans. A span represents a logical unit of work in Red Hat OpenShift distributed tracing platform that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans may be nested and ordered to model causal relationships. 2.1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis 2.1.2. Red Hat OpenShift distributed tracing platform features Red Hat OpenShift distributed tracing platform provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing platform data from the Kiali console. High scalability - The distributed tracing platform back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing platform has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 2.1.3. Red Hat OpenShift distributed tracing platform architecture Red Hat OpenShift distributed tracing platform is made up of several components that work together to collect, store, and display tracing data. Red Hat OpenShift distributed tracing platform (Tempo) - This component is based on the open source Grafana Tempo project . Gateway - The Gateway handles authentication, authorization, and forwarding requests to the Distributor or Query front-end service. Distributor - The Distributor accepts spans in multiple formats including Jaeger, OpenTelemetry, and Zipkin. It routes spans to Ingesters by hashing the traceID and using a distributed consistent hash ring. 
Ingester - The Ingester batches a trace into blocks, creates bloom filters and indexes, and then flushes it all to the back end. Query Frontend - The Query Frontend is responsible for sharding the search space for an incoming query. The search query is then sent to the Queriers. The Query Frontend deployment exposes the Jaeger UI through the Tempo Query sidecar. Querier - The Querier is responsible for finding the requested trace ID in either the Ingesters or the back-end storage. Depending on parameters, it can query the Ingesters and pull Bloom indexes from the back end to search blocks in object storage. Compactor - The Compactors stream blocks to and from the back-end storage to reduce the total number of blocks. Red Hat build of OpenTelemetry - This component is based on the open source OpenTelemetry project . OpenTelemetry Collector - The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data. The OpenTelemetry Collector supports open-source observability data formats, for example, Jaeger and Prometheus, sending to one or more open-source or commercial back-ends. The Collector is the default location instrumentation libraries export their telemetry data. Red Hat OpenShift distributed tracing platform (Jaeger) - This component is based on the open source Jaeger project . Important The Red Hat OpenShift distributed tracing platform (Jaeger) is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Client (Jaeger client, Tracer, Reporter, instrumented application, client libraries)- The distributed tracing platform (Jaeger) clients are language-specific implementations of the OpenTracing API. They might be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Agent (Jaeger agent, Server Queue, Processor Workers) - The distributed tracing platform (Jaeger) agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes. Jaeger Collector (Collector, Queue, Workers) - Similar to the Jaeger agent, the Jaeger Collector receives spans and places them in an internal queue for processing. This allows the Jaeger Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. 
Red Hat OpenShift distributed tracing platform (Jaeger) has a pluggable mechanism for span storage. Red Hat OpenShift distributed tracing platform (Jaeger) supports the Elasticsearch storage. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Red Hat OpenShift distributed tracing platform can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend. Jaeger Console - With the Red Hat OpenShift distributed tracing platform (Jaeger) user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. 2.1.4. Additional resources Red Hat build of OpenTelemetry | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/distributed_tracing/distributed-tracing-architecture |
2.2.6. Securing FTP | 2.2.6. Securing FTP The File Transfer Protocol ( FTP ) is an older TCP protocol designed to transfer files over a network. Because all transactions with the server, including user authentication, are unencrypted, it is considered an insecure protocol and should be carefully configured. Red Hat Enterprise Linux provides three FTP servers. gssftpd - A Kerberos-aware xinetd -based FTP daemon that does not transmit authentication information over the network. Red Hat Content Accelerator ( tux ) - A kernel-space Web server with FTP capabilities. vsftpd - A standalone, security-oriented implementation of the FTP service. The following security guidelines are for setting up the vsftpd FTP service. 2.2.6.1. FTP Greeting Banner Before submitting a user name and password, all users are presented with a greeting banner. By default, this banner includes version information useful to attackers trying to identify weaknesses in a system. To change the greeting banner for vsftpd , add the following directive to the /etc/vsftpd/vsftpd.conf file: Replace <insert_greeting_here> in the above directive with the text of the greeting message. For multi-line banners, it is best to use a banner file. To simplify management of multiple banners, place all banners in a new directory called /etc/banners/ . The banner file for FTP connections in this example is /etc/banners/ftp.msg . Below is an example of what such a file may look like: Note It is not necessary to begin each line of the file with 220 as specified in Section 2.2.1.1.1, "TCP Wrappers and Connection Banners" . To reference this greeting banner file for vsftpd , add the following directive to the /etc/vsftpd/vsftpd.conf file: It is also possible to send additional banners to incoming connections using TCP Wrappers as described in Section 2.2.1.1.1, "TCP Wrappers and Connection Banners" . | [
"ftpd_banner= <insert_greeting_here>",
"######### Hello, all activity on ftp.example.com is logged. #########",
"banner_file=/etc/banners/ftp.msg"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-server_security-securing_ftp |
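Putting the banner steps together, a minimal sketch of creating the banner directory and file and pointing vsftpd at it might look like the following; restarting the service afterwards is an assumption, not something the section states.

```
# Create the banner directory and the FTP banner file described above.
mkdir -p /etc/banners
cat > /etc/banners/ftp.msg << 'EOF'
######### Hello, all activity on ftp.example.com is logged. #########
EOF

# Reference the banner file from the vsftpd configuration.
echo "banner_file=/etc/banners/ftp.msg" >> /etc/vsftpd/vsftpd.conf

# Assumed: restart vsftpd so the new banner takes effect (RHEL 6 init script).
service vsftpd restart
```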
Chapter 22. Tracing Routes | Chapter 22. Tracing Routes Debugging a route often involves solving one of two problems: A message was improperly transformed. A message failed to reach its destination endpoint. Tracing one or more test messages through the route is the easiest way to discover the source of such problems. The tooling's route tracing feature enables you to monitor the path a message takes through a route and see how the message is transformed as it passes from processor to processor. The Diagram View displays a graphical representation of the route, which enables you to see the path a message takes through it. For each processor in a route, it also displays the average processing time, in milliseconds, for all messages processed since route start-up and the number of messages processed since route start-up. The Messages View displays the messages processed by a JMS destination or route endpoint selected in the JMX Navigator tree. Selecting an individual message trace in the Messages View displays the full details and content of the message in the Properties view and highlights the corresponding node in the Diagram View . Tracing messages through a route involves the following steps: Section 22.1, "Creating test messages for route tracing" Section 22.2, "Activating route tracing" Section 22.3, "Tracing messages through a routing context" Section 22.4, "Deactivating route tracing" 22.1. Creating test messages for route tracing Overview Route tracing works with any kind of message structure. The Fuse Message wizard creates an empty .xml message, leaving the structuring of the message entirely up to you. Note If the folder where you want to store the test messages does not exist, you need to create it before you create the messages. Creating a new folder to store test messages To create a new folder: In the Project Explorer view, right-click the project root to open the context menu. Select New Folder to open the New Folder wizard. The project root appears in the Enter or select the parent folder field. Expand the nodes in the graphical representation of the project's hierarchy, and select the node you want to be the parent folder. In the Folder name field, enter a name for the new folder. Click Finish . The new folder appears in the Project Explorer view, under the selected parent folder. Note If the new folder does not appear, right-click the parent folder and select Refresh . Creating a test message To create a test message: In the Project Explorer view, right-click the project to open the context menu. Select New Fuse Message to open the New File wizard. Expand the nodes in the graphical representation of the project's hierarchy, and select the folder in which you want to store the new test message. In the File name field, enter a name for the message, or accept the default ( message.xml ). Click Finish . The new message opens in the XML editor. Enter the message contents, both body and header text. Note You may see the warning, No grammar constraints (DTD or XML Schema) referenced in the document , depending on the header text you entered. You can safely ignore this warning. Related topics Section 22.3, "Tracing messages through a routing context" 22.2. Activating route tracing Overview You must activate route tracing for the routing context before you can trace messages through that routing context. Procedure To activate tracing on a routing context: In the JMX Navigator view, select the running routing context on which you want to start tracing.
Note You can select any route in the context to start tracing on the entire context. Right-click the selected routing context to open the context menu, and then select Start Tracing to start the trace. If Stop Tracing Context is enabled on the context menu, then tracing is already active. Related topics Section 22.3, "Tracing messages through a routing context" Section 22.4, "Deactivating route tracing" 22.3. Tracing messages through a routing context Overview The best way to see what is happening in a routing context is to watch what happens to a message at each stop along the way. The tooling provides a mechanism for dropping messages into a running routing context and tracing the path the messages take through it. Procedure To trace messages through a routing context: Create one or more test messages as described in Section 22.1, "Creating test messages for route tracing" . In the Project Explorer view, right-click the project's Camel context file to open the context menu, and select Run As Local Camel Context (without Tests) . Note Do not run it as Local Camel Context unless you have created a comprehensive JUnit test for the project. Activate tracing for the running routing context as described in Section 22.2, "Activating route tracing" . Drag one of the test messages from the Project Explorer view onto the routing context's starting point in the JMX Navigator view. In the JMX Navigator view, select the routing context being traced. The tooling populates the Messages View with message instances that represent the message at each stage in the traced context. The Diagram View displays a graphical representation of the selected routing context. In the Messages View , select one of the message instances. The Properties view displays the details and content of the message instance. In the Diagram View , the route step corresponding to the selected message instance is highlighted. If the route step is a processing step, the tooling tags the exiting path with timing and processing metrics. Repeat this procedure as needed. Related topics Section 22.1, "Creating test messages for route tracing" Section 22.2, "Activating route tracing" Section 22.4, "Deactivating route tracing" 22.4. Deactivating route tracing Overview When you are finished debugging the routes in a routing context, you should deactivate tracing. Important Deactivating tracing stops tracing and flushes the trace data for all of the routes in the routing context. This means that you cannot review any past tracing sessions. Procedure To stop tracing for a routing context: In the JMX Navigator view, select the running routing context for which you want to deactivate tracing. Note You can select any route in the context to stop tracing for the context. Right-click the selected routing context to open the context menu, and then select Stop Tracing Context . If Start Tracing appears on the context menu, tracing is not activated for the routing context. Related topics Section 22.2, "Activating route tracing" Section 22.3, "Tracing messages through a routing context" | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/RiderTracing |
Chapter 3. Customizing the Cryostat dashboard | Chapter 3. Customizing the Cryostat dashboard The Cryostat Dashboard displays information about target Java Virtual Machines (JVMs) in the form of cards on the user interface. You can configure the cards and customize different dashboard layouts according to your requirements. 3.1. Creating a custom dashboard layout Create customized layouts to organize the display of dashboard cards, according to your requirements. You can organize the cards in different configurations and create custom views to display the data and specific metrics that are most relevant to your current requirements. You can add, remove, and arrange the cards and switch between different layouts. You can also create layout templates that you can download, reuse, or share with other users so that they can access the same information and metrics. By using dashboard layouts, you do not need to modify your dashboard manually each time you want to view different information. Prerequisites Created a Cryostat instance in your project. Logged in to your Cryostat web console. Created a target JVM to monitor. Procedure On the Cryostat web console, click Dashboard . On the toolbar, click the layout selector dropdown menu. Click New Layout . Figure 3.1. Creating a new dashboard layout The new layout is assigned a default name. To specify a different name, click the pencil icon beside the name. (Optional): To select an existing template or upload a new one, click the expandable menu on the New Layout button. Figure 3.2. Creating a new dashboard layout by using a template (Optional): To set or download a layout as a template or to clear the layout, click the more options icon ( ... ): Figure 3.3. Setting or downloading a layout as a template or clearing the layout To set the current layout as a template, select Set as template . To download the current layout as a template, select Download as template . The template is downloaded as a .json file. To clear the current layout, select Clear layout . A confirmation dialog then opens. To confirm that you want to permanently clear the current dashboard layout, click Clear . Figure 3.4. Clearing a dashboard layout 3.2. Adding cards to a dashboard layout You can select and configure the cards you want to add to the Cryostat Dashboard . Each card displays a different set of information or metrics about the target JVM you select. Prerequisites Created a Cryostat instance in your project. Logged in to your Cryostat web console. Created a target JVM to monitor. Procedure On the Cryostat web console, click Dashboard . From the Target dropdown menu, select the target JVM whose information you want to view. To add a dashboard card, click the Add card icon. The Dashboard card catalog window opens. From the available cards types, select a card to add to your dashboard layout and click Finish . Repeat this step for each card that you want to add. Note Some cards require additional configuration, for example, the MBeans Metrics Chart card. In this instance, click to access the configuration wizard, specify the values you require, then click Finish . 3.3. Restoring a dashboard layout You can upload previously saved dashboard layouts that are stored in .json files to the Cryostat Dashboard . Prerequisites Created a Cryostat instance in your project. Logged in to your Cryostat web console. Created a target JVM to monitor. Downloaded a saved dashboard layout in .json file format. Procedure On the Cryostat web console, click Dashboard . 
From the Target drop-down menu, select the target JVM whose information you want to view. On the toolbar, click the layout selector drop-down menu. Click the expandable menu on the New Layout button and select Upload Template . Figure 3.5. Uploading a dashboard template Click Upload to browse your local directory for previously saved dashboard layouts. Figure 3.6. Selecting a dashboard template for upload When you have selected the template .json file you want to upload, click Submit . On the toolbar, click the layout selector drop-down menu. Click the expandable menu on the New Layout button and select Choose Template . Figure 3.7. Choosing a dashboard template Scroll to the user-submitted dashboard layouts and click the template name that you uploaded in Step 5 . In the Name field, enter a name for the dashboard layout. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_the_cryostat_dashboard/assembly_customizing-dashboard_con_dashboard-cards |
Chapter 15. Infrastructure [config.openshift.io/v1] | Chapter 15. Infrastructure [config.openshift.io/v1] Description Infrastructure holds cluster-wide information about Infrastructure. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 15.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description cloudConfig object cloudConfig is a reference to a ConfigMap containing the cloud provider configuration file. This configuration file is used to configure the Kubernetes cloud provider integration when using the built-in cloud provider integration or the external cloud controller manager. The namespace for this config map is openshift-config. cloudConfig should only be consumed by the kube_cloud_config controller. The controller is responsible for using the user configuration in the spec for various platforms and combining that with the user provided ConfigMap in this field to create a stitched kube cloud config. The controller generates a ConfigMap kube-cloud-config in openshift-config-managed namespace with the kube cloud config is stored in cloud.conf key. All the clients are expected to use the generated ConfigMap only. platformSpec object platformSpec holds desired information specific to the underlying infrastructure provider. 15.1.2. .spec.cloudConfig Description cloudConfig is a reference to a ConfigMap containing the cloud provider configuration file. This configuration file is used to configure the Kubernetes cloud provider integration when using the built-in cloud provider integration or the external cloud controller manager. The namespace for this config map is openshift-config. cloudConfig should only be consumed by the kube_cloud_config controller. The controller is responsible for using the user configuration in the spec for various platforms and combining that with the user provided ConfigMap in this field to create a stitched kube cloud config. The controller generates a ConfigMap kube-cloud-config in openshift-config-managed namespace with the kube cloud config is stored in cloud.conf key. All the clients are expected to use the generated ConfigMap only. Type object Property Type Description key string Key allows pointing to a specific key/value inside of the configmap. This is useful for logical file references. name string 15.1.3. .spec.platformSpec Description platformSpec holds desired information specific to the underlying infrastructure provider. 
Type object Property Type Description alibabaCloud object AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. aws object AWS contains settings specific to the Amazon Web Services infrastructure provider. azure object Azure contains settings specific to the Azure infrastructure provider. baremetal object BareMetal contains settings specific to the BareMetal platform. equinixMetal object EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. external object ExternalPlatformType represents generic infrastructure provider. Platform-specific components should be supplemented separately. gcp object GCP contains settings specific to the Google Cloud Platform infrastructure provider. ibmcloud object IBMCloud contains settings specific to the IBMCloud infrastructure provider. kubevirt object Kubevirt contains settings specific to the kubevirt infrastructure provider. nutanix object Nutanix contains settings specific to the Nutanix infrastructure provider. openstack object OpenStack contains settings specific to the OpenStack infrastructure provider. ovirt object Ovirt contains settings specific to the oVirt infrastructure provider. powervs object PowerVS contains settings specific to the IBM Power Systems Virtual Servers infrastructure provider. type string type is the underlying infrastructure provider for the cluster. This value controls whether infrastructure automation such as service load balancers, dynamic volume provisioning, machine creation and deletion, and other integrations are enabled. If None, no infrastructure automation is enabled. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "Libvirt", "OpenStack", "VSphere", "oVirt", "KubeVirt", "EquinixMetal", "PowerVS", "AlibabaCloud", "Nutanix" and "None". Individual components may not support all platforms, and must handle unrecognized platforms as None if they do not support that platform. vsphere object VSphere contains settings specific to the VSphere infrastructure provider. 15.1.4. .spec.platformSpec.alibabaCloud Description AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. Type object 15.1.5. .spec.platformSpec.aws Description AWS contains settings specific to the Amazon Web Services infrastructure provider. Type object Property Type Description serviceEndpoints array serviceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. serviceEndpoints[] object AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. 15.1.6. .spec.platformSpec.aws.serviceEndpoints Description serviceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. Type array 15.1.7. .spec.platformSpec.aws.serviceEndpoints[] Description AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. Type object Property Type Description name string name is the name of the AWS service. The list of all the service names can be found at https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html This must be provided and cannot be empty. url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.8. 
.spec.platformSpec.azure Description Azure contains settings specific to the Azure infrastructure provider. Type object 15.1.9. .spec.platformSpec.baremetal Description BareMetal contains settings specific to the BareMetal platform. Type object 15.1.10. .spec.platformSpec.equinixMetal Description EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. Type object 15.1.11. .spec.platformSpec.external Description ExternalPlatformType represents generic infrastructure provider. Platform-specific components should be supplemented separately. Type object Property Type Description platformName string PlatformName holds the arbitrary string representing the infrastructure provider name, expected to be set at the installation time. This field is solely for informational and reporting purposes and is not expected to be used for decision-making. 15.1.12. .spec.platformSpec.gcp Description GCP contains settings specific to the Google Cloud Platform infrastructure provider. Type object 15.1.13. .spec.platformSpec.ibmcloud Description IBMCloud contains settings specific to the IBMCloud infrastructure provider. Type object 15.1.14. .spec.platformSpec.kubevirt Description Kubevirt contains settings specific to the kubevirt infrastructure provider. Type object 15.1.15. .spec.platformSpec.nutanix Description Nutanix contains settings specific to the Nutanix infrastructure provider. Type object Required prismCentral prismElements Property Type Description prismCentral object prismCentral holds the endpoint address and port to access the Nutanix Prism Central. When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. prismElements array prismElements holds one or more endpoint address and port data to access the Nutanix Prism Elements (clusters) of the Nutanix Prism Central. Currently we only support one Prism Element (cluster) for an OpenShift cluster, where all the Nutanix resources (VMs, subnets, volumes, etc.) used in the OpenShift cluster are located. In the future, we may support Nutanix resources (VMs, etc.) spread over multiple Prism Elements (clusters) of the Prism Central. prismElements[] object NutanixPrismElementEndpoint holds the name and endpoint data for a Prism Element (cluster) 15.1.16. .spec.platformSpec.nutanix.prismCentral Description prismCentral holds the endpoint address and port to access the Nutanix Prism Central. When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. Type object Required address port Property Type Description address string address is the endpoint address (DNS name or IP address) of the Nutanix Prism Central or Element (cluster) port integer port is the port number to access the Nutanix Prism Central or Element (cluster) 15.1.17. .spec.platformSpec.nutanix.prismElements Description prismElements holds one or more endpoint address and port data to access the Nutanix Prism Elements (clusters) of the Nutanix Prism Central. Currently we only support one Prism Element (cluster) for an OpenShift cluster, where all the Nutanix resources (VMs, subnets, volumes, etc.) used in the OpenShift cluster are located. In the future, we may support Nutanix resources (VMs, etc.) 
spread over multiple Prism Elements (clusters) of the Prism Central. Type array 15.1.18. .spec.platformSpec.nutanix.prismElements[] Description NutanixPrismElementEndpoint holds the name and endpoint data for a Prism Element (cluster) Type object Required endpoint name Property Type Description endpoint object endpoint holds the endpoint address and port data of the Prism Element (cluster). When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. name string name is the name of the Prism Element (cluster). This value will correspond with the cluster field configured on other resources (eg Machines, PVCs, etc). 15.1.19. .spec.platformSpec.nutanix.prismElements[].endpoint Description endpoint holds the endpoint address and port data of the Prism Element (cluster). When a cluster-wide proxy is installed, by default, this endpoint will be accessed via the proxy. Should you wish for communication with this endpoint not to be proxied, please add the endpoint to the proxy spec.noProxy list. Type object Required address port Property Type Description address string address is the endpoint address (DNS name or IP address) of the Nutanix Prism Central or Element (cluster) port integer port is the port number to access the Nutanix Prism Central or Element (cluster) 15.1.20. .spec.platformSpec.openstack Description OpenStack contains settings specific to the OpenStack infrastructure provider. Type object 15.1.21. .spec.platformSpec.ovirt Description Ovirt contains settings specific to the oVirt infrastructure provider. Type object 15.1.22. .spec.platformSpec.powervs Description PowerVS contains settings specific to the IBM Power Systems Virtual Servers infrastructure provider. Type object Property Type Description serviceEndpoints array serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. serviceEndpoints[] object PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. 15.1.23. .spec.platformSpec.powervs.serviceEndpoints Description serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. Type array 15.1.24. .spec.platformSpec.powervs.serviceEndpoints[] Description PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. Type object Required name url Property Type Description name string name is the name of the Power VS service. Few of the services are IAM - https://cloud.ibm.com/apidocs/iam-identity-token-api ResourceController - https://cloud.ibm.com/apidocs/resource-controller/resource-controller Power Cloud - https://cloud.ibm.com/apidocs/power-cloud url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.25. .spec.platformSpec.vsphere Description VSphere contains settings specific to the VSphere infrastructure provider. Type object Property Type Description failureDomains array failureDomains contains the definition of region, zone and the vCenter topology. If this is omitted failure domains (regions and zones) will not be used. failureDomains[] object VSpherePlatformFailureDomainSpec holds the region and zone failure domain and the vCenter topology of that failure domain. 
nodeNetworking object nodeNetworking contains the definition of internal and external network constraints for assigning the node's networking. If this field is omitted, networking defaults to the legacy address selection behavior which is to only support a single address and return the first one found. vcenters array vcenters holds the connection details for services to communicate with vCenter. Currently, only a single vCenter is supported. --- vcenters[] object VSpherePlatformVCenterSpec stores the vCenter connection fields. This is used by the vSphere CCM. 15.1.26. .spec.platformSpec.vsphere.failureDomains Description failureDomains contains the definition of region, zone and the vCenter topology. If this is omitted failure domains (regions and zones) will not be used. Type array 15.1.27. .spec.platformSpec.vsphere.failureDomains[] Description VSpherePlatformFailureDomainSpec holds the region and zone failure domain and the vCenter topology of that failure domain. Type object Required name region server topology zone Property Type Description name string name defines the arbitrary but unique name of a failure domain. region string region defines the name of a region tag that will be attached to a vCenter datacenter. The tag category in vCenter must be named openshift-region. server string server is the fully-qualified domain name or the IP address of the vCenter server. --- topology object Topology describes a given failure domain using vSphere constructs zone string zone defines the name of a zone tag that will be attached to a vCenter cluster. The tag category in vCenter must be named openshift-zone. 15.1.28. .spec.platformSpec.vsphere.failureDomains[].topology Description Topology describes a given failure domain using vSphere constructs Type object Required computeCluster datacenter datastore networks Property Type Description computeCluster string computeCluster the absolute path of the vCenter cluster in which virtual machine will be located. The absolute path is of the form /<datacenter>/host/<cluster>. The maximum length of the path is 2048 characters. datacenter string datacenter is the name of vCenter datacenter in which virtual machines will be located. The maximum length of the datacenter name is 80 characters. datastore string datastore is the absolute path of the datastore in which the virtual machine is located. The absolute path is of the form /<datacenter>/datastore/<datastore> The maximum length of the path is 2048 characters. folder string folder is the absolute path of the folder where virtual machines are located. The absolute path is of the form /<datacenter>/vm/<folder>. The maximum length of the path is 2048 characters. networks array (string) networks is the list of port group network names within this failure domain. Currently, we only support a single interface per RHCOS virtual machine. The available networks (port groups) can be listed using govc ls 'network/*' The single interface should be the absolute path of the form /<datacenter>/network/<portgroup>. resourcePool string resourcePool is the absolute path of the resource pool where virtual machines will be created. The absolute path is of the form /<datacenter>/host/<cluster>/Resources/<resourcepool>. The maximum length of the path is 2048 characters. 15.1.29. .spec.platformSpec.vsphere.nodeNetworking Description nodeNetworking contains the definition of internal and external network constraints for assigning the node's networking. 
If this field is omitted, networking defaults to the legacy address selection behavior which is to only support a single address and return the first one found. Type object Property Type Description external object external represents the network configuration of the node that is externally routable. internal object internal represents the network configuration of the node that is routable only within the cluster. 15.1.30. .spec.platformSpec.vsphere.nodeNetworking.external Description external represents the network configuration of the node that is externally routable. Type object Property Type Description excludeNetworkSubnetCidr array (string) excludeNetworkSubnetCidr IP addresses in subnet ranges will be excluded when selecting the IP address from the VirtualMachine's VM for use in the status.addresses fields. --- network string network VirtualMachine's VM Network names that will be used to when searching for status.addresses fields. Note that if internal.networkSubnetCIDR and external.networkSubnetCIDR are not set, then the vNIC associated to this network must only have a single IP address assigned to it. The available networks (port groups) can be listed using govc ls 'network/*' networkSubnetCidr array (string) networkSubnetCidr IP address on VirtualMachine's network interfaces included in the fields' CIDRs that will be used in respective status.addresses fields. --- 15.1.31. .spec.platformSpec.vsphere.nodeNetworking.internal Description internal represents the network configuration of the node that is routable only within the cluster. Type object Property Type Description excludeNetworkSubnetCidr array (string) excludeNetworkSubnetCidr IP addresses in subnet ranges will be excluded when selecting the IP address from the VirtualMachine's VM for use in the status.addresses fields. --- network string network VirtualMachine's VM Network names that will be used to when searching for status.addresses fields. Note that if internal.networkSubnetCIDR and external.networkSubnetCIDR are not set, then the vNIC associated to this network must only have a single IP address assigned to it. The available networks (port groups) can be listed using govc ls 'network/*' networkSubnetCidr array (string) networkSubnetCidr IP address on VirtualMachine's network interfaces included in the fields' CIDRs that will be used in respective status.addresses fields. --- 15.1.32. .spec.platformSpec.vsphere.vcenters Description vcenters holds the connection details for services to communicate with vCenter. Currently, only a single vCenter is supported. --- Type array 15.1.33. .spec.platformSpec.vsphere.vcenters[] Description VSpherePlatformVCenterSpec stores the vCenter connection fields. This is used by the vSphere CCM. Type object Required datacenters server Property Type Description datacenters array (string) The vCenter Datacenters in which the RHCOS vm guests are located. This field will be used by the Cloud Controller Manager. Each datacenter listed here should be used within a topology. port integer port is the TCP port that will be used to communicate to the vCenter endpoint. When omitted, this means the user has no opinion and it is up to the platform to choose a sensible default, which is subject to change over time. server string server is the fully-qualified domain name or the IP address of the vCenter server. --- 15.1.34. .status Description status holds observed values from the cluster. They may not be overridden. 
Type object Property Type Description apiServerInternalURI string apiServerInternalURL is a valid URI with scheme 'https', address and optionally a port (defaulting to 443). apiServerInternalURL can be used by components like kubelets, to contact the Kubernetes API server using the infrastructure provider rather than Kubernetes networking. apiServerURL string apiServerURL is a valid URI with scheme 'https', address and optionally a port (defaulting to 443). apiServerURL can be used by components like the web console to tell users where to find the Kubernetes API. controlPlaneTopology string controlPlaneTopology expresses the expectations for operands that normally run on control nodes. The default is 'HighlyAvailable', which represents the behavior operators have in a "normal" cluster. The 'SingleReplica' mode will be used in single-node deployments and the operators should not configure the operand for highly-available operation The 'External' mode indicates that the control plane is hosted externally to the cluster and that its components are not visible within the cluster. etcdDiscoveryDomain string etcdDiscoveryDomain is the domain used to fetch the SRV records for discovering etcd servers and clients. For more info: https://github.com/etcd-io/etcd/blob/329be66e8b3f9e2e6af83c123ff89297e49ebd15/Documentation/op-guide/clustering.md#dns-discovery deprecated: as of 4.7, this field is no longer set or honored. It will be removed in a future release. infrastructureName string infrastructureName uniquely identifies a cluster with a human friendly name. Once set it should not be changed. Must be of max length 27 and must have only alphanumeric or hyphen characters. infrastructureTopology string infrastructureTopology expresses the expectations for infrastructure services that do not run on control plane nodes, usually indicated by a node selector for a role value other than master . The default is 'HighlyAvailable', which represents the behavior operators have in a "normal" cluster. The 'SingleReplica' mode will be used in single-node deployments and the operators should not configure the operand for highly-available operation NOTE: External topology mode is not applicable for this field. platform string platform is the underlying infrastructure provider for the cluster. Deprecated: Use platformStatus.type instead. platformStatus object platformStatus holds status information specific to the underlying infrastructure provider. 15.1.35. .status.platformStatus Description platformStatus holds status information specific to the underlying infrastructure provider. Type object Property Type Description alibabaCloud object AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. aws object AWS contains settings specific to the Amazon Web Services infrastructure provider. azure object Azure contains settings specific to the Azure infrastructure provider. baremetal object BareMetal contains settings specific to the BareMetal platform. equinixMetal object EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. external object External contains settings specific to the generic External infrastructure provider. gcp object GCP contains settings specific to the Google Cloud Platform infrastructure provider. ibmcloud object IBMCloud contains settings specific to the IBMCloud infrastructure provider. kubevirt object Kubevirt contains settings specific to the kubevirt infrastructure provider. 
nutanix object Nutanix contains settings specific to the Nutanix infrastructure provider. openstack object OpenStack contains settings specific to the OpenStack infrastructure provider. ovirt object Ovirt contains settings specific to the oVirt infrastructure provider. powervs object PowerVS contains settings specific to the Power Systems Virtual Servers infrastructure provider. type string type is the underlying infrastructure provider for the cluster. This value controls whether infrastructure automation such as service load balancers, dynamic volume provisioning, machine creation and deletion, and other integrations are enabled. If None, no infrastructure automation is enabled. Allowed values are "AWS", "Azure", "BareMetal", "GCP", "Libvirt", "OpenStack", "VSphere", "oVirt", "EquinixMetal", "PowerVS", "AlibabaCloud", "Nutanix" and "None". Individual components may not support all platforms, and must handle unrecognized platforms as None if they do not support that platform. This value will be synced with to the status.platform and status.platformStatus.type . Currently this value cannot be changed once set. vsphere object VSphere contains settings specific to the VSphere infrastructure provider. 15.1.36. .status.platformStatus.alibabaCloud Description AlibabaCloud contains settings specific to the Alibaba Cloud infrastructure provider. Type object Required region Property Type Description region string region specifies the region for Alibaba Cloud resources created for the cluster. resourceGroupID string resourceGroupID is the ID of the resource group for the cluster. resourceTags array resourceTags is a list of additional tags to apply to Alibaba Cloud resources created for the cluster. resourceTags[] object AlibabaCloudResourceTag is the set of tags to add to apply to resources. 15.1.37. .status.platformStatus.alibabaCloud.resourceTags Description resourceTags is a list of additional tags to apply to Alibaba Cloud resources created for the cluster. Type array 15.1.38. .status.platformStatus.alibabaCloud.resourceTags[] Description AlibabaCloudResourceTag is the set of tags to add to apply to resources. Type object Required key value Property Type Description key string key is the key of the tag. value string value is the value of the tag. 15.1.39. .status.platformStatus.aws Description AWS contains settings specific to the Amazon Web Services infrastructure provider. Type object Property Type Description region string region holds the default AWS region for new AWS resources created by the cluster. resourceTags array resourceTags is a list of additional tags to apply to AWS resources created for the cluster. See https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html for information on tagging AWS resources. AWS supports a maximum of 50 tags per resource. OpenShift reserves 25 tags for its use, leaving 25 tags available for the user. resourceTags[] object AWSResourceTag is a tag to apply to AWS resources created for the cluster. serviceEndpoints array ServiceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. serviceEndpoints[] object AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. 15.1.40. .status.platformStatus.aws.resourceTags Description resourceTags is a list of additional tags to apply to AWS resources created for the cluster. 
See https://docs.aws.amazon.com/general/latest/gr/aws_tagging.html for information on tagging AWS resources. AWS supports a maximum of 50 tags per resource. OpenShift reserves 25 tags for its use, leaving 25 tags available for the user. Type array 15.1.41. .status.platformStatus.aws.resourceTags[] Description AWSResourceTag is a tag to apply to AWS resources created for the cluster. Type object Required key value Property Type Description key string key is the key of the tag value string value is the value of the tag. Some AWS service do not support empty values. Since tags are added to resources in many services, the length of the tag value must meet the requirements of all services. 15.1.42. .status.platformStatus.aws.serviceEndpoints Description ServiceEndpoints list contains custom endpoints which will override default service endpoint of AWS Services. There must be only one ServiceEndpoint for a service. Type array 15.1.43. .status.platformStatus.aws.serviceEndpoints[] Description AWSServiceEndpoint store the configuration of a custom url to override existing defaults of AWS Services. Type object Property Type Description name string name is the name of the AWS service. The list of all the service names can be found at https://docs.aws.amazon.com/general/latest/gr/aws-service-information.html This must be provided and cannot be empty. url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.44. .status.platformStatus.azure Description Azure contains settings specific to the Azure infrastructure provider. Type object Property Type Description armEndpoint string armEndpoint specifies a URL to use for resource management in non-soverign clouds such as Azure Stack. cloudName string cloudName is the name of the Azure cloud environment which can be used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the value is equal to AzurePublicCloud . networkResourceGroupName string networkResourceGroupName is the Resource Group for network resources like the Virtual Network and Subnets used by the cluster. If empty, the value is same as ResourceGroupName. resourceGroupName string resourceGroupName is the Resource Group for new Azure resources created for the cluster. resourceTags array resourceTags is a list of additional tags to apply to Azure resources created for the cluster. See https://docs.microsoft.com/en-us/rest/api/resources/tags for information on tagging Azure resources. Due to limitations on Automation, Content Delivery Network, DNS Azure resources, a maximum of 15 tags may be applied. OpenShift reserves 5 tags for internal use, allowing 10 tags for user configuration. resourceTags[] object AzureResourceTag is a tag to apply to Azure resources created for the cluster. 15.1.45. .status.platformStatus.azure.resourceTags Description resourceTags is a list of additional tags to apply to Azure resources created for the cluster. See https://docs.microsoft.com/en-us/rest/api/resources/tags for information on tagging Azure resources. Due to limitations on Automation, Content Delivery Network, DNS Azure resources, a maximum of 15 tags may be applied. OpenShift reserves 5 tags for internal use, allowing 10 tags for user configuration. Type array 15.1.46. .status.platformStatus.azure.resourceTags[] Description AzureResourceTag is a tag to apply to Azure resources created for the cluster. 
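As an illustrative check only (assuming the oc client and a cluster installed on Azure), the tags recorded in platformStatus can be read back directly from the Infrastructure object; the output shape shown in the comment is invented for the example, not captured from a real cluster.

$ oc get infrastructure cluster -o jsonpath='{.status.platformStatus.azure.resourceTags}{"\n"}'
# Illustrative output shape (values invented): [{"key":"team","value":"ocp-platform"}]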
Type object Required key value Property Type Description key string key is the key part of the tag. A tag key can have a maximum of 128 characters and cannot be empty. Key must begin with a letter, end with a letter, number or underscore, and must contain only alphanumeric characters and the following special characters _ . - . value string value is the value part of the tag. A tag value can have a maximum of 256 characters and cannot be empty. Value must contain only alphanumeric characters and the following special characters _ + , - . / : ; < = > ? @ . 15.1.47. .status.platformStatus.baremetal Description BareMetal contains settings specific to the BareMetal platform. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for BareMetal deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.1.48. .status.platformStatus.equinixMetal Description EquinixMetal contains settings specific to the Equinix Metal infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. 15.1.49. .status.platformStatus.external Description External contains settings specific to the generic External infrastructure provider. Type object 15.1.50. .status.platformStatus.gcp Description GCP contains settings specific to the Google Cloud Platform infrastructure provider. 
Type object Property Type Description projectID string resourceGroupName is the Project ID for new GCP resources created for the cluster. region string region holds the region for new GCP resources created for the cluster. 15.1.51. .status.platformStatus.ibmcloud Description IBMCloud contains settings specific to the IBMCloud infrastructure provider. Type object Property Type Description cisInstanceCRN string CISInstanceCRN is the CRN of the Cloud Internet Services instance managing the DNS zone for the cluster's base domain dnsInstanceCRN string DNSInstanceCRN is the CRN of the DNS Services instance managing the DNS zone for the cluster's base domain location string Location is where the cluster has been deployed providerType string ProviderType indicates the type of cluster that was created resourceGroupName string ResourceGroupName is the Resource Group for new IBMCloud resources created for the cluster. 15.1.52. .status.platformStatus.kubevirt Description Kubevirt contains settings specific to the kubevirt infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. 15.1.53. .status.platformStatus.nutanix Description Nutanix contains settings specific to the Nutanix infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. 15.1.54. .status.platformStatus.openstack Description OpenStack contains settings specific to the OpenStack infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. 
It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. cloudName string cloudName is the name of the desired OpenStack cloud in the client configuration file ( clouds.yaml ). ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for OpenStack deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.1.55. .status.platformStatus.ovirt Description Ovirt contains settings specific to the oVirt infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. nodeDNSIP string deprecated: as of 4.6, this field is no longer set or honored. It will be removed in a future release. 15.1.56. .status.platformStatus.powervs Description PowerVS contains settings specific to the Power Systems Virtual Servers infrastructure provider. 
Type object Property Type Description cisInstanceCRN string CISInstanceCRN is the CRN of the Cloud Internet Services instance managing the DNS zone for the cluster's base domain dnsInstanceCRN string DNSInstanceCRN is the CRN of the DNS Services instance managing the DNS zone for the cluster's base domain region string region holds the default Power VS region for new Power VS resources created by the cluster. resourceGroup string resourceGroup is the resource group name for new IBMCloud resources created for a cluster. The resource group specified here will be used by cluster-image-registry-operator to set up a COS Instance in IBMCloud for the cluster registry. More about resource groups can be found here: https://cloud.ibm.com/docs/account?topic=account-rgs . When omitted, the image registry operator won't be able to configure storage, which results in the image registry cluster operator not being in an available state. serviceEndpoints array serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. serviceEndpoints[] object PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. zone string zone holds the default zone for the new Power VS resources created by the cluster. Note: Currently only single-zone OCP clusters are supported 15.1.57. .status.platformStatus.powervs.serviceEndpoints Description serviceEndpoints is a list of custom endpoints which will override the default service endpoints of a Power VS service. Type array 15.1.58. .status.platformStatus.powervs.serviceEndpoints[] Description PowervsServiceEndpoint stores the configuration of a custom url to override existing defaults of PowerVS Services. Type object Required name url Property Type Description name string name is the name of the Power VS service. Few of the services are IAM - https://cloud.ibm.com/apidocs/iam-identity-token-api ResourceController - https://cloud.ibm.com/apidocs/resource-controller/resource-controller Power Cloud - https://cloud.ibm.com/apidocs/power-cloud url string url is fully qualified URI with scheme https, that overrides the default generated endpoint for a client. This must be provided and cannot be empty. 15.1.59. .status.platformStatus.vsphere Description VSphere contains settings specific to the VSphere infrastructure provider. Type object Property Type Description apiServerInternalIP string apiServerInternalIP is an IP address to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. It is the IP that the Infrastructure.status.apiServerInternalURI points to. It is the IP for a self-hosted load balancer in front of the API servers. Deprecated: Use APIServerInternalIPs instead. apiServerInternalIPs array (string) apiServerInternalIPs are the IP addresses to contact the Kubernetes API server that can be used by components inside the cluster, like kubelets using the infrastructure rather than Kubernetes networking. These are the IPs for a self-hosted load balancer in front of the API servers. In dual stack clusters this list contains two IPs otherwise only one. ingressIP string ingressIP is an external IP which routes to the default ingress controller. The IP is a suitable target of a wildcard DNS record used to resolve default route host names. Deprecated: Use IngressIPs instead. 
ingressIPs array (string) ingressIPs are the external IPs which route to the default ingress controller. The IPs are suitable targets of a wildcard DNS record used to resolve default route host names. In dual stack clusters this list contains two IPs otherwise only one. nodeDNSIP string nodeDNSIP is the IP address for the internal DNS used by the nodes. Unlike the one managed by the DNS operator, NodeDNSIP provides name resolution for the nodes themselves. There is no DNS-as-a-service for vSphere deployments. In order to minimize necessary changes to the datacenter DNS, a DNS service is hosted as a static pod to serve those hostnames to the nodes in the cluster. 15.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/infrastructures DELETE : delete collection of Infrastructure GET : list objects of kind Infrastructure POST : create an Infrastructure /apis/config.openshift.io/v1/infrastructures/{name} DELETE : delete an Infrastructure GET : read the specified Infrastructure PATCH : partially update the specified Infrastructure PUT : replace the specified Infrastructure /apis/config.openshift.io/v1/infrastructures/{name}/status GET : read status of the specified Infrastructure PATCH : partially update status of the specified Infrastructure PUT : replace status of the specified Infrastructure 15.2.1. /apis/config.openshift.io/v1/infrastructures Table 15.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Infrastructure Table 15.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. 
limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Infrastructure Table 15.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 15.5. HTTP responses HTTP code Reponse body 200 - OK InfrastructureList schema 401 - Unauthorized Empty HTTP method POST Description create an Infrastructure Table 15.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.7. Body parameters Parameter Type Description body Infrastructure schema Table 15.8. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 202 - Accepted Infrastructure schema 401 - Unauthorized Empty 15.2.2. /apis/config.openshift.io/v1/infrastructures/{name} Table 15.9. Global path parameters Parameter Type Description name string name of the Infrastructure Table 15.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an Infrastructure Table 15.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 15.12. Body parameters Parameter Type Description body DeleteOptions schema Table 15.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Infrastructure Table 15.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.15. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Infrastructure Table 15.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.17. Body parameters Parameter Type Description body Patch schema Table 15.18. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Infrastructure Table 15.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.20. Body parameters Parameter Type Description body Infrastructure schema Table 15.21. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 401 - Unauthorized Empty 15.2.3. /apis/config.openshift.io/v1/infrastructures/{name}/status Table 15.22. Global path parameters Parameter Type Description name string name of the Infrastructure Table 15.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Infrastructure Table 15.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 15.25. HTTP responses HTTP code Reponse body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Infrastructure Table 15.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.27. Body parameters Parameter Type Description body Patch schema Table 15.28. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Infrastructure Table 15.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.30. Body parameters Parameter Type Description body Infrastructure schema Table 15.31. HTTP responses HTTP code Response body 200 - OK Infrastructure schema 201 - Created Infrastructure schema 401 - Unauthorized Empty
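As a practical footnote to the endpoint list above, these operations are usually driven through the oc client rather than raw HTTP. The sketch assumes the default cluster-scoped singleton named cluster; the comments map each command to the endpoint it exercises.

$ oc get infrastructures.config.openshift.io                                    # GET /apis/config.openshift.io/v1/infrastructures
$ oc get infrastructure cluster -o yaml                                         # GET /apis/config.openshift.io/v1/infrastructures/cluster
$ oc get --raw /apis/config.openshift.io/v1/infrastructures/cluster/status      # GET /apis/config.openshift.io/v1/infrastructures/{name}/status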
Chapter 2. Creating the required Alibaba Cloud resources
Before you install OpenShift Container Platform, you must use the Alibaba Cloud console to create a Resource Access Management (RAM) user that has sufficient permissions to install OpenShift Container Platform into your Alibaba Cloud. This user must also have permissions to create new RAM users. You can also configure and use the ccoctl tool to create new credentials for the OpenShift Container Platform components with the permissions that they require. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
2.1. Creating the required RAM user You must have an Alibaba Cloud Resource Access Management (RAM) user for the installation that has sufficient privileges. You can use the Alibaba Cloud Resource Access Management console to create a new user or modify an existing user. Later, you create credentials in OpenShift Container Platform based on this user's permissions. When you configure the RAM user, be sure to consider the following requirements: The user must have an Alibaba Cloud AccessKey ID and AccessKey secret pair. For a new user, you can select Open API Access for the Access Mode when creating the user. This mode generates the required AccessKey pair. For an existing user, you can add an AccessKey pair or you can obtain the AccessKey pair for that user. Note When created, the AccessKey secret is displayed only once. You must immediately save the AccessKey pair because the AccessKey pair is required for API calls. Add the AccessKey ID and secret to the ~/.alibabacloud/credentials file on your local computer. Alibaba Cloud automatically creates this file when you log in to the console. The Cloud Credential Operator (CCO) utility, ccoctl, uses these credentials when processing CredentialsRequest objects. For example: [default] # Default client type = access_key # Certification type: access_key access_key_id = LTAI5t8cefXKmt # Key 1 access_key_secret = wYx56mszAN4Uunfh # Secret 1 Add your AccessKeyID and AccessKeySecret here. The RAM user must have the AdministratorAccess policy to ensure that the account has sufficient permission to create the OpenShift Container Platform cluster. This policy grants permissions to manage all Alibaba Cloud resources. When you attach the AdministratorAccess policy to a RAM user, you grant that user full access to all Alibaba Cloud services and resources. If you do not want to create a user with full access, create a custom policy with the following actions that you can add to your RAM user for installation. These actions are sufficient to install OpenShift Container Platform. Tip You can copy and paste the following JSON code into the Alibaba Cloud console to create a custom policy. For information on creating custom policies, see Create a custom policy in the Alibaba Cloud documentation. Example 2.1.
Example custom policy JSON file { "Version": "1", "Statement": [ { "Action": [ "tag:ListTagResources", "tag:UntagResources" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "vpc:DescribeVpcs", "vpc:DeleteVpc", "vpc:DescribeVSwitches", "vpc:DeleteVSwitch", "vpc:DescribeEipAddresses", "vpc:DescribeNatGateways", "vpc:ReleaseEipAddress", "vpc:DeleteNatGateway", "vpc:DescribeSnatTableEntries", "vpc:CreateSnatEntry", "vpc:AssociateEipAddress", "vpc:ListTagResources", "vpc:TagResources", "vpc:DescribeVSwitchAttributes", "vpc:CreateVSwitch", "vpc:CreateNatGateway", "vpc:DescribeRouteTableList", "vpc:CreateVpc", "vpc:AllocateEipAddress", "vpc:ListEnhanhcedNatGatewayAvailableZones" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "ecs:ModifyInstanceAttribute", "ecs:DescribeSecurityGroups", "ecs:DeleteSecurityGroup", "ecs:DescribeSecurityGroupReferences", "ecs:DescribeSecurityGroupAttribute", "ecs:RevokeSecurityGroup", "ecs:DescribeInstances", "ecs:DeleteInstances", "ecs:DescribeNetworkInterfaces", "ecs:DescribeInstanceRamRole", "ecs:DescribeUserData", "ecs:DescribeDisks", "ecs:ListTagResources", "ecs:AuthorizeSecurityGroup", "ecs:RunInstances", "ecs:TagResources", "ecs:ModifySecurityGroupPolicy", "ecs:CreateSecurityGroup", "ecs:DescribeAvailableResource", "ecs:DescribeRegions", "ecs:AttachInstanceRamRole" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "pvtz:DescribeRegions", "pvtz:DescribeZones", "pvtz:DeleteZone", "pvtz:DeleteZoneRecord", "pvtz:BindZoneVpc", "pvtz:DescribeZoneRecords", "pvtz:AddZoneRecord", "pvtz:SetZoneRecordStatus", "pvtz:DescribeZoneInfo", "pvtz:DescribeSyncEcsHostTask", "pvtz:AddZone" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "slb:DescribeLoadBalancers", "slb:SetLoadBalancerDeleteProtection", "slb:DeleteLoadBalancer", "slb:SetLoadBalancerModificationProtection", "slb:DescribeLoadBalancerAttribute", "slb:AddBackendServers", "slb:DescribeLoadBalancerTCPListenerAttribute", "slb:SetLoadBalancerTCPListenerAttribute", "slb:StartLoadBalancerListener", "slb:CreateLoadBalancerTCPListener", "slb:ListTagResources", "slb:TagResources", "slb:CreateLoadBalancer" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "ram:ListResourceGroups", "ram:DeleteResourceGroup", "ram:ListPolicyAttachments", "ram:DetachPolicy", "ram:GetResourceGroup", "ram:CreateResourceGroup", "ram:DeleteRole", "ram:GetPolicy", "ram:DeletePolicy", "ram:ListPoliciesForRole", "ram:CreateRole", "ram:AttachPolicyToRole", "ram:GetRole", "ram:CreatePolicy", "ram:CreateUser", "ram:DetachPolicyFromRole", "ram:CreatePolicyVersion", "ram:DetachPolicyFromUser", "ram:ListPoliciesForUser", "ram:AttachPolicyToUser", "ram:CreateUser", "ram:GetUser", "ram:DeleteUser", "ram:CreateAccessKey", "ram:ListAccessKeys", "ram:DeleteAccessKey", "ram:ListUsers", "ram:ListPolicyVersions" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "oss:DeleteBucket", "oss:DeleteBucketTagging", "oss:GetBucketTagging", "oss:GetBucketCors", "oss:GetBucketPolicy", "oss:GetBucketLifecycle", "oss:GetBucketReferer", "oss:GetBucketTransferAcceleration", "oss:GetBucketLog", "oss:GetBucketWebSite", "oss:GetBucketInfo", "oss:PutBucketTagging", "oss:PutBucket", "oss:OpenOssService", "oss:ListBuckets", "oss:GetService", "oss:PutBucketACL", "oss:GetBucketLogging", "oss:ListObjects", "oss:GetObject", "oss:PutObject", "oss:DeleteObject" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "alidns:DescribeDomainRecords", "alidns:DeleteDomainRecord", "alidns:DescribeDomains", "alidns:DescribeDomainRecordInfo", "alidns:AddDomainRecord", 
"alidns:SetDomainRecordStatus" ], "Resource": "*", "Effect": "Allow" }, { "Action": "bssapi:CreateInstance", "Resource": "*", "Effect": "Allow" }, { "Action": "ram:PassRole", "Resource": "*", "Effect": "Allow", "Condition": { "StringEquals": { "acs:Service": "ecs.aliyuncs.com" } } } ] } For more information about creating a RAM user and granting permissions, see Create a RAM user and Grant permissions to a RAM user in the Alibaba Cloud documentation. 2.2. Configuring the Cloud Credential Operator utility To assign RAM users and policies that provide long-term RAM AccessKeys (AKs) for each in-cluster component, extract and prepare the Cloud Credential Operator (CCO) utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials 2.3. steps Install a cluster on Alibaba Cloud infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on Alibaba Cloud : You can install a cluster quickly by using the default configuration options. Installing a customized cluster on Alibaba Cloud : The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . | [
"Default client type = access_key # Certification type: access_key access_key_id = LTAI5t8cefXKmt # Key 1 access_key_secret = wYx56mszAN4Uunfh # Secret",
"{ \"Version\": \"1\", \"Statement\": [ { \"Action\": [ \"tag:ListTagResources\", \"tag:UntagResources\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"vpc:DescribeVpcs\", \"vpc:DeleteVpc\", \"vpc:DescribeVSwitches\", \"vpc:DeleteVSwitch\", \"vpc:DescribeEipAddresses\", \"vpc:DescribeNatGateways\", \"vpc:ReleaseEipAddress\", \"vpc:DeleteNatGateway\", \"vpc:DescribeSnatTableEntries\", \"vpc:CreateSnatEntry\", \"vpc:AssociateEipAddress\", \"vpc:ListTagResources\", \"vpc:TagResources\", \"vpc:DescribeVSwitchAttributes\", \"vpc:CreateVSwitch\", \"vpc:CreateNatGateway\", \"vpc:DescribeRouteTableList\", \"vpc:CreateVpc\", \"vpc:AllocateEipAddress\", \"vpc:ListEnhanhcedNatGatewayAvailableZones\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ecs:ModifyInstanceAttribute\", \"ecs:DescribeSecurityGroups\", \"ecs:DeleteSecurityGroup\", \"ecs:DescribeSecurityGroupReferences\", \"ecs:DescribeSecurityGroupAttribute\", \"ecs:RevokeSecurityGroup\", \"ecs:DescribeInstances\", \"ecs:DeleteInstances\", \"ecs:DescribeNetworkInterfaces\", \"ecs:DescribeInstanceRamRole\", \"ecs:DescribeUserData\", \"ecs:DescribeDisks\", \"ecs:ListTagResources\", \"ecs:AuthorizeSecurityGroup\", \"ecs:RunInstances\", \"ecs:TagResources\", \"ecs:ModifySecurityGroupPolicy\", \"ecs:CreateSecurityGroup\", \"ecs:DescribeAvailableResource\", \"ecs:DescribeRegions\", \"ecs:AttachInstanceRamRole\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"pvtz:DescribeRegions\", \"pvtz:DescribeZones\", \"pvtz:DeleteZone\", \"pvtz:DeleteZoneRecord\", \"pvtz:BindZoneVpc\", \"pvtz:DescribeZoneRecords\", \"pvtz:AddZoneRecord\", \"pvtz:SetZoneRecordStatus\", \"pvtz:DescribeZoneInfo\", \"pvtz:DescribeSyncEcsHostTask\", \"pvtz:AddZone\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"slb:DescribeLoadBalancers\", \"slb:SetLoadBalancerDeleteProtection\", \"slb:DeleteLoadBalancer\", \"slb:SetLoadBalancerModificationProtection\", \"slb:DescribeLoadBalancerAttribute\", \"slb:AddBackendServers\", \"slb:DescribeLoadBalancerTCPListenerAttribute\", \"slb:SetLoadBalancerTCPListenerAttribute\", \"slb:StartLoadBalancerListener\", \"slb:CreateLoadBalancerTCPListener\", \"slb:ListTagResources\", \"slb:TagResources\", \"slb:CreateLoadBalancer\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ram:ListResourceGroups\", \"ram:DeleteResourceGroup\", \"ram:ListPolicyAttachments\", \"ram:DetachPolicy\", \"ram:GetResourceGroup\", \"ram:CreateResourceGroup\", \"ram:DeleteRole\", \"ram:GetPolicy\", \"ram:DeletePolicy\", \"ram:ListPoliciesForRole\", \"ram:CreateRole\", \"ram:AttachPolicyToRole\", \"ram:GetRole\", \"ram:CreatePolicy\", \"ram:CreateUser\", \"ram:DetachPolicyFromRole\", \"ram:CreatePolicyVersion\", \"ram:DetachPolicyFromUser\", \"ram:ListPoliciesForUser\", \"ram:AttachPolicyToUser\", \"ram:CreateUser\", \"ram:GetUser\", \"ram:DeleteUser\", \"ram:CreateAccessKey\", \"ram:ListAccessKeys\", \"ram:DeleteAccessKey\", \"ram:ListUsers\", \"ram:ListPolicyVersions\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"oss:DeleteBucket\", \"oss:DeleteBucketTagging\", \"oss:GetBucketTagging\", \"oss:GetBucketCors\", \"oss:GetBucketPolicy\", \"oss:GetBucketLifecycle\", \"oss:GetBucketReferer\", \"oss:GetBucketTransferAcceleration\", \"oss:GetBucketLog\", \"oss:GetBucketWebSite\", \"oss:GetBucketInfo\", \"oss:PutBucketTagging\", \"oss:PutBucket\", \"oss:OpenOssService\", \"oss:ListBuckets\", \"oss:GetService\", \"oss:PutBucketACL\", \"oss:GetBucketLogging\", 
\"oss:ListObjects\", \"oss:GetObject\", \"oss:PutObject\", \"oss:DeleteObject\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"alidns:DescribeDomainRecords\", \"alidns:DeleteDomainRecord\", \"alidns:DescribeDomains\", \"alidns:DescribeDomainRecordInfo\", \"alidns:AddDomainRecord\", \"alidns:SetDomainRecordStatus\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"bssapi:CreateInstance\", \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"ram:PassRole\", \"Resource\": \"*\", \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\": { \"acs:Service\": \"ecs.aliyuncs.com\" } } } ] }",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command."
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_alibaba/manually-creating-alibaba-ram |
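The extraction steps above can be run as one short shell sequence; the following is a minimal sketch rather than an authoritative procedure. The pull-secret path and the extracted file name ccoctl are taken from the commands in this entry, and the final --help call is only a sanity check against the alibabacloud subcommand listed in the help output shown above.

    # Locate the release image and the cloud-credential-operator image, then pull out ccoctl.
    RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
    CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' "$RELEASE_IMAGE" -a ~/.pull-secret)
    oc image extract "$CCO_IMAGE" --file="/usr/bin/ccoctl" -a ~/.pull-secret
    chmod 775 ccoctl
    # Sanity check: list the Alibaba Cloud credential subcommands.
    ./ccoctl alibabacloud --help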
Chapter 6. Email Notifications | Chapter 6. Email Notifications Email notifications are created by Satellite Server periodically or after completion of certain events. The periodic notifications can be sent daily, weekly or monthly. The events that trigger a notification are the following: Host build Content View promotion Error reported by host Repository sync Users do not receive any email notifications by default. An administrator can configure users to receive notifications based on criteria such as the type of notification, and frequency. Note If you want email notifications sent to a group's email address, instead of an individual's email address, create a user account with the group's email address and minimal Satellite permissions, then subscribe the user account to the desired notification types. Important Satellite Server does not enable outgoing emails by default, therefore you must review your email configuration. For more information, see Configuring Satellite Server for Outgoing Emails in Installing Satellite Server from a Connected Network . 6.1. Configuring Email Notifications You can configure Satellite to send email messages to individual users registered to Satellite. Satellite sends the email to the email address that has been added to the account, if present. Users can edit the email address by clicking on their name in the top-right of the Satellite web UI and selecting My account . Configure email notifications for a user from the Satellite web UI. Procedure In the Satellite web UI, navigate to Administer > Users . Click the Username of the user you want to edit. On the User tab, verify the value of the Mail field. Email notifications will be sent to the address in this field. On the Email Preferences tab, select Mail Enabled . Select the notifications you want the user to receive using the drop-down menus to the notification types. Note The Audit Summary notification can be filtered by entering the required query in the Mail Query text box. Click Submit . The user will start receiving the notification emails. 6.2. Testing Email Delivery To verify the delivery of emails, send a test email to a user. If the email gets delivered, the settings are correct. Procedure In the Satellite web UI, navigate to Administer > Users . Click on the username. On the Email Preferences tab, click Test email . A test email message is sent immediately to the user's email address. If the email is delivered, the verification is complete. Otherwise, you must perform the following diagnostic steps: Verify the user's email address. Verify Satellite Server's email configuration. Examine firewall and mail server logs. 6.3. Testing Email Notifications To verify that users are correctly subscribed to notifications, trigger the notifications manually. Procedure To trigger the notifications, execute the following command: Replace My_Frequency with one of the following: daily weekly monthly This triggers all notifications scheduled for the specified frequency for all the subscribed users. If every subscribed user receives the notifications, the verification succeeds. Note Sending manually triggered notifications to individual users is currently not supported. 6.4. Notification Types The following are the notifications created by Satellite: Audit summary : A summary of all activity audited by Satellite Server. Host built : A notification sent when a host is built. Host errata advisory : A summary of applicable and installable errata for hosts managed by the user. 
OpenSCAP policy summary : A summary of OpenSCAP policy reports and their results. Promote errata : A notification sent only after a Content View promotion. It contains a summary of errata applicable and installable to hosts registered to the promoted Content View. This allows a user to monitor what updates have been applied to which hosts. Puppet error state : A notification sent after a host reports an error related to Puppet. Puppet summary : A summary of Puppet reports. Sync errata : A notification sent only after synchronizing a repository. It contains a summary of new errata introduced by the synchronization. 6.5. Changing Email Notification Settings for a Host Satellite can send event notifications for a host to the host's registered owner. You can configure Satellite to send email notifications either to an individual user or a user group. When set to a user group, all group members who are subscribed to the email type receive a message. Receiving email notifications for a host can be useful, but also overwhelming if you are expecting to receive frequent errors, for example, because of a known issue or error you are working around. Procedure In the Satellite web UI, navigate to Hosts > All Hosts , locate the host that you want to view, and click Edit in the Actions column. Go to the Additional Information tab. If the checkbox Include this host within Satellite reporting is checked, then the email notifications are enabled on that host. Optional: Toggle the checkbox to enable or disable the email notifications. Note If you want to receive email notifications, ensure that you have an email address set in your user settings. | [
"foreman-rake reports:_My_Frequency_"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/administering_red_hat_satellite/email_notifications_admin |
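The manual trigger described in section 6.3 can be exercised for each supported frequency directly from a shell on Satellite Server; this is a small sketch, assuming root access and users who are already subscribed to the relevant notification types. The mail log path is an assumption that depends on your MTA.

    # Trigger all notifications scheduled for each frequency.
    foreman-rake reports:daily
    foreman-rake reports:weekly
    foreman-rake reports:monthly
    # If a message does not arrive, check the mail server logs as suggested in the
    # diagnostic steps above (this path is typical for Postfix or Sendmail on RHEL).
    tail -n 50 /var/log/maillog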
4.68. gcc | 4.68. gcc 4.68.1. RHBA-2011:1644 - gcc bug fix and enhancement update Updated gcc packages that fix various bugs and add three enhancements are now available for Red Hat Enterprise Linux 6. The gcc packages include C, C++, Java, Fortran, Objective C, Objective C++, and Ada 95 GNU compilers, along with related support libraries. Bug Fixes BZ# 696352 The version of GCC incorrectly assumed that processors based on the AMD's multi-core architecture code named Bulldozer support the 3DNow! instruction set. This update adapts the underlying source code to make sure that GCC no longer uses the 3DNow! instructions on these processors. BZ# 705764 On the PowerPC architecture, GCC previously passed the V2DImode vector parameters using the stack and returned them in integer registers, which does not comply with the Application Binary Interface (ABI). This update corrects this error so that GCC now passes these parameters using the AltiVec parameter registers and returns them via the AltiVec return value register. BZ# 721376 Previously, GCC did not flush all pending register saves in a Frame Description Entry (FDE) before inline assembly instructions. This may have led to various problems when the inline assembly code modified those registers. With this update, GCC has been adapted to flush pending register saves in FDE before inline assembly instructions, resolving this issue. BZ# 732802 Prior to this update, the gcov test coverage utility sometimes incorrectly counted even opening brackets, which caused it to produce inaccurate statistics. This update applies a patch that corrects this error so that gcov ignores such brackets, as expected. BZ# 732807 When processing source code that extensively used overloading (that is, with hundreds or more overloads of the same function or method), the version of the C++ front end consumed a large amount of memory. This negatively affected the overall compile time and the amount of used system resources. With this update, the C++ front end has been optimized to use less resources in this scenario. Enhancements BZ# 696145 This update adds support for new "-mfsgsbase", "-mf16c", and "-mrdrnd" command line options, as well as corresponding intrinsics to the immintrin.h header file. This allows for reading FS and GS base registers, retrieving random data from the random data generator, and converting between floating point and half-precision floating-point types. BZ# 696370 GCC now supports AMD's generation processors. These processors can now be specified on the command line via the "-march=" and "-mtune=" command line options. BZ# 696495 GCC now supports Intel's generation processor instrinsics and instructions for reading the hardware random number generator. All users of gcc are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gcc |
4.332. udisks | 4.332. udisks 4.332.1. RHBA-2011:1764 - udisks bug fix update An updated udisks package that fixes one bug is now available for Red Hat Enterprise Linux 6. The udisks daemon provides interfaces to obtain information and perform operations on storage devices. Bug Fix BZ# 738479 Prior to this update, the redundant udev watch rule interfered with the Logical Volume Manager (LVM) which could cause problems under certain workloads. This update removes this udev rule and udisks no longer interferes with LVM. All users of udisks are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/udisks |
Chapter 1. Preparing for a minor update | Chapter 1. Preparing for a minor update Keep your Red Hat OpenStack Platform (RHOSP) 17.0 environment updated with the latest packages and containers. Use the upgrade path for the following versions: Old RHOSP Version New RHOSP Version Red Hat OpenStack Platform 17.0.z Red Hat OpenStack Platform 17.0 latest Minor update workflow A minor update of your RHOSP environment involves updating the RPM packages and containers on the undercloud and overcloud host, and the service configuration, if needed. The data plane and control plane are fully available during the minor update. You must complete each of the following steps to update your RHOSP environment: Update step Description Undercloud update Director packages are updated, containers are replaced, and the undercloud is rebooted. Optional ovn-controller update All ovn-controller containers are updated in parallel on all Compute and Controller hosts. Overcloud update of Controller nodes and composable nodes that contain Pacemaker services Nodes are removed from the Pacemaker cluster. Then, the RPMs on the host, the container configuration data, and all the containers are updated. The host is re-added to the Pacemaker cluster. Overcloud update of Compute nodes Multiple nodes are updated in parallel. The default value for running nodes in parallel is 25. Overcloud update of Ceph nodes Ceph nodes are updated one node at a time. Ceph cluster update Ceph services are updated by using cephadm . The update occurs per daemon, beginning with CephMgr , CephMon , CephOSD , and then additional daemons. Note If you have a multistack infrastructure, update each overcloud stack completely, one at a time. If you have a distributed compute node (DCN) infrastructure, update the overcloud at the central location completely, and then update the overcloud at each edge site, one at a time. Additionally, an administrator can perform the following operations during a minor update: Migrate your virtual machine Create a virtual machine network Run additional cloud operations The following operations are not supported during a minor update: Replacing a Controller node Scaling in or scaling out any role Considerations before you update your RHOSP environment To help guide you during the update process, consider the following information: Red Hat recommends backing up the undercloud and overcloud control planes. For more information about backing up nodes, see Backing up and restoring the undercloud and control plane nodes . Familiarize yourself with the known issues that might block an update. Familiarize yourself with the possible update and upgrade paths before you begin your update. For more information, see Section 1.1, "Upgrade paths for long life releases" . To identify your current maintenance release, run USD cat /etc/rhosp-release . You can also run this command after updating your environment to validate the update. Known issues that might block an update There are currently no known issues. Procedure To prepare your RHOSP environment for the minor update, complete the following procedures: Section 1.2, "Locking the environment to a Red Hat Enterprise Linux release" Section 1.3, "Checking Red Hat Openstack Platform repositories" Section 1.4, "Updating the container image preparation file" Section 1.5, "Updating the SSL/TLS configuration" Section 1.6, "Disabling fencing in the overcloud" 1.1. Upgrade paths for long life releases Familiarize yourself with the possible update and upgrade paths before you begin an update. 
Note You can view your current RHOSP and RHEL versions in the /etc/rhosp-release and /etc/redhat-release files. Table 1.1. Updates version path Current version Target version RHOSP 10.0.x on RHEL 7.x RHOSP 10.0 latest on RHEL 7.7 latest RHOSP 13.0.x on RHEL 7.x RHOSP 13.0 latest on RHEL 7.9 latest RHOSP 16.1.x on RHEL 8.2 RHOSP 16.1 latest on RHEL 8.2 latest RHOSP 16.1.x on RHEL 8.2 RHOSP 16.2 latest on RHEL 8.4 latest RHOSP 16.2.x on RHEL 8.4 RHOSP 16.2 latest on RHEL 8.4 latest RHOSP 17.0.x on RHEL 9.0 RHOSP 17.0 latest on RHEL 9.0 latest Table 1.2. Upgrades version path Current version Target version RHOSP 10 on RHEL 7.7 RHOSP 13 latest on RHEL 7.9 latest RHOSP 13 on RHEL 7.9 RHOSP 16.1 latest on RHEL 8.2 latest RHOSP 13 on RHEL 7.9 RHOSP 16.2 latest on RHEL 8.4 latest RHOSP 16 on RHEL 8.4 RHOSP 17.1 latest on RHEL 9.0 latest Note In RHOSP 17.0, upgrades from versions are not supported. Upgrades will be supported in RHOSP 17.1. For more information about upgrading to versions earlier than 17.0, see the following guides: * Framework for Upgrades (13 to 16.2) * Framework for Upgrades (13 to 16.1) * Fast Forward Upgrades 13 1.2. Locking the environment to a Red Hat Enterprise Linux release Red Hat OpenStack Platform (RHOSP) 17.0 is supported on Red Hat Enterprise Linux (RHEL) 9.0. Before you perform the update, lock the undercloud and overcloud repositories to the RHEL 9.0 release to avoid upgrading the operating system to a newer minor release. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Edit your overcloud subscription management environment file, which is the file that contains the RhsmVars parameter. The default name for this file is usually rhsm.yml . Check if your subscription management configuration includes the rhsm_release parameter. If the rhsm_release parameter is not present, add it and set it to 9.0: Save the overcloud subscription management environment file. Create a playbook that contains a task to lock the operating system version to RHEL 9.0 on all nodes: Run the set_release.yaml playbook: USD ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/set_release.yaml --limit undercloud,Controller,Compute Replace <stack> with the name of your stack. Use the --limit option to apply the content to all RHOSP nodes. Do not run this playbook against Ceph Storage nodes because you might have a different subscription for these nodes. Note To manually lock a node to a version, log in to the node and run the subscription-manager release command: 1.3. Checking Red Hat Openstack Platform repositories Ensure that your repositories are using Red Hat OpenStack Platform (RHOSP) 17.0 packages. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Edit your overcloud subscription management environment file, which is the file that contains the RhsmVars parameter. The default name for this file is usually rhsm.yml . Check the rhsm_repos parameter in your subscription management configuration to ensure that the rhsm_repos parameter is using RHOSP 17.0 repositories: parameter_defaults: RhsmVars: rhsm_repos: - rhel-9-for-x86_64-baseos-eus-rpms - rhel-9-for-x86_64-appstream-eus-rpms - rhel-9-for-x86_64-highavailability-eus-rpms - openstack-17.0-for-rhel-9-x86_64-rpms - fast-datapath-for-rhel-9-x86_64-rpms Save the overcloud subscription management environment file. 1.4. 
Updating the container image preparation file The container preparation file is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the undercloud and overcloud. Before you update your environment, check the file to ensure that you obtain the correct image versions. Procedure Edit the container preparation file. The default name for this file is usually containers-prepare-parameter.yaml . Ensure that the tag parameter is set to 17.0 for each rule set: parameter_defaults: ContainerImagePrepare: - push_destination: true set: ... tag: '17.0' tag_from_label: '{version}-{release}' Note If you do not want to use a specific tag for the update, such as 17.0 or 17.0.1 , remove the tag key-value pair and specify tag_from_label only. This uses the installed Red Hat OpenStack Platform version to determine the value for the tag to use as part of the update process. Save this file. 1.5. Updating the SSL/TLS configuration Remove the NodeTLSData resource from the resource_registry to update your SSL/TLS configuration. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Edit your custom overcloud SSL/TLS public endpoint file, which is usually named ~/templates/enable-tls.yaml . Remove the NodeTLSData resource from the resource_registry : resource_registry: OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml ... The overcloud deployment uses a new service in HAProxy to determine if SSL/TLS is enabled. Note If this is the only resource in the resource_registry section of the enable-tls.yaml file, remove the complete resource_registry section. Save the SSL/TLS public endpoint file. 1.6. Disabling fencing in the overcloud Before you update the overcloud, ensure that fencing is disabled. If fencing is deployed in your environment during the Controller nodes update process, the overcloud might detect certain nodes as disabled and attempt fencing operations, which can cause unintended results. If you have enabled fencing in the overcloud, you must temporarily disable fencing for the duration of the update. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Log in to a Controller node and run the Pacemaker command to disable fencing: USD ssh tripleo-admin@<controller_ip> "sudo pcs property set stonith-enabled=false" Replace <controller_ip> with the IP address of a Controller node. You can find the IP addresses of your Controller nodes with the metalsmith list command. In the fencing.yaml environment file, set the EnableFencing parameter to false to ensure that fencing stays disabled during the update process. Additional Resources Fencing Controller nodes with STONITH | [
"source ~/stackrc",
"parameter_defaults: RhsmVars: ... rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"1a85f9223e3d5e43013e3d6e8ff506fd\" rhsm_method: \"portal\" rhsm_release: \"9.0\"",
"cat > ~/set_release.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: set release to 9.0 environment: SMDEV_CONTAINER_OFF: True command: subscription-manager release --set=9.0 become: true EOF",
"ansible-playbook -i ~/overcloud-deploy/<stack>/tripleo-ansible-inventory.yaml -f 25 ~/set_release.yaml --limit undercloud,Controller,Compute",
"sudo SMDEV_CONTAINER_OFF=True subscription-manager release --set=9.0",
"source ~/stackrc",
"parameter_defaults: RhsmVars: rhsm_repos: - rhel-9-for-x86_64-baseos-eus-rpms - rhel-9-for-x86_64-appstream-eus-rpms - rhel-9-for-x86_64-highavailability-eus-rpms - openstack-17.0-for-rhel-9-x86_64-rpms - fast-datapath-for-rhel-9-x86_64-rpms",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: tag: '17.0' tag_from_label: '{version}-{release}'",
"source ~/stackrc",
"resource_registry: OS::TripleO::NodeTLSData: /usr/share/openstack-tripleo-heat-templates/puppet/extraconfig/tls/tls-cert-inject.yaml",
"source ~/stackrc",
"ssh tripleo-admin@<controller_ip> \"sudo pcs property set stonith-enabled=false\""
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/keeping_red_hat_openstack_platform_updated/assembly_preparing-for-a-minor-update_keeping-updated |
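The preparation steps in this chapter can be spot-checked from the undercloud before moving on; a minimal sketch, assuming the stack user and the tripleo-admin SSH access used above, with <controller_ip> as a placeholder. The final command is simply the counterpart of the fencing disable command and is intended for after the update completes.

    # Confirm the current maintenance release and the RHEL release lock.
    cat /etc/rhosp-release
    sudo SMDEV_CONTAINER_OFF=True subscription-manager release --show
    # After the update, re-enable fencing on a Controller node.
    ssh tripleo-admin@<controller_ip> "sudo pcs property set stonith-enabled=true"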
Preface | Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). The Red Hat build of OpenJDK is available in three versions: 8u, 11u, and 17u. Packages for the Red Hat build of OpenJDK are made available on Red Hat Enterprise Linux and Microsoft Windows and shipped as a JDK and JRE in the Red Hat Ecosystem Catalog. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.4/pr01 |
Chapter 4. Adding physical machines as bare-metal nodes | Chapter 4. Adding physical machines as bare-metal nodes Use one of the following methods to enroll a bare-metal node: Prepare an inventory file with the node details, import the file into the Bare Metal Provisioning service, and make the nodes available. Register a physical machine as a bare-metal node, and then manually add its hardware details and create ports for each of its Ethernet MAC addresses. 4.1. Prerequisites The RHOSO environment includes the Bare Metal Provisioning service. For more information, see Enabling the Bare Metal Provisioning service (ironic) . You are logged on to a workstation that has access to the RHOCP cluster, as a user with cluster-admin privileges. The oc command line tool is installed on the workstation. 4.2. Enrolling bare-metal nodes with an inventory file You can create an inventory file that defines the details of each bare-metal node. You import the file into the Bare Metal Provisioning service (ironic) to enroll the bare-metal nodes, and then make each node available. Note Some drivers might require specific configuration. For more information, see Bare metal drivers . Procedure Create an inventory file to define the details of each node, for example, ironic-nodes.yaml . For each node, define the node name and the address and credentials for the bare-metal driver. For details on the available properties for your enabled driver, see Bare metal drivers . Replace <node> with the name of the node. Replace <driver> with a supported bare-metal driver, for example, redfish . Replace <ip> with the IP address of the Bare Metal controller. Replace <user> with your username. Replace <password> with your password. Optional: Replace <property> with a driver property that you want to configure, and replace <value> with the value of the property. For information on the available properties, see Bare metal drivers . Define the node properties and ports: Replace <cpu_count> with the number of CPUs. Replace <cpu_arch> with the type of architecture of the CPUs. Replace <memory> with the amount of memory in MiB. Replace <root_disk> with the size of the root disk in GiB. Only required when the machine has multiple disks. Replace <serial> with the serial number of the disk that you want to use for deployment. Optional: Include the network_interface property if you want to override the default network type of flat . You can change the network type to one of the following valid values: neutron : Use to provide tenant-defined networking through the Networking service, where tenant networks are separated from each other and from the provisioning and cleaning provider networks. Required to create a provisioning network with IPv6. noop : Use for standalone deployments where network switching is not required. Replace <mac_address> with the MAC address of the NIC used to PXE boot. Access the remote shell for the OpenStackClient pod from your workstation: Import the inventory file into the Bare Metal Provisioning service: The nodes are now in the enroll state. Wait for the extra network interface port configuration data to populate the Networking service (neutron). This process takes at least 60 seconds. Set the provisioning state of each node to available : The Bare Metal Provisioning service cleans the node if you enabled node cleaning. Check that the nodes are enrolled: There might be a delay between enrolling a node and its state being shown. Exit the openstackclient pod: 4.3. 
Enrolling a bare-metal node manually Register a physical machine as a bare-metal node, then manually add its hardware details and create ports for each of its Ethernet MAC addresses. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Add a new node: Replace <driver_name> with the name of the driver, for example, redfish . Replace <node_name> with the name of your new bare-metal node. Note the UUID assigned to the node when it is created. Update the node properties to match the hardware specifications on the node: Replace <node> with the ID of the bare metal node. Replace <cpu> with the number of CPUs. Replace <ram> with the RAM in MB. Replace <disk> with the disk size in GB. Replace <arch> with the architecture type. Optional: Set the network_interface property to override the default network type of flat : Replace <network_interface> with one of the following valid network types: neutron : Use to provide tenant-defined networking through the Networking service, where tenant networks are separated from each other and from the provisioning and cleaning provider networks. Required to create a provisioning network with IPv6. noop : Use for standalone deployments where network switching is not required. Optional: If you have multiple disks, set the root device hints to inform the deploy ramdisk which disk to use for deployment: Replace <node> with the ID of the bare metal node. Replace <property> and <value> with details about the disk that you want to use for deployment, for example root_device='{"size": "128"}' RHOSP supports the following properties: model (String): Device identifier. vendor (String): Device vendor. serial (String): Disk serial number. hctl (String): Host:Channel:Target:Lun for SCSI. size (Integer): Size of the device in GB. wwn (String): Unique storage identifier. wwn_with_extension (String): Unique storage identifier with the vendor extension appended. wwn_vendor_extension (String): Unique vendor storage identifier. rotational (Boolean): True for a rotational device (HDD), otherwise false (SSD). name (String): The name of the device, for example: /dev/sdb1 Use this property only for devices with persistent names. Note If you specify more than one property, the device must match all of those properties. Inform the Bare Metal Provisioning service of the node network card by creating a port with the MAC address of the NIC on the provisioning network: Replace <node> with the unique ID of the bare metal node. Replace <mac_address> with the MAC address of the NIC used to PXE boot. Validate the configuration of the node: The validation output Result indicates the following: False : The interface has failed validation. If the reason provided includes missing the instance_info parameters [\'ramdisk', \'kernel', and \'image_source'] , this might be because the Compute service populates those missing parameters at the beginning of the deployment process, therefore they have not been set at this point. If you are using a whole disk image, then you might need to only set image_source to pass the validation. True : The interface has passed validation. None : The interface is not supported for your driver. Exit the openstackclient pod: 4.4. Deploying a bare-metal node with Redfish virtual media boot You can use Redfish virtual media boot to supply a boot image to the Baseboard Management Controller (BMC) of a node so that the BMC can insert the image into one of the virtual drives. 
The node can then boot from the virtual drive into the operating system that exists in the image. Redfish hardware types support booting deploy, rescue, and user images over virtual media. The Bare Metal Provisioning service (ironic) uses kernel and ramdisk images associated with a node to build bootable ISO images for UEFI or BIOS boot modes at the moment of node deployment. The major advantage of virtual media boot is that you can eliminate the TFTP image transfer phase of PXE and use HTTP GET, or other methods, instead. To launch bare-metal instances with the redfish hardware type over virtual media, set the boot interface of each bare-metal node to redfish-virtual-media and, for UEFI nodes, define the EFI System Partition (ESP) image. Then configure an enrolled node to use Redfish virtual media boot. Prerequisites The bare-metal node is registered and enrolled. The IPA and instance images are available in the Image Service (glance). For UEFI nodes, an EFI system partition image (ESP) is available in the Image Service (glance). Procedure Access the remote shell for the OpenStackClient pod from your workstation: Set the Bare Metal service boot interface to redfish-virtual-media : Replace <node_name> with the name of the node. For UEFI nodes, define the EFI System Partition (ESP) image: Replace <esp_image> with the image UUID or URL for the ESP image. Replace <node> with the name of the node. Note For BIOS nodes, do not complete this step. Create a port on the bare-metal node and associate the port with the MAC address of the NIC on the bare metal node: Replace <node_uuid> with the UUID of the bare-metal node. Replace <mac_address> with the MAC address of the NIC on the bare-metal node. Exit the openstackclient pod: 4.5. Creating flavors for launching bare-metal instances You must create flavors that your cloud users can use to request bare-metal instances. You can specify which bare-metal nodes should be used for bare-metal instances launched with a particular flavor by using a resource class. You can tag bare-metal nodes with resource classes that identify the hardware resources on the node, for example, GPUs. The cloud user can select a flavor with the GPU resource class to create an instance for a vGPU workload. The Compute scheduler uses the resource class to identify suitable host bare-metal nodes for instances. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Retrieve a list of your nodes to identify their UUIDs: Tag each bare-metal node with a custom bare-metal resource class: Replace <CUSTOM> with a string that identifies the purpose of the resource class. For example, set to GPU to create a custom GPU resource class that you can use to tag bare metal nodes that you want to designate for GPU workloads. Replace <node> with the ID of the bare metal node. Create a flavor for bare-metal instances: Replace <ram_size_mb> with the RAM of the bare metal node, in MB. Replace <disk_size_gb> with the size of the disk on the bare metal node, in GB. Replace <no_vcpus> with the number of CPUs on the bare metal node. Note These properties are not used for scheduling instances. However, the Compute scheduler does use the disk size to determine the root partition size. 
Associate the flavor for bare-metal instances with the custom resource class: To determine the name of a custom resource class that corresponds to a resource class of a bare-metal node, convert the resource class to uppercase, replace each punctuation mark with an underscore, and prefix with CUSTOM_ . Note A flavor can request only one instance of a bare-metal resource class. Set the following flavor properties to prevent the Compute scheduler from using the bare-metal flavor properties to schedule instances: Verify that the new flavor has the correct values: Exit the openstackclient pod: 4.6. Bare-metal node provisioning states A bare-metal node transitions through several provisioning states during its lifetime. API requests and conductor events performed on the node initiate the transitions. There are two categories of provisioning states: "stable" and "in transition". Use the following table to understand the node provisioning states and the actions you can perform to transition a node from one state to another. Table 4.1. Provisioning states State Category Description enroll Stable The initial state of each node. For information on enrolling a node, see Adding physical machines as bare metal nodes . verifying In transition The Bare Metal Provisioning service validates that it can manage the node by using the driver_info configuration provided during the node enrollment. manageable Stable The node is transitioned to the manageable state when the Bare Metal Provisioning service has verified that it can manage the node. You can transition the node from the manageable state to one of the following states by using the following commands: openstack baremetal node adopt adopting active openstack baremetal node provide cleaning available openstack baremetal node clean cleaning available openstack baremetal node inspect inspecting manageable You must move a node to the manageable state after it is transitioned to one of the following failed states: adopt failed clean failed inspect failed Move a node into the manageable state when you need to update the node. inspecting In transition The Bare Metal Provisioning service uses node introspection to update the hardware-derived node properties to reflect the current state of the hardware. The node transitions to manageable for synchronous inspection, and inspect wait for asynchronous inspection. The node transitions to inspect failed if an error occurs. inspect wait In transition The provision state that indicates that an asynchronous inspection is in progress. If the node inspection is successful, the node transitions to the manageable state. inspect failed Stable The provisioning state that indicates that the node inspection failed. You can transition the node from the inspect failed state to one of the following states by using the following commands: openstack baremetal node inspect inspecting manageable openstack baremetal node manage manageable cleaning In transition Nodes in the cleaning state are being scrubbed and reprogrammed into a known configuration. When a node is in the cleaning state, depending on the network management, the conductor performs the following tasks: Out-of-band: The conductor performs the clean step. In-band: The conductor prepares the environment to boot the ramdisk for running the in-band clean steps. The preparation tasks include building the PXE configuration files, and configuring the DHCP. clean wait In transition Nodes in the clean wait state are being scrubbed and reprogrammed into a known configuration. 
This state is similar to the cleaning state except that in the clean wait state, the conductor is waiting for the ramdisk to boot or the clean step to finish. You can interrupt the cleaning process of a node in the clean wait state by running openstack baremetal node abort . available Stable After nodes have been successfully preconfigured and cleaned, they are moved into the available state and are ready to be provisioned. You can transition the node from the available state to one of the following states by using the following commands: openstack baremetal node deploy deploying active openstack baremetal node manage manageable deploying In transition Nodes in the deploying state are being prepared for a workload, which involves performing the following tasks: Setting appropriate BIOS options for the node deployment. Partitioning drives and creating file systems. Creating any additional resources that might be required by additional subsystems, such as the node-specific network configuration, and a configuraton drive partition. wait call-back In transition Nodes in the wait call-back state are being prepared for a workload. This state is similar to the deploying state except that in the wait call-back state, the conductor is waiting for a task to complete before preparing the node. For example, the following tasks must be completed before the conductor can prepare the node: The ramdisk has booted. The bootloader is installed. The image is written to the disk. You can interrupt the deployment of a node in the wait call-back state by running openstack baremetal node delete or openstack baremetal node undeploy . deploy failed Stable The provisioning state that indicates that the node deployment failed. You can transition the node from the deploy failed state to one of the following states by using the following commands: openstack baremetal node deploy deploying active openstack baremetal node rebuild deploying active openstack baremetal node delete deleting cleaning clean wait cleaning available openstack baremetal node undeploy deleting cleaning clean wait cleaning available active Stable Nodes in the active state have a workload running on them. The Bare Metal Provisioning service might regularly collect out-of-band sensor information, including the power state. You can transition the node from the active state to one of the following states by using the following commands: openstack baremetal node delete deleting available openstack baremetal node undeploy cleaning available openstack baremetal node rebuild deploying active openstack baremetal node rescue rescuing rescue deleting In transition When a node is in the deleting state, the Bare Metal Provisioning service disassembles the active workload and removes any configuration and resources it added to the node during the node deployment or rescue. Nodes transition quickly from the deleting state to the cleaning state, and then to the clean wait state. error Stable If a node deletion is unsuccessful, the node is moved into the error state. You can transition the node from the error state to one of the following states by using the following commands: openstack baremetal node delete deleting available openstack baremetal node undeploy cleaning available adopting In transition You can use the openstack baremetal node adopt command to transition a node with an existing workload directly from manageable to active state without first cleaning and deploying the node. 
When a node is in the adopting state the Bare Metal Provisioning service has taken over management of the node with its existing workload. rescuing In transition Nodes in the rescuing state are being prepared to perform the following rescue operations: Setting appropriate BIOS options for the node deployment. Creating any additional resources that might be required by additional subsystems, such as node-specific network configurations. rescue wait In transition Nodes in the rescue wait state are being rescued. This state is similar to the rescuing state except that in the rescue wait state, the conductor is waiting for the ramdisk to boot, or to execute parts of the rescue which need to run in-band on the node, such as setting the password for user named rescue. You can interrupt the rescue operation of a node in the rescue wait state by running openstack baremetal node abort . rescue failed Stable The provisioning state that indicates that the node rescue failed. You can transition the node from the rescue failed state to one of the following states by using the following commands: openstack baremetal node rescue rescuing rescue openstack baremetal node unrescue unrescuing active openstack baremetal node delete deleting available rescue Stable Nodes in the rescue state are running a rescue ramdisk. The Bare Metal Provisioning service might regularly collect out-of-band sensor information, including the power state. You can transition the node from the rescue state to one of the following states by using the following commands: openstack baremetal node unrescue unrescuing active openstack baremetal node delete deleting available unrescuing In transition Nodes in the unrescuing state are being prepared to transition from the rescue state to the active state. unrescue failed Stable The provisioning state that indicates that the node unrescue operation failed. You can transition the node from the unrescue failed state to one of the following states by using the following commands: openstack baremetal node rescue rescuing rescue openstack baremetal node unrescue unrescuing active openstack baremetal node delete deleting available | [
"nodes: - name: <node> driver: <driver> driver_info: <driver>_address: <ip> <driver>_username: <user> <driver>_password: <password> [<property>: <value>]",
"nodes: - name: <node> properties: cpus: <cpu_count> cpu_arch: <cpu_arch> memory_mb: <memory> local_gb: <root_disk> root_device: serial: <serial> network_interface: <interface_type> ports: - address: <mac_address>",
"oc rsh -n openstack openstackclient",
"openstack baremetal create ironic-nodes.yaml",
"openstack baremetal node manage <node> openstack baremetal node provide <node>",
"openstack baremetal node list",
"exit",
"oc rsh -n openstack openstackclient",
"openstack baremetal node create --driver <driver_name> --name <node_name>",
"openstack baremetal node set <node> --property cpus=<cpu> --property memory_mb=<ram> --property local_gb=<disk> --property cpu_arch=<arch>",
"openstack baremetal node set <node> --network-interace <network_interface>",
"openstack baremetal node set <node> --property root_device='{\"<property>\": \"<value>\"}'",
"openstack baremetal port create --node <node_uuid> <mac_address>",
"openstack baremetal node validate <node> +------------+--------+---------------------------------------------+ | Interface | Result | Reason | +------------+--------+---------------------------------------------+ | boot | False | Cannot validate image information for node | | | | a02178db-1550-4244-a2b7-d7035c743a9b | | | | because one or more parameters are missing | | | | from its instance_info. Missing are: | | | | ['ramdisk', 'kernel', 'image_source'] | | console | None | not supported | | deploy | False | Cannot validate image information for node | | | | a02178db-1550-4244-a2b7-d7035c743a9b | | | | because one or more parameters are missing | | | | from its instance_info. Missing are: | | | | ['ramdisk', 'kernel', 'image_source'] | | inspect | None | not supported | | management | True | | | network | True | | | power | True | | | raid | True | | | storage | True | | +------------+--------+---------------------------------------------+",
"exit",
"oc rsh -n openstack openstackclient",
"openstack baremetal node set --boot-interface redfish-virtual-media <node_name>",
"openstack baremetal node set --driver-info bootloader=<esp_image> <node>",
"openstack baremetal port create --pxe-enabled True --node <node_uuid> <mac_address>",
"exit",
"oc rsh -n openstack openstackclient",
"openstack baremetal node list",
"openstack baremetal node set --resource-class baremetal.<CUSTOM> <node>",
"openstack flavor create --id auto --ram <ram_size_mb> --disk <disk_size_gb> --vcpus <no_vcpus> baremetal",
"openstack flavor set --property resources:CUSTOM_BAREMETAL_<CUSTOM>=1 baremetal",
"openstack flavor set --property resources:VCPU=0 --property resources:MEMORY_MB=0 --property resources:DISK_GB=0 baremetal",
"openstack flavor list",
"exit"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_bare_metal_provisioning_service/assembly_adding-physical-machines-as-bare-metal-nodes |
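Once a flavor is associated with a bare-metal resource class, a cloud user can request a node through the Compute service; this is a hypothetical end-to-end check. <image>, <network>, and <keypair> are placeholders rather than values from this document, and the instance name is arbitrary.

    # From the OpenStackClient pod, boot an instance on the baremetal flavor.
    oc rsh -n openstack openstackclient
    openstack server create --flavor baremetal --image <image> --network <network> --key-name <keypair> test-baremetal
    # Watch the node move through the provisioning states described above
    # (available -> deploying -> wait call-back -> active).
    openstack baremetal node list
    openstack server show test-baremetal
    exit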
Chapter 1. Support policy for Eclipse Temurin | Chapter 1. Support policy for Eclipse Temurin Red Hat will support select major versions of Eclipse Temurin in its products. For consistency, these versions remain similar to Oracle JDK versions that Oracle designates as long-term support (LTS). A major version of Eclipse Temurin will be supported for a minimum of six years from the time that version is first introduced. For more information, see the Eclipse Temurin Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, Eclipse Temurin does not support RHEL 6 as a supported configuration. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.10/rn-openjdk-temurin-support-policy |
2.3.4. Inherently Insecure Services | 2.3.4. Inherently Insecure Services Even the most vigilant organization can fall victim to vulnerabilities if the network services they choose are inherently insecure. For instance, there are many services developed under the assumption that they are used over trusted networks; however, this assumption fails as soon as the service becomes available over the Internet - which is itself inherently untrusted. One category of insecure network services is those that require unencrypted usernames and passwords for authentication. Telnet and FTP are two such services. If packet sniffing software is monitoring traffic between the remote user and such a service, usernames and passwords can be easily intercepted. Inherently, such services can also more easily fall prey to what the security industry terms the man-in-the-middle attack. In this type of attack, a cracker redirects network traffic by tricking a cracked name server on the network to point to his machine instead of the intended server. Once someone opens a remote session to the server, the attacker's machine acts as an invisible conduit, sitting quietly between the remote service and the unsuspecting user, capturing information. In this way a cracker can gather administrative passwords and raw data without the server or the user realizing it. Another category of insecure services includes network file systems and information services such as NFS or NIS, which are developed explicitly for LAN usage but are, unfortunately, extended to include WANs (for remote users). NFS does not, by default, have any authentication or security mechanisms configured to prevent a cracker from mounting the NFS share and accessing anything contained therein. NIS, as well, has vital information that must be known by every computer on a network, including passwords and file permissions, within a plain text ASCII or DBM (ASCII-derived) database. A cracker who gains access to this database can then access every user account on a network, including the administrator's account. By default, Red Hat Enterprise Linux is released with all such services turned off. However, since administrators often find themselves forced to use these services, careful configuration is critical. Refer to Chapter 5, Server Security for more information about setting up services in a safe manner. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-risk-serv-insecure
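On a running system, the services discussed here can be audited and switched off with the standard init tools; the service names below (telnet, vsftpd, nfs, ypserv) are assumptions for illustration and vary between installations.

    # See which potentially insecure services are enabled at boot.
    chkconfig --list | grep -E 'telnet|ftp|nfs|ypserv'
    # Disable an xinetd-managed telnet service and check whether NFS is running.
    chkconfig telnet off
    service nfs status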
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_multiple_openshift_data_foundation_storage_clusters/providing-feedback-on-red-hat-documentation_rhodf |
multicluster engine operator with Red Hat Advanced Cluster Management | multicluster engine operator with Red Hat Advanced Cluster Management Red Hat Advanced Cluster Management for Kubernetes 2.12 multicluster engine operator with Red Hat Advanced Cluster Management integration | [
"curl -L https://raw.githubusercontent.com/open-cluster-management-io/clusteradm/main/install.sh | bash",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: addonDeploymentConfig metadata: name: addon-ns-config namespace: multicluster-engine spec: agentInstallNamespace: open-cluster-management-agent-addon-discovery",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ClusterManagementAddOn metadata: name: work-manager spec: addonMeta: displayName: work-manager installStrategy: placements: - name: global namespace: open-cluster-management-global-set rolloutStrategy: type: All type: Placements",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ClusterManagementAddOn metadata: name: work-manager spec: addonMeta: displayName: work-manager installStrategy: placements: - name: global namespace: open-cluster-management-global-set rolloutStrategy: type: All configs: - group: addon.open-cluster-management.io name: addon-ns-config namespace: multicluster-engine resource: addondeploymentconfigs type: Placements",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ClusterManagementAddOn metadata: name: managed-serviceaccount spec: addonMeta: displayName: managed-serviceaccount installStrategy: placements: - name: global namespace: open-cluster-management-global-set rolloutStrategy: type: All configs: - group: addon.open-cluster-management.io name: addon-ns-config namespace: multicluster-engine resource: addondeploymentconfigs type: Placements",
"apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ClusterManagementAddOn metadata: name: cluster-proxy spec: addonMeta: displayName: cluster-proxy installStrategy: placements: - name: global namespace: open-cluster-management-global-set rolloutStrategy: type: All configs: - group: addon.open-cluster-management.io name: addon-ns-config namespace: multicluster-engine resource: addondeploymentconfigs type: Placements",
"get deployment -n open-cluster-management-agent-addon-discovery",
"NAME READY UP-TO-DATE AVAILABLE AGE cluster-proxy-proxy-agent 1/1 1 1 24h klusterlet-addon-workmgr 1/1 1 1 24h managed-serviceaccount-addon-agent 1/1 1 1 24h",
"kind: KlusterletConfig apiVersion: config.open-cluster-management.io/v1alpha1 metadata: name: mce-import-klusterlet-config spec: installMode: type: noOperator noOperator: postfix: mce-import",
"label addondeploymentconfig addon-ns-config -n multicluster-engine cluster.open-cluster-management.io/backup=true",
"label addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine cluster.open-cluster-management.io/backup=true",
"label clustermanagementaddon work-manager cluster.open-cluster-management.io/backup=true",
"label clustermanagementaddon cluster-proxy cluster.open-cluster-management.io/backup=true",
"label clustermanagementaddon managed-serviceaccount cluster.open-cluster-management.io/backup=true",
"label KlusterletConfig mce-import-klusterlet-config cluster.open-cluster-management.io/backup=true",
"apiVersion: cluster.open-cluster-management.io/v1 kind: ManagedCluster metadata: annotations: agent.open-cluster-management.io/klusterlet-config: mce-import-klusterlet-config 1 labels: cloud: auto-detect vendor: auto-detect name: mce-a 2 spec: hubAcceptsClient: true leaseDurationSeconds: 60",
"get managedcluster",
"NAME HUB ACCEPTED MANAGED CLUSTER URLS JOINED AVAILABLE AGE local-cluster true https://<api.acm-hub.com:port> True True 44h mce-a true https://<api.mce-a.com:port> True True 27s",
"patch addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine --type=merge -p '{\"spec\":{\"agentInstallNamespace\":\"open-cluster-management-agent-addon-discovery\"}}'",
"patch addondeploymentconfig hypershift-addon-deploy-config -n multicluster-engine --type=merge -p '{\"spec\":{\"customizedVariables\":[{\"name\":\"disableMetrics\",\"value\": \"true\"},{\"name\":\"disableHOManagement\",\"value\": \"true\"}]}}'",
"clusteradm addon enable --names hypershift-addon --clusters <managed-cluster-names>",
"get managedcluster",
"get deployment -n open-cluster-management-agent-addon-discovery",
"NAME READY UP-TO-DATE AVAILABLE AGE cluster-proxy-proxy-agent 1/1 1 1 24h klusterlet-addon-workmgr 1/1 1 1 24h hypershift-addon-agent 1/1 1 1 24h managed-serviceaccount-addon-agent 1/1 1 1 24h",
"apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveredCluster metadata: creationTimestamp: \"2024-05-30T23:05:39Z\" generation: 1 labels: hypershift.open-cluster-management.io/hc-name: hosted-cluster-1 hypershift.open-cluster-management.io/hc-namespace: clusters name: hosted-cluster-1 namespace: mce-1 resourceVersion: \"1740725\" uid: b4c36dca-a0c4-49f9-9673-f561e601d837 spec: apiUrl: https://a43e6fe6dcef244f8b72c30426fb6ae3-ea3fec7b113c88da.elb.us-west-1.amazonaws.com:6443 cloudProvider: aws creationTimestamp: \"2024-05-30T23:02:45Z\" credential: {} displayName: mce-1-hosted-cluster-1 importAsManagedCluster: false isManagedCluster: false name: hosted-cluster-1 openshiftVersion: 0.0.0 status: Active type: MultiClusterEngineHCP",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-mce-hcp-autoimport namespace: open-cluster-management-global-set annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/description: Discovered clusters that are of type MultiClusterEngineHCP can be automatically imported into ACM as managed clusters. This policy configure those discovered clusters so they are automatically imported. Fine tuning MultiClusterEngineHCP clusters to be automatically imported can be done by configure filters at the configMap or add annotation to the discoverd cluster. spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: mce-hcp-autoimport-config spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: discovery-config namespace: open-cluster-management-global-set data: rosa-filter: \"\" remediationAction: enforce 1 severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-mce-hcp-autoimport spec: remediationAction: enforce severity: low object-templates-raw: | {{- /* find the MultiClusterEngineHCP DiscoveredClusters */ -}} {{- range USDdc := (lookup \"discovery.open-cluster-management.io/v1\" \"DiscoveredCluster\" \"\" \"\").items }} {{- /* Check for the flag that indicates the import should be skipped */ -}} {{- USDskip := \"false\" -}} {{- range USDkey, USDvalue := USDdc.metadata.annotations }} {{- if and (eq USDkey \"discovery.open-cluster-management.io/previously-auto-imported\") (eq USDvalue \"true\") }} {{- USDskip = \"true\" }} {{- end }} {{- end }} {{- /* if the type is MultiClusterEngineHCP and the status is Active */ -}} {{- if and (eq USDdc.spec.status \"Active\") (contains (fromConfigMap \"open-cluster-management-global-set\" \"discovery-config\" \"mce-hcp-filter\") USDdc.spec.displayName) (eq USDdc.spec.type \"MultiClusterEngineHCP\") (eq USDskip \"false\") }} - complianceType: musthave objectDefinition: apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveredCluster metadata: name: {{ USDdc.metadata.name }} namespace: {{ USDdc.metadata.namespace }} spec: importAsManagedCluster: true 2 {{- end }} {{- end }}",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: policy-mce-hcp-autoimport-placement namespace: open-cluster-management-global-set spec: tolerations: - key: cluster.open-cluster-management.io/unreachable operator: Exists - key: cluster.open-cluster-management.io/unavailable operator: Exists clusterSets: - global predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: local-cluster operator: In values: - \"true\"",
"apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: policy-mce-hcp-autoimport-placement-binding namespace: open-cluster-management-global-set placementRef: name: policy-mce-hcp-autoimport-placement apiGroup: cluster.open-cluster-management.io kind: Placement subjects: - name: policy-mce-hcp-autoimport apiGroup: policy.open-cluster-management.io kind: Policy",
"get policies.policy.open-cluster-management.io policy-mce-hcp-autoimport -n <namespace>",
"annotations: discovery.open-cluster-management.io/previously-auto-imported: \"true\"",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-rosa-autoimport annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/description: OpenShift Service on AWS discovered clusters can be automatically imported into Red Hat Advanced Cluster Management as managed clusters with this policy. You can select and configure those managed clusters so you can import. Configure filters or add an annotation if you do not want all of your OpenShift Service on AWS clusters to be automatically imported. spec: remediationAction: inform 1 disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: rosa-autoimport-config spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: discovery-config namespace: open-cluster-management-global-set data: rosa-filter: \"\" 2 remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-rosa-autoimport spec: remediationAction: enforce severity: low object-templates-raw: | {{- /* find the ROSA DiscoveredClusters */ -}} {{- range USDdc := (lookup \"discovery.open-cluster-management.io/v1\" \"DiscoveredCluster\" \"\" \"\").items }} {{- /* Check for the flag that indicates the import should be skipped */ -}} {{- USDskip := \"false\" -}} {{- range USDkey, USDvalue := USDdc.metadata.annotations }} {{- if and (eq USDkey \"discovery.open-cluster-management.io/previously-auto-imported\") (eq USDvalue \"true\") }} {{- USDskip = \"true\" }} {{- end }} {{- end }} {{- /* if the type is ROSA and the status is Active */ -}} {{- if and (eq USDdc.spec.status \"Active\") (contains (fromConfigMap \"open-cluster-management-global-set\" \"discovery-config\" \"rosa-filter\") USDdc.spec.displayName) (eq USDdc.spec.type \"ROSA\") (eq USDskip \"false\") }} - complianceType: musthave objectDefinition: apiVersion: discovery.open-cluster-management.io/v1 kind: DiscoveredCluster metadata: name: {{ USDdc.metadata.name }} namespace: {{ USDdc.metadata.namespace }} spec: importAsManagedCluster: true {{- end }} {{- end }} - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-rosa-managedcluster-status spec: remediationAction: enforce severity: low object-templates-raw: | {{- /* Use the same DiscoveredCluster list to check ManagedCluster status */ -}} {{- range USDdc := (lookup \"discovery.open-cluster-management.io/v1\" \"DiscoveredCluster\" \"\" \"\").items }} {{- /* Check for the flag that indicates the import should be skipped */ -}} {{- USDskip := \"false\" -}} {{- range USDkey, USDvalue := USDdc.metadata.annotations }} {{- if and (eq USDkey \"discovery.open-cluster-management.io/previously-auto-imported\") (eq USDvalue \"true\") }} {{- USDskip = \"true\" }} {{- end }} {{- end }} {{- /* if the type is ROSA and the status is Active */ -}} {{- if and (eq USDdc.spec.status \"Active\") (contains (fromConfigMap \"open-cluster-management-global-set\" \"discovery-config\" \"rosa-filter\") USDdc.spec.displayName) (eq USDdc.spec.type \"ROSA\") (eq USDskip \"false\") }} - complianceType: musthave objectDefinition: apiVersion: cluster.open-cluster-management.io/v1 kind: 
ManagedCluster metadata: name: {{ USDdc.spec.displayName }} namespace: {{ USDdc.spec.displayName }} status: conditions: - type: ManagedClusterConditionAvailable status: \"True\" {{- end }} {{- end }}",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: Placement metadata: name: placement-openshift-plus-hub spec: predicates: - requiredClusterSelector: labelSelector: matchExpressions: - key: name operator: In values: - local-cluster",
"apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-rosa-autoimport placementRef: apiGroup: cluster.open-cluster-management.io kind: Placement name: placement-policy-rosa-autoimport subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-rosa-autoimport",
"get policies.policy.open-cluster-management.io policy-rosa-autoimport -n <namespace>",
"{{ if .Spec.Proxy }} proxy: {{ .Spec.Proxy | toYaml | indent 4 }} {{ end }}",
"name: \"{{ .SpecialVars.CurrentNode.HostName }}\" namespace: \"{{ .Spec.ClusterName }}\"",
"apiVersion: v1 data: AgentClusterInstall: |- siteconfig.open-cluster-management.io/sync-wave: \"1\" ClusterDeployment: |- siteconfig.open-cluster-management.io/sync-wave: \"1\" InfraEnv: |- siteconfig.open-cluster-management.io/sync-wave: \"2\" KlusterletAddonConfig: |- siteconfig.open-cluster-management.io/sync-wave: \"3\" ManagedCluster: |- siteconfig.open-cluster-management.io/sync-wave: \"3\" kind: ConfigMap metadata: name: assisted-installer-templates namespace: example-namespace",
"apiVersion: siteconfig.open-cluster-management.io/v1alpha1 kind: ClusterInstance metadata: name: \"example-sno\" namespace: \"example-sno\" spec: [...] clusterName: \"example-sno\" extraAnnotations: 1 ClusterDeployment: myClusterAnnotation: success extraLabels: 2 ManagedCluster: common: \"true\" group-du: \"\" nodes: - hostName: \"example-sno.example.redhat.com\" role: \"master\" extraAnnotations: 3 BareMetalHost: myNodeAnnotation: success extraLabels: 4 BareMetalHost: \"testExtraLabel\": \"success\"",
"get managedclusters example-sno -ojsonpath='{.metadata.labels}' | jq",
"{ \"common\": \"true\", \"group-du\": \"\", }",
"get bmh example-sno.example.redhat.com -n example-sno -ojsonpath='{.metadata.annotations}' | jq",
"{ \"myNodeAnnotation\": \"success\", }",
"patch multiclusterhubs.operator.open-cluster-management.io multiclusterhub -n rhacm --type json --patch '[{\"op\": \"add\", \"path\":\"/spec/overrides/components/-\", \"value\": {\"name\":\"siteconfig\",\"enabled\": true}}]'",
"-n rhacm get po | grep siteconfig",
"siteconfig-controller-manager-6fdd86cc64-sdg87 2/2 Running 0 43s",
"-n rhacm get cm",
"NAME DATA AGE ai-cluster-templates-v1 5 97s ai-node-templates-v1 2 97s ibi-cluster-templates-v1 3 97s ibi-node-templates-v1 3 97s",
"apiVersion: v1 kind: Namespace metadata: name: example-sno",
"apply -f clusterinstance-namespace.yaml",
"apiVersion: v1 kind: Secret metadata: name: pull-secret namespace: example-sno 1 data: .dockerconfigjson: <encoded_docker_configuration> 2 type: kubernetes.io/dockerconfigjson",
"apply -f pull-secret.yaml",
"apiVersion: v1 data: password: <password> username: <username> kind: Secret metadata: name: example-bmh-secret namespace: \"example-sno\" 1 type: Opaque",
"apply -f example-bmc-secret.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: enable-crun namespace: example-sno 1 data: enable-crun-master.yaml: | apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-master spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/master: \"\" containerRuntimeConfig: defaultRuntime: crun enable-crun-worker.yaml: | apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" containerRuntimeConfig: defaultRuntime: crun",
"apply -f enable-crun.yaml",
"apiVersion: siteconfig.open-cluster-management.io/v1alpha1 kind: ClusterInstance metadata: name: \"example-clusterinstance\" namespace: \"example-sno\" 1 spec: holdInstallation: false extraManifestsRefs: 2 - name: extra-machine-configs - name: enable-crun pullSecretRef: name: \"pull-secret\" 3 [...] clusterName: \"example-sno\" 4 [...] clusterImageSetNameRef: \"img4.17-x86-64\" [...] templateRefs: 5 - name: ibi-cluster-templates-v1 namespace: rhacm [...] nodes: [...] bmcCredentialsName: 6 name: \"example-bmh-secret\" [...] templateRefs: 7 - name: ibi-node-templates-v1 namespace: rhacm [...]",
"apply -f clusterinstance-ibi.yaml",
"get clusterinstance <cluster_name> -n <target_namespace> -o yaml",
"message: Applied site config manifests reason: Completed status: \"True\" type: RenderedTemplatesApplied",
"get clusterinstance <cluster_name> -n <target_namespace> -o jsonpath='{.status.manifestsRendered}'",
"delete clusterinstance <cluster_name> -n <target_namespace>",
"get clusterinstance <cluster_name> -n <target_namespace>",
"Error from server (NotFound): clusterinstances.siteconfig.open-cluster-management.io \"<cluster_name>\" not found",
"apiVersion: v1 kind: ConfigMap metadata: name: my-custom-secret namespace: rhacm data: MySecret: |- apiVersion: v1 kind: Secret metadata: name: \"{{ .Spec.ClusterName }}-my-custom-secret-key\" namespace: \"clusters\" annotations: siteconfig.open-cluster-management.io/sync-wave: \"1\" 1 type: Opaque data: key: <key>",
"apply -f my-custom-secret.yaml",
"spec: templateRefs: - name: ai-cluster-templates-v1.yaml namespace: rhacm - name: my-custom-secret.yaml namespace: rhacm",
"apply -f clusterinstance-my-custom-secret.yaml",
"spec: nodes: - hostName: \"worker-node2.example.com\" role: \"worker\" ironicInspect: \"\" extraAnnotations: BareMetalHost: bmac.agent-install.openshift.io/remove-agent-and-node-on-delete: \"true\"",
"apply -f <clusterinstance>.yaml",
"get bmh -n <clusterinstance_namespace> worker-node2.example.com -ojsonpath='{.metadata.annotations}' | jq",
"{ \"baremetalhost.metal3.io/detached\": \"assisted-service-controller\", \"bmac.agent-install.openshift.io/hostname\": \"worker-node2.example.com\", \"bmac.agent-install.openshift.io/remove-agent-and-node-on-delete\": \"true\" \"bmac.agent-install.openshift.io/role\": \"master\", \"inspect.metal3.io\": \"disabled\", \"siteconfig.open-cluster-management.io/sync-wave\": \"1\", }",
"spec: nodes: - hostName: \"worker-node2.example.com\" pruneManifests: - apiVersion: metal3.io/v1alpha1 kind: BareMetalHost",
"apply -f <clusterinstance>.yaml",
"get bmh -n <clusterinstance_namespace> --watch --kubeconfig <hub_cluster_kubeconfig_filename>",
"NAME STATE CONSUMER ONLINE ERROR AGE master-node1.example.com provisioned true 81m worker-node2.example.com deprovisioning true 44m worker-node2.example.com powering off before delete true 20h worker-node2.example.com deleting true 50m",
"get agents -n <clusterinstance_namespace> --kubeconfig <hub_cluster_kubeconfig_filename>",
"NAME CLUSTER APPROVED ROLE STAGE master-node1.example.com <managed_cluster_name> true master Done master-node2.example.com <managed_cluster_name> true master Done master-node3.example.com <managed_cluster_name> true master Done worker-node1.example.com <managed_cluster_name> true worker Done",
"get nodes --kubeconfig <managed_cluster_kubeconfig_filename>",
"NAME STATUS ROLES AGE VERSION worker-node2.example.com NotReady,SchedulingDisabled worker 19h v1.30.5 worker-node1.example.com Ready worker 19h v1.30.5 master-node1.example.com Ready control-plane,master 19h v1.30.5 master-node2.example.com Ready control-plane,master 19h v1.30.5 master-node3.example.com Ready control-plane,master 19h v1.30.5",
"spec: nodes: - hostName: \"<host_name>\" role: \"worker\" templateRefs: - name: ai-node-templates-v1 namespace: rhacm bmcAddress: \"<bmc_address>\" bmcCredentialsName: name: \"<bmc_credentials_name>\" bootMACAddress: \"<boot_mac_address>\"",
"apply -f <clusterinstance>.yaml",
"get bmh -n <clusterinstance_namespace> --watch --kubeconfig <hub_cluster_kubeconfig_filename>",
"NAME STATE CONSUMER ONLINE ERROR AGE master-node1.example.com provisioned true 81m worker-node2.example.com provisioning true 44m",
"get agents -n <clusterinstance_namespace> --kubeconfig <hub_cluster_kubeconfig_filename>",
"NAME CLUSTER APPROVED ROLE STAGE master-node1.example.com <managed_cluster_name> true master Done master-node2.example.com <managed_cluster_name> true master Done master-node3.example.com <managed_cluster_name> true master Done worker-node1.example.com <managed_cluster_name> false worker worker-node2.example.com <managed_cluster_name> true worker Starting installation worker-node2.example.com <managed_cluster_name> true worker Installing worker-node2.example.com <managed_cluster_name> true worker Writing image to disk worker-node2.example.com <managed_cluster_name> true worker Waiting for control plane worker-node2.example.com <managed_cluster_name> true worker Rebooting worker-node2.example.com <managed_cluster_name> true worker Joined worker-node2.example.com <managed_cluster_name> true worker Done",
"get nodes --kubeconfig <managed_cluster_kubeconfig_filename>",
"NAME STATUS ROLES AGE VERSION worker-node2.example.com Ready worker 1h v1.30.5 worker-node1.example.com Ready worker 19h v1.30.5 master-node1.example.com Ready control-plane,master 19h v1.30.5 master-node2.example.com Ready control-plane,master 19h v1.30.5 master-node3.example.com Ready control-plane,master 19h v1.30.5"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html-single/multicluster_engine_operator_with_red_hat_advanced_cluster_management/index |
5.7. Active Directory Trust for Legacy Linux Clients
Linux clients running Red Hat Enterprise Linux with SSSD version 1.8 or earlier ( legacy clients ) do not provide native support for IdM cross-forest trusts with Active Directory. Therefore, for AD users to be able to access services provided by the IdM server, the legacy Linux clients and the IdM server have to be properly configured. Instead of using SSSD version 1.9 or later to communicate with the IdM server to obtain LDAP information, legacy clients use other utilities for this purpose, for example nss_ldap , nss-pam-ldapd , or SSSD version 1.8 or earlier.
Clients running the following versions of Red Hat Enterprise Linux do not use SSSD 1.9 and are therefore considered to be legacy clients:
Red Hat Enterprise Linux 5.7 or later
Red Hat Enterprise Linux 6.0 - 6.3
Important
Do not use the configuration described in this section for non-legacy clients, that is, clients running SSSD version 1.9 or later. SSSD 1.9 or later provides native support for IdM cross-forest trusts with AD, meaning AD users can properly access services on IdM clients without any additional configuration.
When a legacy client joins the domain of an IdM server in a trust relationship with AD, a compat LDAP tree provides the required user and group data to AD users. However, the compat tree enables the AD users to access only a limited number of IdM services.
Legacy clients do not provide access to the following services:
Kerberos authentication
host-based access control (HBAC)
SELinux user mapping
sudo rules
Access to the following services is provided even for legacy clients:
information look-up
password authentication
5.7.1. Server-side Configuration for AD Trust for Legacy Clients
Make sure the IdM server meets the following configuration requirements:
The ipa-server package for IdM and the ipa-server-trust-ad package for the IdM trust add-on have been installed.
The ipa-server-install utility has been run to set up the IdM server.
The ipa-adtrust-install --enable-compat command has been run, which ensures that the IdM server supports trusts with AD domains and that the compat LDAP tree is available. If you have already run ipa-adtrust-install without the --enable-compat option in the past, run it again, this time adding --enable-compat .
The ipa trust-add ad.example.org command has been run to establish the AD trust.
If the host-based access control (HBAC) allow_all rule is disabled, enable the system-auth service on the IdM server, which allows authentication of the AD users. You can determine the current status of allow_all directly from the command line using the ipa hbacrule-show command. If the rule is disabled, Enabled: FALSE is displayed in the output:
Note
For information on disabling and enabling HBAC rules, see Configuring Host-Based Access Control in the Linux Domain Identity, Authentication, and Policy Guide .
To enable system-auth on the IdM server, create an HBAC service named system-auth and add an HBAC rule using this service to grant access to IdM masters; a minimal sketch of these commands is shown at the end of this section. Adding HBAC services and rules is described in the Configuring Host-Based Access Control section in the Linux Domain Identity, Authentication, and Policy Guide . Note that HBAC services are PAM service names; if you add a new PAM service, make sure to create an HBAC service with the same name and then grant access to this service through HBAC rules.
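As a minimal sketch only, the following commands show one possible way to create the system-auth HBAC service and a matching HBAC rule. The rule name allow_systemauth and the host name idm.example.com are example placeholders chosen for this illustration, not values mandated by this guide:
[user@server ~]$ kinit admin
[user@server ~]$ ipa hbacsvc-add system-auth
[user@server ~]$ ipa hbacrule-add allow_systemauth --usercat=all
[user@server ~]$ ipa hbacrule-add-service allow_systemauth --hbacsvcs=system-auth
[user@server ~]$ ipa hbacrule-add-host allow_systemauth --hosts=idm.example.com
Using --usercat=all is only one choice; you can instead add specific users or groups with ipa hbacrule-add-user, and you can repeat ipa hbacrule-add-host for each IdM master that should accept the service.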
5.7.2. Client-side Configuration Using the ipa-advise Utility
The ipa-advise utility provides the configuration instructions to set up a legacy client for an AD trust.
To display the complete list of scenarios for which ipa-advise can provide configuration instructions, run ipa-advise without any options. Running ipa-advise prints the names of all available sets of configuration instructions along with the descriptions of what each set does and when it is recommended to be used. To display a set of instructions, run the ipa-advise utility with an instruction set as a parameter:
You can configure a Linux client using the ipa-advise utility by running the displayed instructions as a shell script or by executing the instructions manually.
To run the instructions as a shell script:
Create the script file.
Add execute permissions to the file using the chmod utility.
Copy the script to the client using the scp utility.
Run the script on the client.
Important
Always read and review the script file carefully before you run it on the client.
To configure the client manually, follow and execute the instructions displayed by ipa-advise from the command line. A consolidated sketch of the scripted workflow follows the command listing below. | [
"[user@server ~]USD kinit admin [user@server ~]USD ipa hbacrule-show allow_all Rule name: allow_all User category: all Host category: all Service category: all Description: Allow all users to access any host from any host Enabled: FALSE",
"ipa-advise config-redhat-nss-ldap : Instructions for configuring a system with nss-ldap as a IPA client. This set of instructions is targeted for platforms that include the authconfig utility, which are all Red Hat based platforms. config-redhat-nss-pam-ldapd : Instructions for configuring a system (...)",
"ipa-advise config-redhat-nss-ldap #!/bin/sh ---------------------------------------------------------------------- Instructions for configuring a system with nss-ldap as a IPA client. This set of instructions is targeted for platforms that include the authconfig utility, which are all Red Hat based platforms. ---------------------------------------------------------------------- Schema Compatibility plugin has not been configured on this server. To configure it, run \"ipa-adtrust-install --enable-compat\" Install required packages via yum install -y wget openssl nss_ldap authconfig NOTE: IPA certificate uses the SHA-256 hash function. SHA-256 was introduced in RHEL5.2. Therefore, clients older than RHEL5.2 will not be able to interoperate with IPA server 3.x. Please note that this script assumes /etc/openldap/cacerts as the default CA certificate location. If this value is different on your system the script needs to be modified accordingly. Download the CA certificate of the IPA server mkdir -p -m 755 /etc/openldap/cacerts wget http://idm.example.com/ipa/config/ca.crt -O /etc/openldap/cacerts/ca.crt (...)",
"ipa-advise config-redhat-nss-ldap > setup_script.sh",
"chmod +x setup_script.sh",
"scp setup_script.sh root@ client",
"./ setup_script.sh"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/windows_integration_guide/trust-legacy |
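Tying together the steps in Section 5.7.2, the following is a minimal consolidated sketch of the scripted workflow, assuming the config-redhat-nss-ldap instruction set and a legacy client reachable as client.example.com (both are example placeholders):
# On the IdM server: save the displayed instructions as a script
ipa-advise config-redhat-nss-ldap > setup_script.sh
chmod +x setup_script.sh
# Review the script carefully before distributing it
less setup_script.sh
# Copy the script to the legacy client and run it there
scp setup_script.sh root@client.example.com:
ssh root@client.example.com ./setup_script.sh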
OpenShift Virtualization | OpenShift Virtualization OpenShift Container Platform 4.7 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team | [
"metadata: annotations: cdi.kubevirt.io/storage.bind.immediate.requested: \"true\"",
"oc edit configmap kubevirt-config -n openshift-cnv",
"kind: ConfigMap metadata: name: kubevirt-config data: default-cpu-model: \"<cpu-model>\" 1",
"ovirt-aaa-jdbc-tool user unlock admin",
"Memory overhead per infrastructure node ~ 150 MiB",
"Memory overhead per worker node ~ 360 MiB",
"Memory overhead per virtual machine ~ (1.002 * requested memory) + 146 MiB + 8 MiB * (number of vCPUs) \\ 1 + 16 MiB * (number of graphics devices) 2",
"CPU overhead for infrastructure nodes ~ 4 cores",
"CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine",
"Aggregated storage overhead per node ~ 10 GiB",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: \"stable\" config: 1",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: 1 workloads: nodePlacement:",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: 1",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: \"stable\" config: nodeSelector: example.io/example-infra-key: example-infra-value",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: \"stable\" config: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: nodeSelector: example.io/example-infra-key: example-infra-value workloads: nodePlacement: nodeSelector: example.io/example-workloads-key: example-workloads-value",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: infra: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-infra-key operator: In values: - example-infra-value workloads: nodePlacement: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: example.io/example-workloads-key operator: In values: - example-workloads-value preferredDuringSchedulingIgnoredDuringExecution: - weight: 1 preference: matchExpressions: - key: example.io/num-cpus operator: gt values: - 8",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: workloads: nodePlacement: tolerations: - key: \"key\" operator: \"Equal\" value: \"virtualization\" effect: \"NoSchedule\"",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"</path/to/backing/directory>\" useNamingPrefix: false workload: nodeSelector: example.io/example-workloads-key: example-workloads-value",
"apiVersion: v1 kind: Namespace metadata: name: openshift-cnv --- apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: kubevirt-hyperconverged-group namespace: openshift-cnv spec: targetNamespaces: - openshift-cnv --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: hco-operatorhub namespace: openshift-cnv spec: source: redhat-operators sourceNamespace: openshift-marketplace name: kubevirt-hyperconverged startingCSV: kubevirt-hyperconverged-operator.v2.6.10 channel: \"stable\" 1",
"oc apply -f <file name>.yaml",
"apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:",
"oc apply -f <file_name>.yaml",
"watch oc get csv -n openshift-cnv",
"NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v2.6.10 OpenShift Virtualization 2.6.10 Succeeded",
"tar -xvf <virtctl-version-distribution.arch>.tar.gz",
"chmod +x <virtctl-file-name>",
"echo USDPATH",
"subscription-manager repos --enable <repository>",
"yum install kubevirt-virtctl",
"oc delete apiservices v1alpha3.subresources.kubevirt.io -n openshift-cnv",
"oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv",
"oc delete subscription kubevirt-hyperconverged -n openshift-cnv",
"CSV_NAME=USD(oc get csv -n openshift-cnv -o=custom-columns=:metadata.name)",
"oc delete csv USD{CSV_NAME} -n openshift-cnv",
"clusterserviceversion.operators.coreos.com \"kubevirt-hyperconverged-operator.v2.6.10\" deleted",
"oc get csv -n openshift-cnv",
"oc get csv",
"VERSION REPLACES PHASE 2.5.0 kubevirt-hyperconverged-operator.v2.4.3 Installing 2.4.3 Replacing",
"oc get hco -n openshift-cnv kubevirt-hyperconverged -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'",
"ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully",
"oc get scc kubevirt-controller -o yaml",
"oc get clusterrole kubevirt-controller -o yaml",
"virtctl image-upload -h",
"apiVersion: kubevirt.io/v1 kind: VirtualMachine metadata: labels: app: <vm_name> 1 name: <vm_name> spec: dataVolumeTemplates: - apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <vm_name> spec: sourceRef: kind: DataSource name: rhel9 namespace: openshift-virtualization-os-images storage: resources: requests: storage: 30Gi running: false template: metadata: labels: kubevirt.io/domain: <vm_name> spec: domain: cpu: cores: 1 sockets: 2 threads: 1 devices: disks: - disk: bus: virtio name: rootdisk - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default rng: {} features: smm: enabled: true firmware: bootloader: efi: {} resources: requests: memory: 8Gi evictionStrategy: LiveMigrate networks: - name: default pod: {} volumes: - dataVolume: name: <vm_name> name: rootdisk - cloudInitNoCloud: userData: |- #cloud-config user: cloud-user password: '<password>' 2 chpasswd: { expire: False } name: cloudinitdisk",
"oc create -f <vm_manifest_file>.yaml",
"virtctl start <vm_name>",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine spec: RunStrategy: Always 1 template:",
"oc edit <object_type> <object_ID>",
"oc apply <object_type> <object_ID>",
"oc edit vm example",
"disks: - bootOrder: 1 1 disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk - cdrom: bus: virtio name: cd-drive-1 interfaces: - boot Order: 2 2 macAddress: '02:96:c4:00:00' masquerade: {} name: default",
"oc delete vm <vm_name>",
"oc get vmis",
"oc delete vmi <vmi_name>",
"remmina --connect /path/to/console.rdp",
"virtctl expose vm <fedora-vm> --port=22 --name=fedora-vm-ssh --type=NodePort 1",
"oc get svc",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE fedora-vm-ssh NodePort 127.0.0.1 <none> 22:32551/TCP 6s",
"ssh username@<node_IP_address> -p 32551",
"virtctl console <VMI>",
"virtctl vnc <VMI>",
"virtctl vnc <VMI> -v 4",
"oc login -u <user> https://<cluster.example.com>:8443",
"oc describe vmi <windows-vmi-name>",
"spec: networks: - name: default pod: {} - multus: networkName: cnv-bridge name: bridge-net status: interfaces: - interfaceName: eth0 ipAddress: 198.51.100.0/24 ipAddresses: 198.51.100.0/24 mac: a0:36:9f:0f:b1:70 name: default - interfaceName: eth1 ipAddress: 192.0.2.0/24 ipAddresses: 192.0.2.0/24 2001:db8::/32 mac: 00:17:a4:77:77:25 name: bridge-net",
"oc adm cordon <node_name>",
"oc adm drain <node_name> --force=true",
"oc delete node <node_name>",
"oc get vmis",
"yum install -y qemu-guest-agent",
"systemctl enable --now qemu-guest-agent",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"oc edit vm <vm-name>",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 1 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"oc edit vm <vm-name>",
"spec: domain: devices: disks: - name: virtiocontainerdisk bootOrder: 2 cdrom: bus: sata volumes: - containerDisk: image: container-native-virtualization/virtio-win name: virtiocontainerdisk",
"kubevirt_vm: namespace: name: cpu_cores: memory: disks: - name: volume: containerDisk: image: disk: bus:",
"kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio",
"kubevirt_vm: namespace: default name: vm1 state: running 1 cpu_cores: 1",
"ansible-playbook create-vm.yaml",
"(...) TASK [Create my first VM] ************************************************************************ changed: [localhost] PLAY RECAP ******************************************************************************************************** localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0",
"ansible-playbook create-vm.yaml",
"--- - name: Ansible Playbook 1 hosts: localhost connection: local tasks: - name: Create my first VM kubevirt_vm: namespace: default name: vm1 cpu_cores: 1 memory: 64Mi disks: - name: containerdisk volume: containerDisk: image: kubevirt/cirros-container-disk-demo:latest disk: bus: virtio",
"apiversion: kubevirt.io/v1 kind: VirtualMachineInstance metadata: labels: special: vmi-secureboot name: vmi-secureboot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk features: acpi: {} smm: enabled: true 1 firmware: bootloader: efi: secureBoot: true 2",
"oc create -f <file_name>.yaml",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: pxe-net-conf spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"pxe-net-conf\", \"plugins\": [ { \"type\": \"cnv-bridge\", \"bridge\": \"br1\", \"vlan\": 1 1 }, { \"type\": \"cnv-tuning\" 2 } ] }'",
"oc create -f pxe-net-conf.yaml",
"interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1",
"devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2",
"networks: - name: default pod: {} - name: pxe-net multus: networkName: pxe-net-conf",
"oc create -f vmi-pxe-boot.yaml",
"virtualmachineinstance.kubevirt.io \"vmi-pxe-boot\" created",
"oc get vmi vmi-pxe-boot -o yaml | grep -i phase phase: Running",
"virtctl vnc vmi-pxe-boot",
"virtctl console vmi-pxe-boot",
"ip addr",
"3. eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether de:00:00:00:00:de brd ff:ff:ff:ff:ff:ff",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachineInstance metadata: creationTimestamp: null labels: special: vmi-pxe-boot name: vmi-pxe-boot spec: domain: devices: disks: - disk: bus: virtio name: containerdisk bootOrder: 2 - disk: bus: virtio name: cloudinitdisk interfaces: - masquerade: {} name: default - bridge: {} name: pxe-net macAddress: de:00:00:00:00:de bootOrder: 1 machine: type: \"\" resources: requests: memory: 1024M networks: - name: default pod: {} - multus: networkName: pxe-net-conf name: pxe-net terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - cloudInitNoCloud: userData: | #!/bin/bash echo \"fedora\" | passwd fedora --stdin name: cloudinitdisk status: {}",
"kind: VirtualMachine spec: template: domain: resources: requests: memory: 1024M memory: guest: 2048M",
"oc create -f <file_name>.yaml",
"kind: VirtualMachine spec: template: domain: resources: overcommitGuestOverhead: true requests: memory: 1024M",
"oc create -f <file_name>.yaml",
"kind: VirtualMachine spec: domain: resources: requests: memory: \"4Gi\" 1 memory: hugepages: pageSize: \"1Gi\" 2",
"oc apply -f <virtual_machine>.yaml",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: name: myvmi spec: domain: cpu: features: - name: apic 1 policy: require 2",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachineInstance metadata: name: myvmi spec: domain: cpu: model: Conroe 1",
"apiVersion: kubevirt/v1alpha3 kind: VirtualMachineInstance metadata: name: myvmi spec: domain: cpu: model: host-model 1",
"oc get ns",
"oc create configmap <configmap-name> --from-file=</path/to/file/ca.pem>",
"apiVersion: v1 kind: ConfigMap metadata: name: tls-certs data: ca.pem: | -----BEGIN CERTIFICATE----- ... <base64 encoded cert> -----END CERTIFICATE-----",
"apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3",
"oc apply -f endpoint-secret.yaml",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume name: vm-fedora-datavolume 1 spec: dataVolumeTemplates: - metadata: creationTimestamp: null name: fedora-dv 2 spec: storage: resources: requests: storage: 10Gi storageClassName: local source: http: 3 url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 4 secretRef: endpoint-secret 5 certConfigMap: \"\" 6 status: {} running: true template: metadata: creationTimestamp: null labels: kubevirt.io/vm: vm-fedora-datavolume spec: domain: devices: disks: - disk: bus: virtio name: datavolumedisk1 machine: type: \"\" resources: requests: memory: 1.5Gi terminationGracePeriodSeconds: 60 volumes: - dataVolume: name: fedora-dv name: datavolumedisk1 status: {}",
"oc create -f vm-fedora-datavolume.yaml",
"oc get pods",
"oc describe dv fedora-dv 1",
"virtctl console vm-fedora-datavolume",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: v1 kind: Secret metadata: name: endpoint-secret 1 labels: app: containerized-data-importer type: Opaque data: accessKeyId: \"\" 2 secretKey: \"\" 3",
"oc apply -f endpoint-secret.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: import-pv-datavolume 1 spec: storageClassName: local 2 source: http: url: \"https://mirror.arizona.edu/fedora/linux/releases/35/Cloud/x86_64/images/Fedora-Cloud-Base-35-1.2.x86_64.qcow2\" 3 secretRef: endpoint-secret 4 storage: volumeMode: Block 5 resources: requests: storage: 10Gi",
"oc create -f import-pv-datavolume.yaml",
"openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: rhv-credentials namespace: default 1 type: Opaque stringData: ovirt: | apiUrl: <api_endpoint> 2 username: ocpadmin@internal password: 3 caCert: | -----BEGIN CERTIFICATE----- 4 -----END CERTIFICATE----- EOF",
"openssl s_client -connect :443 -showcerts < /dev/null",
"cat <<EOF | kubectl create -f - apiVersion: v2v.kubevirt.io/v1alpha1 kind: ResourceMapping metadata: name: resourcemapping_example namespace: default spec: ovirt: networkMappings: - source: name: <rhv_logical_network>/<vnic_profile> 1 target: name: <target_network> 2 type: pod storageMappings: 3 - source: name: <rhv_storage_domain> 4 target: name: <target_storage_class> 5 volumeMode: <volume_mode> 6 EOF",
"cat <<EOF | oc create -f - apiVersion: v2v.kubevirt.io/v1beta1 kind: VirtualMachineImport metadata: name: vm-import namespace: default spec: providerCredentialsSecret: name: rhv-credentials namespace: default resourceMapping: 1 name: resourcemapping-example namespace: default targetVmName: vm_example 2 startVm: true source: ovirt: vm: id: <source_vm_id> 3 name: <source_vm_name> 4 cluster: name: <source_cluster_name> 5 mappings: 6 networkMappings: - source: name: <source_logical_network>/<vnic_profile> 7 target: name: <target_network> 8 type: pod storageMappings: 9 - source: name: <source_storage_domain> 10 target: name: <target_storage_class> 11 accessMode: <volume_access_mode> 12 diskMappings: - source: id: <source_vm_disk_id> 13 target: name: <target_storage_class> 14 EOF",
"oc get vmimports vm-import -n default",
"status: conditions: - lastHeartbeatTime: \"2020-07-22T08:58:52Z\" lastTransitionTime: \"2020-07-22T08:58:52Z\" message: Validation completed successfully reason: ValidationCompleted status: \"True\" type: Valid - lastHeartbeatTime: \"2020-07-22T08:58:52Z\" lastTransitionTime: \"2020-07-22T08:58:52Z\" message: 'VM specifies IO Threads: 1, VM has NUMA tune mode specified: interleave' reason: MappingRulesVerificationReportedWarnings status: \"True\" type: MappingRulesVerified - lastHeartbeatTime: \"2020-07-22T08:58:56Z\" lastTransitionTime: \"2020-07-22T08:58:52Z\" message: Copying virtual machine disks reason: CopyingDisks status: \"True\" type: Processing dataVolumes: - name: fedora32-b870c429-11e0-4630-b3df-21da551a48c0 targetVmName: fedora32",
"<os> <type>rhel_8x64</type> </os>",
"oc get templates -n openshift --show-labels | tr ',' '\\n' | grep os.template.kubevirt.io | sed -r 's#os.template.kubevirt.io/(.*)=.*#\\1#g' | sort -u",
"fedora31 fedora32 rhel8.1 rhel8.2",
"cat <<EOF | oc create -f - apiVersion: v1 kind: ConfigMap metadata: name: os-configmap namespace: default 1 data: guestos2common: | \"Red Hat Enterprise Linux Server\": \"rhel\" \"CentOS Linux\": \"centos\" \"Fedora\": \"fedora\" \"Ubuntu\": \"ubuntu\" \"openSUSE\": \"opensuse\" osinfo2common: | \"<rhv-operating-system>\": \"<vm-template>\" 2 EOF",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: ConfigMap metadata: name: os-configmap namespace: default data: osinfo2common: | \"other_linux\": \"fedora31\" EOF",
"oc get cm -n default os-configmap -o yaml",
"oc patch configmap vm-import-controller-config -n openshift-cnv --patch '{ \"data\": { \"osConfigMap.name\": \"os-configmap\", \"osConfigMap.namespace\": \"default\" 1 } }'",
"oc get pods -n <namespace> | grep import 1",
"vm-import-controller-f66f7d-zqkz7 1/1 Running 0 4h49m",
"oc logs <vm-import-controller-f66f7d-zqkz7> -f -n <namespace> 1",
"Failed to bind volumes: provisioning failed for PVC",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc get nodes",
"oc debug nodes/<node_name>",
"sh-4.2# chroot /host",
"sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443",
"sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000",
"Login Succeeded!",
"sh-4.2# podman pull name.io/image",
"sh-4.2# podman tag name.io/image image-registry.openshift-image-registry.svc:5000/openshift/image",
"sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/image",
"oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge",
"HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')",
"podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1",
"oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>",
"spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls",
"oc create configmap registry-cas -n openshift-config --from-file=myregistry.corp.com..5000=/etc/docker/certs.d/myregistry.corp.com:5000/ca.crt --from-file=otherregistry.com=/etc/docker/certs.d/otherregistry.com/ca.crt",
"oc patch image.config.openshift.io/cluster --patch '{\"spec\":{\"additionalTrustedCA\":{\"name\":\"registry-cas\"}}}' --type=merge",
"oc create secret generic <pull_secret_name> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<user_name> --docker-password=<password> --docker-email=<email>",
"oc secrets link default <pull_secret_name> --for=pull",
"mkdir /tmp/<dir_name> && cd /tmp/<dir_name>",
"tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz",
"cat > Dockerfile <<EOF FROM busybox:latest COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib RUN mkdir -p /opt ENTRYPOINT [\"cp\", \"-r\", \"/vmware-vix-disklib-distrib\", \"/opt\"] EOF",
"podman build . -t <registry_route_or_server_path>/vddk:<tag> 1",
"podman push <registry_route_or_server_path>/vddk:<tag>",
"oc edit configmap v2v-vmware -n openshift-cnv",
"data: vddk-init-image: <registry_route_or_server_path>/vddk:<tag>",
"mv vmnic0 ifcfg-eth0 1",
"NAME=eth0 DEVICE=eth0",
"systemctl restart network",
"oc get pods -n <namespace> | grep v2v 1",
"kubevirt-v2v-conversion-f66f7d-zqkz7 1/1 Running 0 4h49m",
"oc logs <kubevirt-v2v-conversion-f66f7d-zqkz7> -f -n <namespace> 1",
"INFO - have error: ('virt-v2v error: internal error: invalid argument: libvirt domain 'v2v_migration_vm_1' is running or paused. It must be shut down in order to perform virt-v2v conversion',)\"",
"Could not load config map vmware-to-kubevirt-os in kube-public namespace Restricted Access: configmaps \"vmware-to-kubevirt-os\" is forbidden: User cannot get resource \"configmaps\" in API group \"\" in the namespace \"kube-public\"",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: <datavolume-cloner> 1 rules: - apiGroups: [\"cdi.kubevirt.io\"] resources: [\"datavolumes/source\"] verbs: [\"*\"]",
"oc create -f <datavolume-cloner.yaml> 1",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <allow-clone-to-user> 1 namespace: <Source namespace> 2 subjects: - kind: ServiceAccount name: default namespace: <Destination namespace> 3 roleRef: kind: ClusterRole name: datavolume-cloner 4 apiGroup: rbac.authorization.k8s.io",
"oc create -f <datavolume-cloner.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4",
"oc create -f <cloner-datavolume>.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: \"example-clone-dv\" spec: source: pvc: name: source-pvc namespace: example-ns pvc: accessModes: - ReadWriteOnce resources: requests: storage: \"1G\"",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: labels: kubevirt.io/vm: vm-dv-clone name: vm-dv-clone 1 spec: running: false template: metadata: labels: kubevirt.io/vm: vm-dv-clone spec: domain: devices: disks: - disk: bus: virtio name: root-disk resources: requests: memory: 64M volumes: - dataVolume: name: favorite-clone name: root-disk dataVolumeTemplates: - metadata: name: favorite-clone spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi source: pvc: namespace: \"source-namespace\" name: \"my-favorite-vm-disk\"",
"oc create -f <vm-clone-datavolumetemplate>.yaml",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: labels: kubevirt.io/vm: example-vm name: example-vm spec: dataVolumeTemplates: - metadata: name: example-dv spec: pvc: accessModes: - ReadWriteOnce resources: requests: storage: 1G source: http: url: \"\" 1 running: false template: metadata: labels: kubevirt.io/vm: example-vm spec: domain: cpu: cores: 1 devices: disks: - disk: bus: virtio name: example-dv-disk machine: type: q35 resources: requests: memory: 1G terminationGracePeriodSeconds: 0 volumes: - dataVolume: name: example-dv name: example-dv-disk",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 4 volumeMode: Block 5",
"oc create -f <cloner-datavolume>.yaml",
"kind: VirtualMachine spec: domain: devices: interfaces: - name: default masquerade: {} 1 ports: - port: 80 2 networks: - name: default pod: {}",
"oc create -f <vm-name>.yaml",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: name: vm-ephemeral namespace: example-namespace spec: running: false template: metadata: labels: special: key 1 spec: domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - masquerade: {} name: default resources: requests: memory: 1024M networks: - name: default pod: {} volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: userData: | #!/bin/bash echo \"fedora\" | passwd fedora --stdin",
"apiVersion: v1 kind: Service metadata: name: vmservice 1 namespace: example-namespace 2 spec: ports: - port: 27017 protocol: TCP targetPort: 22 3 selector: special: key 4 type: ClusterIP 5",
"oc create -f <service_name>.yaml",
"oc get service -n example-namespace",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE vmservice ClusterIP 172.30.3.149 <none> 27017/TCP 2m",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: name: vm-connect namespace: example-namespace spec: running: false template: spec: domain: devices: disks: - name: containerdisk disk: bus: virtio - name: cloudinitdisk disk: bus: virtio interfaces: - masquerade: {} name: default resources: requests: memory: 1024M networks: - name: default pod: {} volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-container-disk-demo - name: cloudinitdisk cloudInitNoCloud: userData: | #!/bin/bash echo \"fedora\" | passwd fedora --stdin",
"oc create -f <file.yaml>",
"virtctl -n example-namespace console <new-vm-name>",
"ssh [email protected] -p 27017",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: desiredState: interfaces: - name: br1 2 description: Linux bridge with eth1 as a port 3 type: linux-bridge 4 state: up 5 ipv4: enabled: false 6 bridge: options: stp: enabled: false 7 port: - name: eth1 8",
"apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: <bridge-network> 1 annotations: k8s.v1.cni.cncf.io/resourceName: bridge.network.kubevirt.io/<bridge-interface> 2 spec: config: '{ \"cniVersion\": \"0.3.1\", \"name\": \"<bridge-network>\", 3 \"type\": \"cnv-bridge\", 4 \"bridge\": \"<bridge-interface>\", 5 \"macspoofchk\": true, 6 \"vlan\": 1 7 }'",
"oc create -f <network-attachment-definition.yaml> 1",
"oc get network-attachment-definition <bridge-network>",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: name: <example-vm> spec: template: spec: domain: devices: interfaces: - masquerade: {} name: <default> - bridge: {} name: <bridge-net> 1 networks: - name: <default> pod: {} - name: <bridge-net> 2 multus: networkName: <a-bridge-network> 3",
"oc apply -f <example-vm.yaml>",
"kind: VirtualMachine spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 dhcp4: true 2",
"kind: VirtualMachine spec: volumes: - cloudInitNoCloud: networkData: | version: 2 ethernets: eth1: 1 addresses: - 10.10.10.14/24 2",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: \"39824\" status: interfaces: 2 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: \"0000:18:00.0\" totalvfs: 8 vendor: 15b3 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: \"0000:18:00.1\" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: \"8086\" syncStatus: Succeeded",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 numVfs: <num> 7 nicSelector: 8 vendor: \"<vendor_code>\" 9 deviceID: \"<device_id>\" 10 pfNames: [\"<pf_name>\", ...] 11 rootDevices: [\"<pci_bus_id>\", \"...\"] 12 deviceType: vfio-pci 13 isRdma: false 14",
"oc create -f <name>-sriov-node-network.yaml",
"oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'",
"apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 linkState: <link_state> 7 maxTxRate: <max_tx_rate> 8 minTxRate: <min_rx_rate> 9 vlanQoS: <vlan_qos> 10 trust: \"<trust_vf>\" 11 capabilities: <capabilities> 12",
"oc create -f <name>-sriov-network.yaml",
"oc get net-attach-def -n <namespace>",
"kind: VirtualMachine spec: domain: devices: interfaces: - name: <default> 1 masquerade: {} 2 - name: <nic1> 3 sriov: {} networks: - name: <default> 4 pod: {} - name: <nic1> 5 multus: networkName: <sriov-network> 6",
"oc apply -f <vm-sriov.yaml> 1",
"oc describe vmi <vmi_name>",
"Interfaces: Interface Name: eth0 Ip Address: 10.244.0.37/24 Ip Addresses: 10.244.0.37/24 fe80::858:aff:fef4:25/64 Mac: 0a:58:0a:f4:00:25 Name: default Interface Name: v2 Ip Address: 1.1.1.7/24 Ip Addresses: 1.1.1.7/24 fe80::f4d9:70ff:fe13:9089/64 Mac: f6:d9:70:13:90:89 Interface Name: v1 Ip Address: 1.1.1.1/24 Ip Addresses: 1.1.1.1/24 1.1.1.2/24 1.1.1.4/24 2001:de7:0:f101::1/64 2001:db8:0:f101::1/64 fe80::1420:84ff:fe10:17aa/64 Mac: 16:20:84:10:17:aa",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io=allocate",
"oc label namespace <namespace1> <namespace2> mutatevirtualmachines.kubemacpool.io-",
"touch machineconfig.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: 50-set-selinux-for-hostpath-provisioner labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux chcon for hostpath provisioner Before=kubelet.service [Service] ExecStart=/usr/bin/chcon -Rt container_file_t <backing_directory_path> 1 [Install] WantedBy=multi-user.target enabled: true name: hostpath-provisioner.service",
"oc create -f machineconfig.yaml -n <namespace>",
"sudo chcon -t container_file_t -R <backing_directory_path>",
"touch hostpathprovisioner_cr.yaml",
"apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 kind: HostPathProvisioner metadata: name: hostpath-provisioner spec: imagePullPolicy: IfNotPresent pathConfig: path: \"<backing_directory_path>\" 1 useNamingPrefix: false 2 workload: 3",
"oc create -f hostpathprovisioner_cr.yaml -n openshift-cnv",
"touch storageclass.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: hostpath-provisioner 1 provisioner: kubevirt.io/hostpath-provisioner reclaimPolicy: Delete 2 volumeBindingMode: WaitForFirstConsumer 3",
"oc create -f storageclass.yaml",
"oc edit cdi",
"spec: config: filesystemOverhead: global: \"<new_global_value>\" 1 storageClass: <storage_class_name>: \"<new_value_for_this_storage_class>\" 2",
"oc get cdi -o yaml",
"oc edit cdi",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: CDI spec: config: podResourceRequirements: limits: cpu: \"4\" memory: \"1Gi\" requests: cpu: \"1\" memory: \"250Mi\"",
"oc get cdi -o yaml",
"oc edit cdi",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: CDI metadata: spec: config: preallocation: true 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: preallocated-datavolume spec: source: 1 pvc: preallocation: true 2",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2",
"oc create -f <upload-datavolume>.yaml",
"virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3",
"oc get dvs",
"dd if=/dev/zero of=<loop10> bs=100M count=20",
"losetup </dev/loop10>d3 <loop10> 1 2",
"kind: PersistentVolume apiVersion: v1 metadata: name: <local-block-pv10> annotations: spec: local: path: </dev/loop10> 1 capacity: storage: <2Gi> volumeMode: Block 2 storageClassName: local 3 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <node01> 4",
"oc create -f <local-block-pv10.yaml> 1",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <upload-datavolume> 1 spec: source: upload: {} pvc: accessModes: - ReadWriteOnce resources: requests: storage: <2Gi> 2",
"oc create -f <upload-datavolume>.yaml",
"virtctl image-upload dv <datavolume_name> \\ 1 --size=<datavolume_size> \\ 2 --image-path=</path/to/image> \\ 3",
"oc get dvs",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: name: my-vmsnapshot 1 spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2",
"oc create -f <my-vmsnapshot>.yaml",
"oc describe vmsnapshot <my-vmsnapshot>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineSnapshot metadata: creationTimestamp: \"2020-09-30T14:41:51Z\" finalizers: - snapshot.kubevirt.io/vmsnapshot-protection generation: 5 name: mysnap namespace: default resourceVersion: \"3897\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinesnapshots/my-vmsnapshot uid: 28eedf08-5d6a-42c1-969c-2eda58e2a78d spec: source: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm status: conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"False\" 1 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:42:03Z\" reason: Operation complete status: \"True\" 2 type: Ready creationTime: \"2020-09-30T14:42:03Z\" readyToUse: true 3 sourceUID: 355897f3-73a0-4ec4-83d3-3c2df9486f4f virtualMachineSnapshotContentName: vmsnapshot-content-28eedf08-5d6a-42c1-969c-2eda58e2a78d 4",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: name: my-vmrestore 1 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm 2 virtualMachineSnapshotName: my-vmsnapshot 3",
"oc create -f <my-vmrestore>.yaml",
"oc get vmrestore <my-vmrestore>",
"apiVersion: snapshot.kubevirt.io/v1alpha1 kind: VirtualMachineRestore metadata: creationTimestamp: \"2020-09-30T14:46:27Z\" generation: 5 name: my-vmrestore namespace: default ownerReferences: - apiVersion: kubevirt.io/v1alpha3 blockOwnerDeletion: true controller: true kind: VirtualMachine name: my-vm uid: 355897f3-73a0-4ec4-83d3-3c2df9486f4f resourceVersion: \"5512\" selfLink: /apis/snapshot.kubevirt.io/v1alpha1/namespaces/default/virtualmachinerestores/my-vmrestore uid: 71c679a8-136e-46b0-b9b5-f57175a6a041 spec: target: apiGroup: kubevirt.io kind: VirtualMachine name: my-vm virtualMachineSnapshotName: my-vmsnapshot status: complete: true 1 conditions: - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"False\" 2 type: Progressing - lastProbeTime: null lastTransitionTime: \"2020-09-30T14:46:28Z\" reason: Operation complete status: \"True\" 3 type: Ready deletedDataVolumes: - test-dv1 restoreTime: \"2020-09-30T14:46:28Z\" restores: - dataVolumeName: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 persistentVolumeClaim: restore-71c679a8-136e-46b0-b9b5-f57175a6a041-datavolumedisk1 volumeName: datavolumedisk1 volumeSnapshotName: vmsnapshot-28eedf08-5d6a-42c1-969c-2eda58e2a78d-volume-datavolumedisk1",
"oc delete vmsnapshot <my-vmsnapshot>",
"oc get vmsnapshot",
"kind: PersistentVolume apiVersion: v1 metadata: name: <destination-pv> 1 annotations: spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi 2 local: path: /mnt/local-storage/local/disk1 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - node01 4 persistentVolumeReclaimPolicy: Delete storageClassName: local volumeMode: Filesystem",
"oc get pv <destination-pv> -o yaml",
"spec: nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname 1 operator: In values: - node01 2",
"oc label pv <destination-pv> node=node01",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <clone-datavolume> 1 spec: source: pvc: name: \"<source-vm-disk>\" 2 namespace: \"<source-namespace>\" 3 pvc: accessModes: - ReadWriteOnce selector: matchLabels: node: node01 4 resources: requests: storage: <10Gi> 5",
"oc apply -f <clone-datavolume.yaml>",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: \"hostpath\" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"oc create -f <blank-image-datavolume>.yaml",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: blank-image-datavolume spec: source: blank: {} pvc: # Optional: Set the storage class or omit to accept the default # storageClassName: \"hostpath\" accessModes: - ReadWriteOnce resources: requests: storage: 500Mi",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: DataVolume metadata: name: <cloner-datavolume> 1 spec: source: pvc: namespace: \"<source-namespace>\" 2 name: \"<my-favorite-vm-disk>\" 3 pvc: accessModes: - ReadWriteMany resources: requests: storage: <2Gi> 4 volumeMode: Block 5",
"oc create -f <cloner-datavolume>.yaml",
"data: accessMode: ReadWriteOnce 1 volumeMode: Filesystem 2 <new>.accessMode: ReadWriteMany 3 <new>.volumeMode: Block 4",
"oc edit configmap kubevirt-storage-class-defaults -n openshift-cnv",
"data: accessMode: ReadWriteOnce 1 volumeMode: Filesystem 2 <new>.accessMode: ReadWriteMany 3 <new>.volumeMode: Block 4",
"kind: ConfigMap apiVersion: v1 metadata: name: kubevirt-storage-class-defaults namespace: openshift-cnv data: accessMode: ReadWriteOnce volumeMode: Filesystem nfs-sc.accessMode: ReadWriteMany nfs-sc.volumeMode: Filesystem block-sc.accessMode: ReadWriteMany block-sc.volumeMode: Block",
"cat > Dockerfile << EOF FROM registry.access.redhat.com/ubi8/ubi:latest AS builder ADD --chown=107:107 <vm_image>.qcow2 /disk/ 1 RUN chmod 0440 /disk/* FROM scratch COPY --from=builder /disk/* /disk/ EOF",
"podman build -t <registry>/<container_disk_name>:latest .",
"podman push <registry>/<container_disk_name>:latest",
"oc patch configmap cdi-insecure-registries -n openshift-cnv --type merge -p '{\"data\":{\"mykey\": \"<insecure-registry-host>:5000\"}}' 1",
"oc edit cdi",
"apiVersion: cdi.kubevirt.io/v1beta1 kind: CDI spec: config: scratchSpaceStorageClass: \"<storage_class>\" 1",
"oc get pv <pv_name> -o yaml | grep 'persistentVolumeReclaimPolicy'",
"oc patch pv <pv_name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc describe pvc <pvc_name> | grep 'Mounted By:'",
"oc delete pvc <pvc_name>",
"oc get pv <pv_name> -o yaml > <file_name>.yaml",
"oc delete pv <pv_name>",
"rm -rf <path_to_share_storage>",
"oc create -f <new_pv_name>.yaml",
"oc get dvs",
"oc delete dv <datavolume_name>",
"oc patch -n openshift-cnv cm kubevirt-storage-class-defaults -p '{\"data\":{\"'USD<STORAGE_CLASS>'.accessMode\":\"ReadWriteMany\"}}'",
"oc edit configmap kubevirt-config -n openshift-cnv",
"apiVersion: v1 data: default-network-interface: masquerade feature-gates: DataVolumes,SRIOV,LiveMigration,CPUManager,CPUNodeDiscovery,Sidecar,Snapshot migrations: |- parallelMigrationsPerCluster: \"5\" parallelOutboundMigrationsPerNode: \"2\" bandwidthPerMigration: \"64Mi\" completionTimeoutPerGiB: \"800\" progressTimeout: \"150\" machine-type: pc-q35-rhel8.3.0 selinuxLauncherType: virt_launcher.process smbios: |- Family: Red Hat Product: Container-native virtualization Manufacturer: Red Hat Sku: 2.6.0 Version: 2.6.0 kind: ConfigMap metadata: creationTimestamp: \"2021-03-26T18:01:04Z\" labels: app: kubevirt-hyperconverged name: kubevirt-config namespace: openshift-cnv resourceVersion: \"15371295\" selfLink: /api/v1/namespaces/openshift-cnv/configmaps/kubevirt-config uid: <uuid>",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachineInstanceMigration metadata: name: migration-job spec: vmiName: vmi-fedora",
"oc create -f vmi-migrate.yaml",
"oc describe vmi vmi-fedora",
"Status: Conditions: Last Probe Time: <nil> Last Transition Time: <nil> Status: True Type: LiveMigratable Migration Method: LiveMigration Migration State: Completed: true End Timestamp: 2018-12-24T06:19:42Z Migration UID: d78c8962-0743-11e9-a540-fa163e0c69f1 Source Node: node2.example.com Start Timestamp: 2018-12-24T06:19:35Z Target Node: node1.example.com Target Node Address: 10.9.0.18:43891 Target Node Domain Detected: true",
"oc delete vmim migration-job",
"oc edit vm <custom-vm> -n <my-namespace>",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachine metadata: name: custom-vm spec: template: spec: evictionStrategy: LiveMigrate",
"virtctl restart <custom-vm> -n <my-namespace>",
"oc adm cordon <node1>",
"oc adm drain <node1> --delete-emptydir-data --ignore-daemonsets=true --force",
"apiVersion: nodemaintenance.kubevirt.io/v1beta1 kind: NodeMaintenance metadata: name: maintenance-example 1 spec: nodeName: node-1.example.com 2 reason: \"Node maintenance\" 3",
"oc apply -f nodemaintenance-cr.yaml",
"oc describe node <node-name>",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeNotSchedulable 61m kubelet Node node-1.example.com status is now: NodeNotSchedulable",
"oc get NodeMaintenance -o yaml",
"apiVersion: v1 items: - apiVersion: nodemaintenance.kubevirt.io/v1beta1 kind: NodeMaintenance metadata: spec: nodeName: node-1.example.com reason: Node maintenance status: evictionPods: 3 1 pendingPods: - pod-example-workload-0 - httpd - httpd-manual phase: Running lastError: \"Last failure message\" 2 totalpods: 5",
"oc adm uncordon <node1>",
"oc delete -f nodemaintenance-cr.yaml",
"nodemaintenance.nodemaintenance.kubevirt.io \"maintenance-example\" deleted",
"apic clflush cmov cx16 cx8 de fpu fxsr lahf_lm lm mca mce mmx msr mtrr nx pae pat pge pni pse pse36 sep sse sse2 sse4.1 ssse3 syscall tsc",
"aes apic avx avx2 bmi1 bmi2 clflush cmov cx16 cx8 de erms fma fpu fsgsbase fxsr hle invpcid lahf_lm lm mca mce mmx movbe msr mtrr nx pae pat pcid pclmuldq pge pni popcnt pse pse36 rdtscp rtm sep smep sse sse2 sse4.1 sse4.2 ssse3 syscall tsc tsc-deadline x2apic xsave",
"aes avx avx2 bmi1 bmi2 erms fma fsgsbase hle invpcid movbe pcid pclmuldq popcnt rdtscp rtm sse4.2 tsc-deadline x2apic xsave",
"apiVersion: v1 kind: ConfigMap metadata: name: cpu-plugin-configmap 1 data: 2 cpu-plugin-configmap: obsoleteCPUs: 3 - \"486\" - \"pentium\" - \"pentium2\" - \"pentium3\" - \"pentiumpro\" minCPU: \"Penryn\" 4",
"oc get nns",
"oc get nns node01 -o yaml",
"apiVersion: nmstate.io/v1beta1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3",
"apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 4 type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: eth1",
"oc apply -f <br1-eth1-policy.yaml> 1",
"oc get nncp",
"oc get nncp <policy> -o yaml",
"oc get nnce",
"oc get nnce <node>.<policy> -o yaml",
"apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"oc apply -f <br1-eth1-policy.yaml> 1",
"apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11",
"apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9",
"apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond enslaving eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 slaves: 12 - eth1 - eth2 mtu: 1450 13",
"apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"interfaces: - name: bond10 description: Bonding eth2 and eth3 for Linux bridge type: bond state: up link-aggregation: slaves: - eth2 - eth3 - name: br1 description: Linux bridge on bond type: linux-bridge state: up bridge: port: - name: bond10",
"interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true",
"interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false",
"interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true",
"interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true",
"interfaces: dns-resolver: config: search: - example.com - example.org server: - 8.8.8.8",
"interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 1 prefix-length: 24 enabled: true routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254",
"apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01",
"oc apply -f ens01-bridge-testfail.yaml",
"nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail FailedToConfigure",
"oc get nnce",
"NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure",
"oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'",
"error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' '' libnmstate.error.NmstateVerificationError: desired ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: ens01 description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 current ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: [] description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 difference ========== --- desired +++ current @@ -13,8 +13,7 @@ hello-time: 2 max-age: 20 priority: 32768 - port: - - name: ens01 + port: [] description: Linux bridge with the wrong port ipv4: address: [] line 651, in _assert_interfaces_equal\\n current_state.interfaces[ifname],\\nlibnmstate.error.NmstateVerificationError:",
"oc get nns control-plane-1 -o yaml",
"- ipv4: name: ens1 state: up type: ethernet",
"oc edit nncp ens01-bridge-testfail",
"port: - name: ens1",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail SuccessfullyConfigured",
"oc logs <virt-launcher-name>",
"oc get events",
"oc describe vm <vm>",
"oc describe vmi <vmi>",
"oc describe pod virt-launcher-<name>",
"oc describe dv <DataVolume>",
"Status: Conditions: Last Heart Beat Time: 2020-07-15T03:58:24Z Last Transition Time: 2020-07-15T03:58:24Z Message: PVC win10-rootdisk Bound Reason: Bound Status: True Type: Bound Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Bound 24s datavolume-controller PVC example-dv Bound",
"Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Message: Import Complete Reason: Completed Status: False Type: Running Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning Error 12s (x2 over 14s) datavolume-controller Unable to connect to http data source: expected status code 200, got 404. Status: 404 Not Found",
"Status: Conditions: Last Heart Beat Time: 2020-07-15T04:31:39Z Last Transition Time: 2020-07-15T04:31:39Z Status: True Type: Ready",
"spec: readinessProbe: httpGet: 1 port: 1500 2 path: /healthz 3 httpHeaders: - name: Custom-Header value: Awesome initialDelaySeconds: 120 4 periodSeconds: 20 5 timeoutSeconds: 10 6 failureThreshold: 3 7 successThreshold: 3 8",
"oc create -f <file_name>.yaml",
"spec: readinessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 tcpSocket: 3 port: 1500 4 timeoutSeconds: 10 5",
"oc create -f <file_name>.yaml",
"spec: livenessProbe: initialDelaySeconds: 120 1 periodSeconds: 20 2 httpGet: 3 port: 1500 4 path: /healthz 5 httpHeaders: - name: Custom-Header value: Awesome timeoutSeconds: 10 6",
"oc create -f <file_name>.yaml",
"apiVersion: kubevirt.io/v1alpha3 kind: VirtualMachineInstance metadata: labels: special: vmi-fedora name: vmi-fedora spec: domain: devices: disks: - disk: bus: virtio name: containerdisk - disk: bus: virtio name: cloudinitdisk resources: requests: memory: 1024M readinessProbe: httpGet: port: 1500 initialDelaySeconds: 120 periodSeconds: 20 timeoutSeconds: 10 failureThreshold: 3 successThreshold: 3 terminationGracePeriodSeconds: 0 volumes: - name: containerdisk containerDisk: image: kubevirt/fedora-cloud-registry-disk-demo - cloudInitNoCloud: userData: |- #cloud-config password: fedora chpasswd: { expire: False } bootcmd: - setenforce 0 - dnf install -y nmap-ncat - systemd-run --unit=httpserver nc -klp 1500 -e '/usr/bin/echo -e HTTP/1.1 200 OK\\\\n\\\\nHello World!' name: cloudinitdisk",
"oc adm must-gather --image-stream=openshift/must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v{HCOVersion}",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.6.10 -- <environment_variable_1> <environment_variable_2> <script_name>",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.6.10 -- NS=mynamespace VM=my-vm gather_vms_details 1",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.6.10 -- PROS=3 gather",
"oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel8:v2.6.10 -- gather_images"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/openshift_virtualization/index |
function::user_char | function::user_char Name function::user_char - Retrieves a char value stored in user space Synopsis Arguments addr the user space address to retrieve the char from Description Returns the char value from a given user space address. Returns zero when user space data is not accessible. | [
"user_char:long(addr:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-char |
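For example, a short SystemTap one-liner can exercise user_char. This is an illustrative sketch only: the probe point and the buf_uaddr variable come from the standard syscall tapset, and the process name "cat" is an assumption, not part of this reference entry.

# Print the first byte of every write() buffer issued by "cat"
stap -e 'probe syscall.write {
  if (execname() == "cat")
    printf("first byte of user buffer: %d\n", user_char(buf_uaddr))
}'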
Chapter 1. Introduction | Chapter 1. Introduction Past versions of Red Hat OpenStack Platform used services managed with Systemd. However, more recent versions of OpenStack Platform now use containers to run services. Some administrators might not have a good understanding of how containerized OpenStack Platform services operate, and so this guide aims to help you understand OpenStack Platform container images and containerized services. This includes: How to obtain and modify container images How to manage containerized services in the overcloud Understanding how containers differ from Systemd services The main goal is to help you gain enough knowledge of containerized OpenStack Platform services to transition from a Systemd-based environment to a container-based environment. 1.1. Containerized services and Kolla Each of the main Red Hat OpenStack Platform (RHOSP) services runs in containers. This provides a method to keep each service within its own isolated namespace separated from the host. This has the following effects: During deployment, RHOSP pulls and runs container images from the Red Hat Customer Portal. The podman command operates management functions, like starting and stopping services. To upgrade containers, you must pull new container images and replace the existing containers with newer versions. Red Hat OpenStack Platform uses a set of containers built and managed with the Kolla toolset. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/transitioning_to_containerized_services/assembly_introduction
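As an illustration of the podman management functions mentioned above (the commands are standard podman usage; the container name nova_api is only an example and is not taken from this guide), you can list, inspect, and read the logs of containerized services on an overcloud node:

# List running service containers and their status
sudo podman ps --format "{{.Names}} {{.Status}}"
# Inspect the configuration of a single containerized service
sudo podman inspect nova_api
# Show the most recent log output of that container
sudo podman logs --tail 50 nova_api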
1.3. Cluster Infrastructure | 1.3. Cluster Infrastructure The High Availability Add-On cluster infrastructure provides the basic functions for a group of computers (called nodes or members ) to work together as a cluster. Once a cluster is formed using the cluster infrastructure, you can use other components to suit your clustering needs (for example, setting up a cluster for sharing files on a GFS2 file system or setting up service failover). The cluster infrastructure performs the following functions: Cluster management Lock management Fencing Cluster configuration management | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/high_availability_add-on_overview/s1-hasci-overview-cso |
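Purely as an illustrative sketch (these commands assume a RHEL 6 cluster that is already built on this infrastructure; the hostname is an example and is not part of the overview itself), the infrastructure functions can be observed from any node:

clustat                          # membership and service status
cman_tool nodes                  # cluster manager view of the members
fence_node node2.example.com     # manually fence a failed member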
Chapter 6. Deploying the Shared File Systems service with native CephFS | Chapter 6. Deploying the Shared File Systems service with native CephFS CephFS is the highly scalable, open-source, distributed file system component of Red Hat Ceph Storage, a unified distributed storage platform. Ceph Storage implements object, block, and file storage using Reliable Autonomic Distributed Object Store (RADOS). CephFS, which is POSIX compatible, provides file access to a Ceph Storage cluster. The Shared File Systems service (manila) enables users to create shares in CephFS and access them using the native Ceph FS protocol. The Shared File Systems service manages the life cycle of these shares from within OpenStack. With this release, director can deploy the Shared File Systems with a native CephFS back end on the overcloud. Important This chapter pertains to the deployment and use of native CephFS to provide a self-service Shared File Systems service in your Red Hat OpenStack Platform(RHOSP) cloud through the native CephFS NAS protocol. This type of deployment requires guest VM access to Ceph public network and infrastructure. Deploy native CephFS with trusted OpenStack Platform tenants only, because it requires a permissive trust model that is not suitable for general purpose OpenStack Platform deployments. For general purpose OpenStack Platform deployments that use a conventional tenant trust model, you can deploy CephFS through the NFS protocol. 6.1. CephFS with native driver The CephFS native driver combines the OpenStack Shared File Systems service (manila) and Red Hat Ceph Storage. When you use Red Hat OpenStack (RHOSP) director, the Controller nodes host the Ceph daemons, such as the manager, metadata servers (MDS), and monitors (MON) and the Shared File Systems services. Compute nodes can host one or more projects. Projects, which were formerly referred to as tenants, are represented in the following graphic by the white boxes. Projects contain user-managed VMs, which are represented by gray boxes with two NICs. To access the ceph and manila daemons projects, connect to the daemons over the public Ceph storage network. On this network, you can access data on the storage nodes provided by the Ceph Object Storage Daemons (OSDs). Instances, or virtual machines (VMs), that are hosted on the project boot with two NICs: one dedicated to the storage provider network and the second to project-owned routers to the external provider network. The storage provider network connects the VMs that run on the projects to the public Ceph storage network. The Ceph public network provides back-end access to the Ceph object storage nodes, metadata servers (MDS), and Controller nodes. Using the native driver, CephFS relies on cooperation with the clients and servers to enforce quotas, guarantee project isolation, and for security. CephFS with the native driver works well in an environment with trusted end users on a private cloud. This configuration requires software that is running under user control to cooperate and work correctly. 6.2. Native CephFS back-end security The native CephFS back end requires a permissive trust model for Red Hat OpenStack Platform (RHOSP) tenants. This trust model is not appropriate for general purpose OpenStack Platform clouds that deliberately block users from directly accessing the infrastructure behind the services that the OpenStack Platform provides. With native CephFS, user Compute instances connect directly to the Ceph public network where the Ceph service daemons are exposed. 
CephFS clients that run on user VMs interact cooperatively with the Ceph service daemons, and they interact directly with RADOS to read and write file data blocks. CephFS quotas, which enforce Shared File Systems (manila) share sizes, are enforced on the client side, such as on VMs that are owned by (RHOSP) users. The client side software on user VMs might not be current, which can leave critical cloud infrastructure vulnerable to malicious or inadvertently harmful software that targets the Ceph service ports. Deploy native CephFS as a back end only in environments in which trusted users keep client-side software up to date. Ensure that no software that can impact the Red Hat Ceph Storage infrastructure runs on your VMs. For a general purpose RHOSP deployment that serves many untrusted users, deploy CephFS through NFS. For more information about using CephFS through NFS, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . Users might not keep client-side software current, and they might fail to exclude harmful software from their VMs, but using CephFS through NFS, they only have access to the public side of an NFS server, not to the Ceph infrastructure itself. NFS does not require the same kind of cooperative client and, in the worst case, an attack from a user VM can damage the NFS gateway without damaging the Ceph Storage infrastructure behind it. You can expose the native CephFS back end to all trusted users, but you must enact the following security measures: Configure the storage network as a provider network. Impose role-based access control (RBAC) policies to secure the Storage provider network. Create a private share type. 6.3. Native CephFS deployment A typical native Ceph file system (CephFS) installation in a Red Hat OpenStack Platform (RHOSP) environment includes the following components: RHOSP Controller nodes that run containerized Ceph metadata server (MDS), Ceph monitor (MON) and Shared File Systems (manila) services. Some of these services can coexist on the same node or they can have one or more dedicated nodes. Ceph Storage cluster with containerized object storage daemons (OSDs) that run on Ceph Storage nodes. An isolated storage network that serves as the Ceph public network on which the clients can communicate with Ceph service daemons. To facilitate this, the storage network is made available as a provider network for users to connect their VMs and mount CephFS shares. Important You cannot use the Shared File Systems service (manila) with the CephFS native driver to serve shares to OpenShift Container Platform through Manila CSI, because Red Hat does not support this type of deployment. For more information, contact Red Hat Support. The Shared File Systems (manila) service provides APIs that allow the tenants to request file system shares, which are fulfilled by driver modules. The driver for Red Hat CephFS, manila.share.drivers.cephfs.driver.CephFSDriver , allows the Shared File Systems service to use native CephFS as a back end. You can install native CephFS in an integrated deployment managed by director. When director deploys the Shared File Systems service with a CephFS back end on the overcloud, it automatically creates the required data center storage network. However, you must create the corresponding storage provider network on the overcloud. For more information about network planning, see Overcloud networks in Director Installation and Usage . 
Although you can manually configure the Shared File Systems service by editing the /var/lib/config-data/puppet-generated/manila/etc/manila/manila.conf file for the node, any settings can be overwritten by the Red Hat OpenStack Platform director in future overcloud updates. Red Hat only supports deployments of the Shared File Systems service that are managed by director. 6.4. Requirements You can deploy a native CephFS back end with new or existing Red Hat OpenStack Platform (RHOSP) environments if you meet the following requirements: Use Red Hat OpenStack Platform version 17.0 or later. Configure a new Red Hat Ceph Storage cluster at the same time as the native CephFS back end. For information about how to deploy Ceph Storage, see Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director . Important The RHOSP Shared File Systems service (manila) with the native CephFS back end is supported for use with Red Hat Ceph Storage version 5.2 or later. For more information about how to determine the version of Ceph Storage installed on your system, see Red Hat Ceph Storage releases and corresponding Ceph package versions . Install the Shared File Systems service on a Controller node. This is the default behavior. Use only a single instance of a CephFS back end for the Shared File Systems service. 6.5. File shares File shares are handled differently between the Shared File Systems service (manila), Ceph File System (CephFS), and CephFS through NFS. The Shared File Systems service provides shares, where a share is an individual file system namespace and a unit of storage with a defined size. Shared file system storage inherently allows multiple clients to connect, read, and write data to any given share, but you must give each client access to the share through the Shared File Systems service access control APIs before they can connect. With CephFS, a share is considered a directory with a defined quota and a layout that points to a particular storage pool or namespace. CephFS quotas limit the size of a directory to the size share that the Shared File Systems service creates. Access to Ceph shares is determined by MDS authentication capabilities. With native CephFS, file shares are provisioned and accessed through the CephFS protocol. Access control is performed with a CephX authentication scheme that uses CephFS usernames. 6.6. Native CephFS isolated network Native CephFS deployments use the isolated storage network deployed by director as the Ceph public network. Clients use this network to communicate with various Ceph infrastructure service daemons. For more information about isolating networks, see Network isolation in Director Installation and Usage . 6.7. Deploying the native CephFS environment When you are ready to deploy the environment, use the openstack overcloud deploy command with the custom environments and roles required to configure the native CephFS back end. The openstack overcloud deploy command has the following options in addition to other required options. Action Option Additional Information Specify the network configuration with network_data.yaml [filename] -n /usr/share/openstack-tripleo-heat-templates/network_data.yaml You can use a custom environment file to override values for the default networks specified in this network data environment file. This is the default network data file that is available when you use isolated networks. You can omit this file from the openstack overcloud deploy command for brevity. Deploy the Ceph daemons. 
-e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml Initiating overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Deploy the Ceph metadata server with ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml Initiating overcloud deployment in Deploying Red Hat Ceph Storage and Red Hat OpenStack Platform together with director Deploy the manila service with the native CephFS back end. -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml Environment file The following example shows an openstack overcloud deploy command that includes options to deploy a Ceph cluster, Ceph MDS, the native CephFS back end, and the networks required for the Ceph cluster: For more information about the openstack overcloud deploy command, see Provisioning and deploying your overcloud in Director Installation and Usage . 6.8. Native CephFS back-end environment file The environment file for defining a native CephFS back end, manila-cephfsnative-config.yaml is located in the following path of an undercloud node: /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml . The manila-cephfsnative-config.yaml environment file contains settings relevant to the deployment of the Shared File Systems service. The back end default settings should work for most environments. The example shows the default values that director uses during deployment of the Shared File Systems service: The parameter_defaults header signifies the start of the configuration. Specifically, settings under this header let you override default values set in resource_registry . This includes values set by OS::Tripleo::Services::ManilaBackendCephFs , which sets defaults for a CephFS back end. 1 ManilaCephFSBackendName sets the name of the manila configuration of your CephFS backend. In this case, the default back end name is cephfs . 2 ManilaCephFSDriverHandlesShareServers controls the lifecycle of the share server. When set to false , the driver does not handle the lifecycle. This is the only supported option for CephFS back ends. 3 ManilaCephFSCephFSAuthId defines the Ceph auth ID that the director creates for the manila service to access the Ceph cluster. 4 ManilaCephFSCephFSEnableSnapshots controls snapshot activation. Snapshots are supported With Ceph Storage 4.1 and later, but the value of this parameter defaults to false . You can set the value to true to ensure that the driver reports the snapshot_support capability to the manila scheduler. 5 ManilaCephFSCephVolumeMode controls the UNIX permissions to set against the manila share created on the native CephFS back end. The value defaults to 755 . 6 ManilaCephFSCephFSProtocolHelperType must be set to CEPHFS to use the native CephFS driver. For more information about environment files, see Environment Files in the Director Installation and Usage guide. | [
"[stack@undercloud ~]USD openstack overcloud deploy -n /usr/share/openstack-tripleo-heat-templates/network_data.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml",
"[stack@undercloud ~]USD cat /usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml A Heat environment file which can be used to enable a a Manila CephFS Native driver backend. resource_registry: OS::TripleO::Services::ManilaApi: ../deployment/manila/manila-api-container-puppet.yaml OS::TripleO::Services::ManilaScheduler: ../deployment/manila/manila-scheduler-container-puppet.yaml # Only manila-share is pacemaker managed: OS::TripleO::Services::ManilaShare: ../deployment/manila/manila-share-pacemaker-puppet.yaml OS::TripleO::Services::ManilaBackendCephFs: ../deployment/manila/manila-backend-cephfs.yaml parameter_defaults: ManilaCephFSBackendName: cephfs 1 ManilaCephFSDriverHandlesShareServers: false 2 ManilaCephFSCephFSAuthId: 'manila' 3 ManilaCephFSCephFSEnableSnapshots: true 4 ManilaCephFSCephVolumeMode: '0755' 5 # manila cephfs driver supports either native cephfs backend - 'CEPHFS' # (users mount shares directly from ceph cluster), or nfs-ganesha backend - # 'NFS' (users mount shares through nfs-ganesha server) ManilaCephFSCephFSProtocolHelperType: 'CEPHFS' 6"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_deploying-the-Shared-File-Systems-service-with-native-CephFS_deployingcontainerizedrhcs |
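To illustrate what becomes possible once this back end is deployed (a sketch only; the share name, share type, and cephx user below are assumptions, not values from this chapter), a tenant can create a native CephFS share and grant access to it with the Shared File Systems CLI:

manila type-create cephfstype false                  # driver_handles_share_servers=false
manila create CEPHFS 1 --name share1 --share-type cephfstype
manila access-allow share1 cephx alice               # grant the cephx user "alice" access
manila share-export-location-list share1             # path that clients mount via CephFS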
1.5. Shutting Down | 1.5. Shutting Down To shut down Red Hat Enterprise Linux, the root user may issue the /sbin/shutdown command. The shutdown man page has a complete list of options, but the two most common uses are: After shutting everything down, the -h option halts the machine, and the -r option reboots. PAM console users can use the reboot and halt commands to shut down the system while in runlevels 1 through 5. For more information about PAM console users, refer to Section 16.7, "PAM and Device Ownership" . If the computer does not power itself down, be careful not to turn off the computer until a message appears indicating that the system is halted. Failure to wait for this message can mean that not all the hard drive partitions are unmounted, which can lead to file system corruption. | [
"/sbin/shutdown -h now /sbin/shutdown -r now"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-boot-init-shutdown-shutdown |
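For example (an illustrative use of standard shutdown options, not an addition to the two forms shown above), a reboot can be scheduled with a warning to logged-in users, and a pending shutdown can be cancelled:

/sbin/shutdown -r +10 "Rebooting for maintenance in 10 minutes"
/sbin/shutdown -c      # cancel a scheduled shutdown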
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Use the Create Issue form in Red Hat Jira to provide your feedback. The Jira issue is created in the Red Hat Satellite Jira project, where you can track its progress. Prerequisites Ensure you have registered a Red Hat account . Procedure Click the following link: Create Issue . If Jira displays a login error, log in and proceed after you are redirected to the form. Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create . | null | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_hammer_cli_tool/providing-feedback-on-red-hat-documentation_hammer-cli |
12.2. Required Packages | 12.2. Required Packages In addition to the standard packages required to run the Red Hat High Availability Add-On and the Red Hat Resilient Storage Add-On, running Samba with Red Hat Enterprise Linux clustering requires the following packages: ctdb samba samba-common samba-winbind-clients | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-samba-packages-ca |
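For example, assuming the standard RHEL 6 High Availability and Resilient Storage repositories are enabled (this command is a convenience sketch based on the package list above), all of the required packages can be installed in one step on each cluster node:

yum install ctdb samba samba-common samba-winbind-clients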
Chapter 3. Installing Red Hat Ansible Automation Platform | Chapter 3. Installing Red Hat Ansible Automation Platform Ansible Automation Platform is a modular platform. You can deploy automation controller with other automation platform components, such as automation hub and Event-Driven Ansible controller. For more information about the components provided with Ansible Automation Platform, see Red Hat Ansible Automation Platform components in the Red Hat Ansible Automation Platform Planning Guide. There are several supported installation scenarios for Red Hat Ansible Automation Platform. To install Red Hat Ansible Automation Platform, you must edit the inventory file parameters to specify your installation scenario. You can use one of the following as a basis for your own inventory file: Single automation controller with external (installer managed) database Single automation controller and single automation hub with external (installer managed) database Single automation controller, single automation hub, and single event-driven ansible controller node with external (installer managed ) database 3.1. Editing the Red Hat Ansible Automation Platform installer inventory file You can use the Red Hat Ansible Automation Platform installer inventory file to specify your installation scenario. Procedure Navigate to the installer: [RPM installed package] USD cd /opt/ansible-automation-platform/installer/ [bundled installer] USD cd ansible-automation-platform-setup-bundle-<latest-version> [online installer] USD cd ansible-automation-platform-setup-<latest-version> Open the inventory file with a text editor. Edit inventory file parameters to specify your installation scenario. You can use one of the supported Installation scenario examples as the basis for your inventory file. Additional resources For a comprehensive list of pre-defined variables used in Ansible installation inventory files, see Inventory file variables . 3.2. Inventory file examples based on installation scenarios Red Hat supports several installation scenarios for Ansible Automation Platform. You can develop your own inventory files using the example files as a basis, or you can use the example closest to your preferred installation scenario. 3.2.1. Inventory file recommendations based on installation scenarios Before selecting your installation method for Ansible Automation Platform, review the following recommendations. Familiarity with these recommendations will streamline the installation process. For Red Hat Ansible Automation Platform or automation hub: Add an automation hub host in the [automationhub] group. Do not install automation controller and automation hub on the same node for versions of Ansible Automation Platform in a production or customer environment. This can cause contention issues and heavy resource use. Provide a reachable IP address or fully qualified domain name (FQDN) for the [automationhub] and [automationcontroller] hosts to ensure users can sync and install content from automation hub from a different node. The FQDN must not contain either the - or the _ symbols, as it will not be processed correctly. Do not use localhost . admin is the default user ID for the initial log in to Ansible Automation Platform and cannot be changed in the inventory file. Use of special characters for pg_password is limited. The ! , # , 0 and @ characters are supported. Use of other special characters can cause the setup to fail. 
Enter your Red Hat Registry Service Account credentials in registry_username and registry_password to link to the Red Hat container registry. The inventory file variables registry_username and registry_password are only required if a non-bundle installer is used. 3.2.1.1. Single automation controller with external (installer managed) database Use this example to populate the inventory file to install Red Hat Ansible Automation Platform. This installation inventory file includes a single automation controller node with an external database on a separate node. [automationcontroller] controller.example.com [database] data.example.com [all:vars] admin_password='<password>' pg_host='data.example.com' pg_port=5432 pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. # custom_ca_cert=/path/to/ca.crt # Certificate and key to install in nginx for the web UI and API # web_server_ssl_cert=/path/to/tower.cert # web_server_ssl_key=/path/to/tower.key # Server-side SSL settings for PostgreSQL (when we are installing it). # postgres_use_ssl=False # postgres_ssl_cert=/path/to/pgsql.crt # postgres_ssl_key=/path/to/pgsql.key 3.2.1.2. Single automation controller and single automation hub with external (installer managed) database Use this example to populate the inventory file to deploy single instances of automation controller and automation hub with an external (installer managed) database. [automationcontroller] controller.example.com [automationhub] automationhub.example.com [database] data.example.com [all:vars] admin_password='<password>' pg_host='data.example.com' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.example.com' automationhub_pg_port=5432 automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' # The default install will deploy a TLS enabled Automation Hub. # If for some reason this is not the behavior wanted one can # disable TLS enabled deployment. # # automationhub_disable_https = False # The default install will generate self-signed certificates for the Automation # Hub service. If you are providing valid certificate via automationhub_ssl_cert # and automationhub_ssl_key, one should toggle that value to True. # # automationhub_ssl_validate_certs = False # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. # custom_ca_cert=/path/to/ca.crt # Certificate and key to install in Automation Hub node # automationhub_ssl_cert=/path/to/automationhub.cert # automationhub_ssl_key=/path/to/automationhub.key # Certificate and key to install in nginx for the web UI and API # web_server_ssl_cert=/path/to/tower.cert # web_server_ssl_key=/path/to/tower.key # Server-side SSL settings for PostgreSQL (when we are installing it). # postgres_use_ssl=False # postgres_ssl_cert=/path/to/pgsql.crt # postgres_ssl_key=/path/to/pgsql.key 3.2.1.2.1. 
Connecting automation hub to a Red Hat Single Sign-On environment You can configure the inventory file further to connect automation hub to a Red Hat Single Sign-On installation. You must configure a different set of variables when connecting to a Red Hat Single Sign-On installation managed by Ansible Automation Platform than when connecting to an external Red Hat Single Sign-On installation. For more information about these inventory variables, refer to the Installing and configuring central authentication for the Ansible Automation Platform . 3.2.1.3. High availability automation hub Use the following examples to populate the inventory file to install a highly available automation hub. This inventory file includes a highly available automation hub with a clustered setup. You can configure your HA deployment further to implement Red Hat Single Sign-On and enable a high availability deployment of automation hub on SELinux . Specify database host IP Specify the IP address for your database host, using the automation_pg_host and automation_pg_port inventory variables. For example: automationhub_pg_host='192.0.2.10' automationhub_pg_port=5432 Also specify the IP address for your database host in the [database] section, using the value in the automationhub_pg_host inventory variable: [database] 192.0.2.10 List all instances in a clustered setup If installing a clustered setup, replace localhost ansible_connection=local in the [automationhub] section with the hostname or IP of all instances. For example: [automationhub] automationhub1.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.18 automationhub2.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.20 automationhub3.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.22 steps Check that the following directives are present in /etc/pulp/settings.py in each of the private automation hub servers: USE_X_FORWARDED_PORT = True USE_X_FORWARDED_HOST = True Note If automationhub_main_url is not specified, the first node in the [automationhub] group will be used as default. 3.2.1.4. Enabling a high availability (HA) deployment of automation hub on SELinux You can configure the inventory file to enable high availability deployment of automation hub on SELinux. You must create two mount points for /var/lib/pulp and /var/lib/pulp/pulpcore_static , and then assign the appropriate SELinux contexts to each. Note You must add the context for /var/lib/pulp pulpcore_static and run the Ansible Automation Platform installer before adding the context for /var/lib/pulp . Prerequisites You have already configured a NFS export on your server. Procedure Create a mount point at /var/lib/pulp : USD mkdir /var/lib/pulp/ Open /etc/fstab using a text editor, then add the following values: srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:var_lib_t:s0" 0 0 srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context="system_u:object_r:httpd_sys_content_rw_t:s0" 0 0 Run the reload systemd manager configuration command: USD systemctl daemon-reload Run the mount command for /var/lib/pulp : USD mount /var/lib/pulp Create a mount point at /var/lib/pulp/pulpcore_static : USD mkdir /var/lib/pulp/pulpcore_static Run the mount command: USD mount -a With the mount points set up, run the Ansible Automation Platform installer: USD setup.sh -- -b --become-user root After the installation is complete, unmount the /var/lib/pulp/ mount point. 
Next steps Apply the appropriate SELinux context . Configure the pulpcore.service . Additional Resources See the SELinux Requirements on the Pulp Project documentation for a list of SELinux contexts. See the Filesystem Layout for a full description of Pulp folders. 3.2.1.4.1. Configuring pulpcore.service After you have configured the inventory file, and applied the SELinux context, you now need to configure the pulp service. Procedure With the two mount points set up, shut down the Pulp service to configure pulpcore.service : USD systemctl stop pulpcore.service Edit pulpcore.service using systemctl : USD systemctl edit pulpcore.service Add the following entry to pulpcore.service to ensure that automation hub services start only after starting the network and mounting the remote mount points: [Unit] After=network.target var-lib-pulp.mount Enable remote-fs.target : USD systemctl enable remote-fs.target Reboot the system: USD systemctl reboot Troubleshooting A bug in the pulpcore SELinux policies can cause the token authentication public/private keys in /etc/pulp/certs/ to not have the proper SELinux labels, causing the pulp process to fail. When this occurs, run the following command to temporarily attach the proper labels: USD chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem Repeat this command to reattach the proper SELinux labels whenever you relabel your system. 3.2.1.4.2. Applying the SELinux context After you have configured the inventory file, you must now apply the context to enable the high availability (HA) deployment of automation hub on SELinux. Procedure Shut down the Pulp service: USD systemctl stop pulpcore.service Unmount /var/lib/pulp/pulpcore_static : USD umount /var/lib/pulp/pulpcore_static Unmount /var/lib/pulp/ : USD umount /var/lib/pulp/ Open /etc/fstab using a text editor, then replace the existing value for /var/lib/pulp with the following: srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context="system_u:object_r:pulpcore_var_lib_t:s0" 0 0 Run the mount command: USD mount -a 3.2.1.5. Configuring content signing on private automation hub To successfully sign and publish Ansible Certified Content Collections, you must configure private automation hub for signing. Prerequisites Your GnuPG key pairs have been securely set up and managed by your organization. Your public-private key pair has proper access for configuring content signing on private automation hub. Procedure Create a signing script that accepts only a filename. Note This script acts as the signing service and must generate an ASCII-armored detached gpg signature for that file using the key specified through the PULP_SIGNING_KEY_FINGERPRINT environment variable. The script prints out a JSON structure with the following format. {"file": "filename", "signature": "filename.asc"} All the file names are relative paths inside the current working directory. The file name must remain the same for the detached signature. Example: The following script produces signatures for content: #!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH="USD1.asc" ADMIN_ID="USDPULP_SIGNING_KEY_FINGERPRINT" PASSWORD="password" # Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase \ USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID \ --armor --output USDSIGNATURE_PATH USDFILE_PATH # Check the exit status STATUS=USD?
if [ USDSTATUS -eq 0 ]; then echo {\"file\": \"USDFILE_PATH\", \"signature\": \"USDSIGNATURE_PATH\"} else exit USDSTATUS fi After you deploy a private automation hub with signing enabled to your Ansible Automation Platform cluster, new UI additions are displayed in collections. Review the Ansible Automation Platform installer inventory file for options that begin with automationhub_* . [all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh The two new keys ( automationhub_auto_sign_collections and automationhub_require_content_approval ) indicate that the collections must be signed and approved after they are uploaded to private automation hub. 3.2.1.6. LDAP configuration on private automation hub You must set the following six variables in your Red Hat Ansible Automation Platform installer inventory file to configure your private automation hub for LDAP authentication: automationhub_authentication_backend automationhub_ldap_server_uri automationhub_ldap_bind_dn automationhub_ldap_bind_password automationhub_ldap_user_search_base_dn automationhub_ldap_group_search_base_dn If any of these variables are missing, the Ansible Automation installer cannot complete the installation. 3.2.1.6.1. Setting up your inventory file variables When you configure your private automation hub with LDAP authentication, you must set the proper variables in your inventory files during the installation process. Procedure Access your inventory file according to the procedure in Editing the Red Hat Ansible Automation Platform installer inventory file . Use the following example as a guide to set up your Ansible Automation Platform inventory file: automationhub_authentication_backend = "ldap" automationhub_ldap_server_uri = "ldap://ldap:389" (for LDAPs use automationhub_ldap_server_uri = "ldaps://ldap-server-fqdn") automationhub_ldap_bind_dn = "cn=admin,dc=ansible,dc=com" automationhub_ldap_bind_password = "GoodNewsEveryone" automationhub_ldap_user_search_base_dn = "ou=people,dc=ansible,dc=com" automationhub_ldap_group_search_base_dn = "ou=people,dc=ansible,dc=com" Note The following variables will be set with default values, unless you set them with other options. auth_ldap_user_search_scope= 'SUBTREE' auth_ldap_user_search_filter= '(uid=%(user)s)' auth_ldap_group_search_scope= 'SUBTREE' auth_ldap_group_search_filter= '(objectClass=Group)' auth_ldap_group_type_class= 'django_auth_ldap.config:GroupOfNamesType' Optional: Set up extra parameters in your private automation hub such as user groups, superuser access, or mirroring. Go to Configuring extra LDAP parameters to complete this optional step. 3.2.1.6.2. Configuring extra LDAP parameters If you plan to set up superuser access, user groups, mirroring or other extra parameters, you can create a YAML file that comprises them in your ldap_extra_settings dictionary. Procedure Create a YAML file that contains ldap_extra_settings . Example: #ldapextras.yml --- ldap_extra_settings: <LDAP_parameter>: <Values> ... Add any parameters that you require for your setup. The following examples describe the LDAP parameters that you can set in ldap_extra_settings : Use this example to set up a superuser flag based on membership in an LDAP group. 
#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_FLAGS_BY_GROUP: {"is_superuser": "cn=pah-admins,ou=groups,dc=example,dc=com",} ... Use this example to set up superuser access. #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_FLAGS_BY_GROUP: {"is_superuser": "cn=pah-admins,ou=groups,dc=example,dc=com",} ... Use this example to mirror all LDAP groups you belong to. #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_MIRROR_GROUPS: True ... Use this example to map LDAP user attributes (such as first name, last name, and email address of the user). #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_ATTR_MAP: {"first_name": "givenName", "last_name": "sn", "email": "mail",} ... Use the following examples to grant or deny access based on LDAP group membership: To grant private automation hub access (for example, members of the cn=pah-nosoupforyou,ou=groups,dc=example,dc=com group): #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_REQUIRE_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com' ... To deny private automation hub access (for example, members of the cn=pah-nosoupforyou,ou=groups,dc=example,dc=com group): #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_DENY_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com' ... Use this example to enable LDAP debug logging. #ldapextras.yml --- ldap_extra_settings: GALAXY_LDAP_LOGGING: True ... Note If re-running setup.sh is not practical, or you only need debug logging for a short time, you can add a line containing GALAXY_LDAP_LOGGING: True manually to the /etc/pulp/settings.py file on private automation hub. Restart both pulpcore-api.service and nginx.service for the changes to take effect. To avoid failures due to human error, use this method only when necessary. Use this example to configure LDAP caching by setting the variable AUTH_LDAP_CACHE_TIMEOUT . #ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_CACHE_TIMEOUT: 3600 ... Run setup.sh -e @ldapextras.yml during private automation hub installation. Verification To verify that your setup is correct, confirm that you can view all of your settings in the /etc/pulp/settings.py file on your private automation hub. 3.2.1.6.3. LDAP referrals If your LDAP servers return referrals, you might have to disable referrals to successfully authenticate using LDAP on private automation hub. If you do not, the following message is returned: Operation unavailable without authentication To disable the LDAP REFERRALS lookup, set: GALAXY_LDAP_DISABLE_REFERRALS = true This sets AUTH_LDAP_CONNECTIONS_OPTIONS to the correct option. 3.2.1.7. Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database Use this example to populate the inventory file to deploy single instances of automation controller, automation hub, and Event-Driven Ansible controller with an external (installer managed) database. Important This scenario requires a minimum of automation controller 2.4 for successful deployment of Event-Driven Ansible controller. Event-Driven Ansible controller must be installed on a separate server and cannot be installed on the same host as automation hub and automation controller. Event-Driven Ansible controller cannot be installed in a high availability or clustered configuration. Ensure there is only one host entry in the automationedacontroller section of the inventory. When an Event-Driven Ansible rulebook is activated under standard conditions, it uses approximately 250 MB of memory.
However, the actual memory consumption can vary significantly based on the complexity of the rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that the maximum number of activations is based on the resource capacity. In the following example, the default automationedacontroller_max_running_activations setting is 12, but can be adjusted according to fit capacity. [automationcontroller] controller.example.com [automationhub] automationhub.example.com [automationedacontroller] automationedacontroller.example.com [database] data.example.com [all:vars] admin_password='<password>' pg_host='data.example.com' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' # Automation hub configuration automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.example.com' automationhub_pg_port=5432 automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' # Automation Event-Driven Ansible controller configuration automationedacontroller_admin_password='<eda-password>' automationedacontroller_pg_host='data.example.com' automationedacontroller_pg_port=5432 automationedacontroller_pg_database='automationedacontroller' automationedacontroller_pg_username='automationedacontroller' automationedacontroller_pg_password='<password>' # Keystore file to install in SSO node # sso_custom_keystore_file='/path/to/sso.jks' # This install will deploy SSO with sso_use_https=True # Keystore password is required for https enabled SSO sso_keystore_password='' # This install will deploy a TLS enabled Automation Hub. # If for some reason this is not the behavior wanted one can # disable TLS enabled deployment. # # automationhub_disable_https = False # The default install will generate self-signed certificates for the Automation # Hub service. If you are providing valid certificate via automationhub_ssl_cert # and automationhub_ssl_key, one should toggle that value to True. # # automationhub_ssl_validate_certs = False # SSL-related variables # If set, this will install a custom CA certificate to the system trust store. # custom_ca_cert=/path/to/ca.crt # Certificate and key to install in Automation Hub node # automationhub_ssl_cert=/path/to/automationhub.cert # automationhub_ssl_key=/path/to/automationhub.key # Certificate and key to install in nginx for the web UI and API # web_server_ssl_cert=/path/to/tower.cert # web_server_ssl_key=/path/to/tower.key # Server-side SSL settings for PostgreSQL (when we are installing it). # postgres_use_ssl=False # postgres_ssl_cert=/path/to/pgsql.crt # postgres_ssl_key=/path/to/pgsql.key # Boolean flag used to verify Automation Controller's # web certificates when making calls from Automation Event-Driven Ansible controller. # automationedacontroller_controller_verify_ssl = true # # Certificate and key to install in Automation Event-Driven Ansible controller node # automationedacontroller_ssl_cert=/path/to/automationeda.crt # automationedacontroller_ssl_key=/path/to/automationeda.key 3.2.1.8. 
Adding a safe plugin variable to Event-Driven Ansible controller When using redhat.insights_eda or similar plug-ins to run rulebook activations in Event-Driven Ansible controller, you must add a safe plugin variable to a directory in Ansible Automation Platform. This would ensure connection between Event-Driven Ansible controller and the source plugin, and display port mappings correctly. Procedure Create a directory for the safe plugin variable: mkdir -p ./group_vars/automationedacontroller Create a file within that directory for your new setting (for example, touch ./group_vars/automationedacontroller/custom.yml ) Add the variable automationedacontroller_safe_plugins to the file with a comma-separated list of plugins to enable for Event-Driven Ansible controller. For example: automationedacontroller_safe_plugins: "ansible.eda.webhook, ansible.eda.alertmanager" 3.3. Running the Red Hat Ansible Automation Platform installer setup script After you update the inventory file with required parameters for installing your private automation hub, run the installer setup script. Procedure Run the setup.sh script USD sudo ./setup.sh Installation of Red Hat Ansible Automation Platform will begin. 3.4. Verifying installation of automation controller Verify that you installed automation controller successfully by logging in with the admin credentials you inserted in the inventory file. Prerequisite Port 443 is available Procedure Go to the IP address specified for the automation controller node in the inventory file. Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your manifest file. Log in with the user ID admin and the password credentials you set in the inventory file. Note The automation controller server is accessible from port 80 ( https://<CONTROLLER_SERVER_NAME>/ ) but redirects to port 443. Important If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal . Upon a successful log in to automation controller, your installation of Red Hat Ansible Automation Platform 2.4 is complete. 3.4.1. Additional automation controller configuration and resources See the following resources to explore additional automation controller configurations. Table 3.1. Resources to configure automation controller Resource link Description Getting started with automation controller Set up automation controller and run your first playbook. Automation controller administration guide Configure automation controller administration through customer scripts, management jobs, etc. Red Hat Ansible Automation Platform operations guide Set up automation controller with a proxy server. Managing usability analytics and data collection from automation controller Manage what automation controller information you share with Red Hat. Automation controller user guide Review automation controller functionality in more detail. 3.5. Verifying installation of automation hub Verify that you installed your automation hub successfully by logging in with the admin credentials you inserted into the inventory file. Procedure Navigate to the IP address specified for the automation hub node in the inventory file. Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your manifest file. Log in with the user ID admin and the password credentials you set in the inventory file. 
Important If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal . Upon a successful login to automation hub, your installation of Red Hat Ansible Automation Platform 2.4 is complete. 3.5.1. Additional automation hub configuration and resources See the following resources to explore additional automation hub configurations. Table 3.2. Resources to configure automation hub Resource link Description Managing user access in private automation hub Configure user access for automation hub. Managing Red Hat Certified, validated, and Ansible Galaxy content in automation hub Add content to your automation hub. Publishing proprietary content collections in automation hub Publish internally developed collections on your automation hub. 3.6. Verifying Event-Driven Ansible controller installation Verify that you installed Event-Driven Ansible controller successfully by logging in with the admin credentials you inserted in the inventory file. Procedure Navigate to the IP address specified for the Event-Driven Ansible controller node in the inventory file. Enter your Red Hat Satellite credentials. If this is your first time logging in after installation, upload your manifest file. Log in with the user ID admin and the password credentials you set in the inventory file. Important If the installation fails and you are a customer who has purchased a valid license for Red Hat Ansible Automation Platform, contact Ansible through the Red Hat Customer portal . Upon a successful login to Event-Driven Ansible controller, your installation of Red Hat Ansible Automation Platform 2.4 is complete.
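As a command-line complement to the web UI checks above, the following sketch probes the installed services after setup.sh completes. The hostnames are the example values from the inventory files in this section, and the API paths, /api/v2/ping/ on automation controller and the Pulp status endpoint on private automation hub, are assumptions that can differ between releases, so treat this as an informal sanity check rather than an official verification step.

# Hostnames are the inventory examples; replace them with your own.
curl -k https://controller.example.com/api/v2/ping/
# The automation hub status path below is an assumption; adjust it if your deployment differs.
curl -k -u admin:<password> https://automationhub.example.com/api/galaxy/pulp/api/v3/status/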
"cd /opt/ansible-automation-platform/installer/",
"cd ansible-automation-platform-setup-bundle-<latest-version>",
"cd ansible-automation-platform-setup-<latest-version>",
"[automationcontroller] controller.example.com [database] data.example.com [all:vars] admin_password='<password>' pg_host='data.example.com' pg_port=5432 pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key",
"[automationcontroller] controller.example.com [automationhub] automationhub.example.com [database] data.example.com [all:vars] admin_password='<password>' pg_host='data.example.com' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.example.com' automationhub_pg_port=5432 automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' The default install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key",
"automationhub_pg_host='192.0.2.10' automationhub_pg_port=5432",
"[database] 192.0.2.10",
"[automationhub] automationhub1.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.18 automationhub2.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.20 automationhub3.testing.ansible.com ansible_user=cloud-user ansible_host=192.0.2.22",
"USE_X_FORWARDED_PORT = True USE_X_FORWARDED_HOST = True",
"mkdir /var/lib/pulp/",
"srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context=\"system_u:object_r:var_lib_t:s0\" 0 0 srv_rhel8:/data/pulpcore_static /var/lib/pulp/pulpcore_static nfs defaults,_netdev,nosharecache,context=\"system_u:object_r:httpd_sys_content_rw_t:s0\" 0 0",
"systemctl daemon-reload",
"mount /var/lib/pulp",
"mkdir /var/lib/pulp/pulpcore_static",
"mount -a",
"setup.sh -- -b --become-user root",
"systemctl stop pulpcore.service",
"systemctl edit pulpcore.service",
"[Unit] After=network.target var-lib-pulp.mount",
"systemctl enable remote-fs.target",
"systemctl reboot",
"chcon system_u:object_r:pulpcore_etc_t:s0 /etc/pulp/certs/token_{private,public}_key.pem",
"systemctl stop pulpcore.service",
"umount /var/lib/pulp/pulpcore_static",
"umount /var/lib/pulp/",
"srv_rhel8:/data /var/lib/pulp nfs defaults,_netdev,nosharecache,context=\"system_u:object_r:pulpcore_var_lib_t:s0\" 0 0",
"mount -a",
"{\"file\": \"filename\", \"signature\": \"filename.asc\"}",
"#!/usr/bin/env bash FILE_PATH=USD1 SIGNATURE_PATH=\"USD1.asc\" ADMIN_ID=\"USDPULP_SIGNING_KEY_FINGERPRINT\" PASSWORD=\"password\" Create a detached signature gpg --quiet --batch --pinentry-mode loopback --yes --passphrase USDPASSWORD --homedir ~/.gnupg/ --detach-sign --default-key USDADMIN_ID --armor --output USDSIGNATURE_PATH USDFILE_PATH Check the exit status STATUS=USD? if [ USDSTATUS -eq 0 ]; then echo {\\\"file\\\": \\\"USDFILE_PATH\\\", \\\"signature\\\": \\\"USDSIGNATURE_PATH\\\"} else exit USDSTATUS fi",
"[all:vars] . . . automationhub_create_default_collection_signing_service = True automationhub_auto_sign_collections = True automationhub_require_content_approval = True automationhub_collection_signing_service_key = /abs/path/to/galaxy_signing_service.gpg automationhub_collection_signing_service_script = /abs/path/to/collection_signing.sh",
"automationhub_authentication_backend = \"ldap\" automationhub_ldap_server_uri = \"ldap://ldap:389\" (for LDAPs use automationhub_ldap_server_uri = \"ldaps://ldap-server-fqdn\") automationhub_ldap_bind_dn = \"cn=admin,dc=ansible,dc=com\" automationhub_ldap_bind_password = \"GoodNewsEveryone\" automationhub_ldap_user_search_base_dn = \"ou=people,dc=ansible,dc=com\" automationhub_ldap_group_search_base_dn = \"ou=people,dc=ansible,dc=com\"",
"auth_ldap_user_search_scope= 'SUBTREE' auth_ldap_user_search_filter= '(uid=%(user)s)' auth_ldap_group_search_scope= 'SUBTREE' auth_ldap_group_search_filter= '(objectClass=Group)' auth_ldap_group_type_class= 'django_auth_ldap.config:GroupOfNamesType'",
"#ldapextras.yml --- ldap_extra_settings: <LDAP_parameter>: <Values>",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_FLAGS_BY_GROUP: {\"is_superuser\": \"cn=pah-admins,ou=groups,dc=example,dc=com\",}",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_FLAGS_BY_GROUP: {\"is_superuser\": \"cn=pah-admins,ou=groups,dc=example,dc=com\",}",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_MIRROR_GROUPS: True",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_USER_ATTR_MAP: {\"first_name\": \"givenName\", \"last_name\": \"sn\", \"email\": \"mail\",}",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_REQUIRE_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_DENY_GROUP: 'cn=pah-nosoupforyou,ou=groups,dc=example,dc=com'",
"#ldapextras.yml --- ldap_extra_settings: GALAXY_LDAP_LOGGING: True",
"#ldapextras.yml --- ldap_extra_settings: AUTH_LDAP_CACHE_TIMEOUT: 3600",
"Operation unavailable without authentication",
"GALAXY_LDAP_DISABLE_REFERRALS = true",
"[automationcontroller] controller.example.com [automationhub] automationhub.example.com [automationedacontroller] automationedacontroller.example.com [database] data.example.com [all:vars] admin_password='<password>' pg_host='data.example.com' pg_port='5432' pg_database='awx' pg_username='awx' pg_password='<password>' pg_sslmode='prefer' # set to 'verify-full' for client-side enforced SSL registry_url='registry.redhat.io' registry_username='<registry username>' registry_password='<registry password>' Automation hub configuration automationhub_admin_password= <PASSWORD> automationhub_pg_host='data.example.com' automationhub_pg_port=5432 automationhub_pg_database='automationhub' automationhub_pg_username='automationhub' automationhub_pg_password=<PASSWORD> automationhub_pg_sslmode='prefer' Automation Event-Driven Ansible controller configuration automationedacontroller_admin_password='<eda-password>' automationedacontroller_pg_host='data.example.com' automationedacontroller_pg_port=5432 automationedacontroller_pg_database='automationedacontroller' automationedacontroller_pg_username='automationedacontroller' automationedacontroller_pg_password='<password>' Keystore file to install in SSO node sso_custom_keystore_file='/path/to/sso.jks' This install will deploy SSO with sso_use_https=True Keystore password is required for https enabled SSO sso_keystore_password='' This install will deploy a TLS enabled Automation Hub. If for some reason this is not the behavior wanted one can disable TLS enabled deployment. # automationhub_disable_https = False The default install will generate self-signed certificates for the Automation Hub service. If you are providing valid certificate via automationhub_ssl_cert and automationhub_ssl_key, one should toggle that value to True. # automationhub_ssl_validate_certs = False SSL-related variables If set, this will install a custom CA certificate to the system trust store. custom_ca_cert=/path/to/ca.crt Certificate and key to install in Automation Hub node automationhub_ssl_cert=/path/to/automationhub.cert automationhub_ssl_key=/path/to/automationhub.key Certificate and key to install in nginx for the web UI and API web_server_ssl_cert=/path/to/tower.cert web_server_ssl_key=/path/to/tower.key Server-side SSL settings for PostgreSQL (when we are installing it). postgres_use_ssl=False postgres_ssl_cert=/path/to/pgsql.crt postgres_ssl_key=/path/to/pgsql.key Boolean flag used to verify Automation Controller's web certificates when making calls from Automation Event-Driven Ansible controller. automationedacontroller_controller_verify_ssl = true # Certificate and key to install in Automation Event-Driven Ansible controller node automationedacontroller_ssl_cert=/path/to/automationeda.crt automationedacontroller_ssl_key=/path/to/automationeda.key",
"automationedacontroller_safe_plugins: \"ansible.eda.webhook, ansible.eda.alertmanager\"",
"sudo ./setup.sh"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_installation_guide/assembly-platform-install-scenario |
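The safe plugin variable steps earlier in this chapter can be condensed into a short shell sketch. The plugin list is the example value from that section, and re-running setup.sh afterwards is an assumption based on the general installation flow described in the setup script section; replace both with what your environment requires.

# Run from the directory that contains the Ansible Automation Platform installer
mkdir -p ./group_vars/automationedacontroller
cat > ./group_vars/automationedacontroller/custom.yml <<'EOF'
---
automationedacontroller_safe_plugins: "ansible.eda.webhook, ansible.eda.alertmanager"
EOF
# Apply the setting by running the installer again (assumption; see the setup script section)
sudo ./setup.sh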
Chapter 5. Using the Red Hat build of OpenTelemetry | Chapter 5. Using the Red Hat build of OpenTelemetry You can set up and use the Red Hat build of OpenTelemetry to send traces to the OpenTelemetry Collector or the TempoStack. 5.1. Forwarding traces to a TempoStack by using the OpenTelemetry Collector To configure forwarding traces to a TempoStack, you can deploy and configure the OpenTelemetry Collector. You can deploy the OpenTelemetry Collector in the deployment mode by using the specified processors, receivers, and exporters. For other modes, see the OpenTelemetry Collector documentation linked in Additional resources . Prerequisites The Red Hat build of OpenTelemetry Operator is installed. The Tempo Operator is installed. A TempoStack is deployed on the cluster. Procedure Create a service account for the OpenTelemetry Collector. Example ServiceAccount apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment Create a cluster role for the service account. Example ClusterRole apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The k8sattributesprocessor requires permissions for pods and namespaces resources. 2 The resourcedetectionprocessor requires permissions for infrastructures and status. Bind the cluster role to the service account. Example ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the YAML file to define the OpenTelemetryCollector custom resource (CR). Example OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: | receivers: jaeger: protocols: grpc: thrift_binary: thrift_compact: thrift_http: opencensus: otlp: protocols: grpc: http: zipkin: processors: batch: k8sattributes: memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-simplest-distributor:4317" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] 1 The Collector exporter is configured to export OTLP and points to the Tempo distributor endpoint, "tempo-simplest-distributor:4317" in this example, which is already created. 2 The Collector is configured with a receiver for Jaeger traces, OpenCensus traces over the OpenCensus protocol, Zipkin traces over the Zipkin protocol, and OTLP traces over the GRPC protocol. Tip You can deploy tracegen as a test: apiVersion: batch/v1 kind: Job metadata: name: tracegen spec: template: spec: containers: - name: tracegen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/tracegen:latest command: - "./tracegen" args: - -otlp-endpoint=otel-collector:4317 - -otlp-insecure - -duration=30s - -workers=1 restartPolicy: Never backoffLimit: 4 Additional resources OpenTelemetry Collector documentation Deployment examples on GitHub 5.2. 
Sending traces and metrics to the OpenTelemetry Collector Sending traces and metrics to the OpenTelemetry Collector is possible with or without sidecar injection. 5.2.1. Sending traces and metrics to the OpenTelemetry Collector with sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance with sidecar injection. The Red Hat build of OpenTelemetry Operator allows sidecar injection into deployment workloads and automatic configuration of your instrumentation to send telemetry data to the OpenTelemetry Collector. Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: | serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: http: processors: batch: memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This points to the Gateway of the TempoStack instance deployed by using the <example> Tempo Operator. Create your deployment using the otel-collector-sidecar service account. Add the sidecar.opentelemetry.io/inject: "true" annotation to your Deployment object. This will inject all the needed environment variables to send data from your workloads to the OpenTelemetry Collector instance. 5.2.2. Sending traces and metrics to the OpenTelemetry Collector without sidecar injection You can set up sending telemetry data to an OpenTelemetry Collector instance without sidecar injection, which involves manually setting several environment variables. Prerequisites The Red Hat OpenShift distributed tracing platform (Tempo) is installed, and a TempoStack instance is deployed. 
You have access to the cluster through the web console or the OpenShift CLI ( oc ): You are logged in to the web console as a cluster administrator with the cluster-admin role. An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must have an account with the dedicated-admin role. Procedure Create a project for an OpenTelemetry Collector instance. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Grant the permissions to the service account for the k8sattributes and resourcedetection processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector instance with the OpenTelemetryCollector custom resource. apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: | receivers: jaeger: protocols: grpc: thrift_binary: thrift_compact: thrift_http: opencensus: otlp: protocols: grpc: http: zipkin: processors: batch: k8sattributes: memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-<example>-distributor:4317" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] 1 This points to the OTLP endpoint of the distributor of the TempoStack instance deployed by using the <example> Tempo Operator. Set the environment variables in the container with your instrumented application. Name Description Default value OTEL_SERVICE_NAME Sets the value of the service.name resource attribute. "" OTEL_EXPORTER_OTLP_ENDPOINT Base endpoint URL for any signal type with an optionally specified port number. https://localhost:4317 OTEL_EXPORTER_OTLP_CERTIFICATE Path to the certificate file for the TLS credentials of the gRPC client. https://localhost:4317 OTEL_TRACES_SAMPLER Sampler to be used for traces. parentbased_always_on OTEL_EXPORTER_OTLP_PROTOCOL Transport protocol for the OTLP exporter. grpc OTEL_EXPORTER_OTLP_TIMEOUT Maximum time interval for the OTLP exporter to wait for each batch export. 10s OTEL_EXPORTER_OTLP_INSECURE Disables client transport security for gRPC requests. An HTTPS schema overrides it. False
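The table above lists the supported environment variables. The following sketch shows one way to set them on an existing workload with oc set env; the deployment name and namespace are placeholders, and the service address assumes the otel-collector service that the Operator creates for the OpenTelemetryCollector named otel in the observability namespace.

oc set env deployment/my-instrumented-app -n my-app-namespace \
  OTEL_SERVICE_NAME=my-instrumented-app \
  OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector.observability.svc.cluster.local:4317 \
  OTEL_EXPORTER_OTLP_PROTOCOL=grpc \
  OTEL_EXPORTER_OTLP_INSECURE=true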
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel spec: mode: deployment serviceAccount: otel-collector-deployment config: | receivers: jaeger: protocols: grpc: thrift_binary: thrift_compact: thrift_http: opencensus: otlp: protocols: grpc: http: zipkin: processors: batch: k8sattributes: memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-simplest-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] 2 processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"apiVersion: batch/v1 kind: Job metadata: name: tracegen spec: template: spec: containers: - name: tracegen image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/tracegen:latest command: - \"./tracegen\" args: - -otlp-endpoint=otel-collector:4317 - -otlp-insecure - -duration=30s - -workers=1 restartPolicy: Never backoffLimit: 4",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-sidecar namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: serviceAccount: otel-collector-sidecar mode: sidecar config: | serviceAccount: otel-collector-sidecar receivers: otlp: protocols: grpc: http: processors: batch: memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]",
"apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability",
"apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io",
"apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: | receivers: jaeger: protocols: grpc: thrift_binary: thrift_compact: thrift_http: opencensus: otlp: protocols: grpc: http: zipkin: processors: batch: k8sattributes: memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-<example>-distributor:4317\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger, opencensus, otlp, zipkin] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]",
"OTEL_SERVICE_NAME",
"OTEL_EXPORTER_OTLP_ENDPOINT",
"OTEL_EXPORTER_OTLP_CERTIFICATE",
"OTEL_TRACES_SAMPLER",
"OTEL_EXPORTER_OTLP_PROTOCOL",
"OTEL_EXPORTER_OTLP_TIMEOUT",
"OTEL_EXPORTER_OTLP_INSECURE"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/red_hat_build_of_opentelemetry/otel-temp |
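The final step of the sidecar procedure in this chapter, adding the sidecar.opentelemetry.io/inject annotation to your Deployment, is described only in prose. The following is a minimal sketch of applying it with oc patch; the deployment name and namespace are placeholders.

oc patch deployment/my-instrumented-app -n my-app-namespace --type=merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"sidecar.opentelemetry.io/inject":"true"}}}}}'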
Chapter 22. Performance tuning considerations with KIE Server | Chapter 22. Performance tuning considerations with KIE Server The following key concepts or suggested practices can help you optimize KIE Server performance. These concepts are summarized in this section as a convenience and are explained in more detail in the cross-referenced documentation, where applicable. This section will expand or change as needed with new releases of Red Hat Process Automation Manager. Ensure that development mode is enabled during development You can set KIE Server or specific projects in Business Central to use production mode or development mode. By default, KIE Server and all new projects in Business Central are in development mode. This mode provides features that facilitate your development experience, such as flexible project deployment policies, and features that optimize KIE Server performance during development, such as disabled duplicate GAV detection. Use development mode until your Red Hat Process Automation Manager environment is established and completely ready for production mode. For more information about configuring the environment mode or duplicate GAV detection, see the following resources: Chapter 8, Configuring the environment mode in KIE Server and Business Central Packaging and deploying an Red Hat Process Automation Manager project Adapt KIE Server capabilities and extensions to your specific needs The capabilities in KIE Server are determined by plug-in extensions that you can enable, disable, or further extend to meet your business needs. By default, KIE Server extensions are exposed through REST or JMS data transports and use predefined client APIs. You can extend existing KIE Server capabilities with additional REST endpoints, extend supported transport methods beyond REST or JMS, or extend functionality in the KIE Server client. This flexibility in KIE Server functionality enables you to adapt your KIE Server instances to your business needs, instead of adapting your business needs to the default KIE Server capabilities. For information about enabling, disabling, or extending KIE Server capabilities, see Chapter 21, KIE Server capabilities and extensions . | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/performance-tuning-kie-server-ref_execution-server |
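As an illustration only, development or production mode is commonly selected by passing a JVM system property when KIE Server starts on Red Hat JBoss EAP. The property name org.kie.server.mode and the standalone-full.xml profile shown below are assumptions; confirm the exact setting in the chapter on configuring the environment mode before relying on it.

# Illustrative sketch; verify the property name in the environment mode chapter.
EAP_HOME/bin/standalone.sh -c standalone-full.xml \
  -Dorg.kie.server.mode=production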
probe::ioblock_trace.request | probe::ioblock_trace.request Name probe::ioblock_trace.request - Fires just as a generic block I/O request is created for a bio. Synopsis Values None Description name - name of the probe point q - request queue on which this bio was queued. devname - block device name ino - i-node number of the mapped file bytes_done - number of bytes transferred sector - beginning sector for the entire bio flags - see below BIO_UPTODATE 0 ok after I/O completion BIO_RW_BLOCK 1 RW_AHEAD set, and read/write would block BIO_EOF 2 out-of-bounds error BIO_SEG_VALID 3 nr_hw_seg valid BIO_CLONED 4 doesn't own data BIO_BOUNCED 5 bio is a bounce bio BIO_USER_MAPPED 6 contains user pages BIO_EOPNOTSUPP 7 not supported rw - binary trace for read/write request vcnt - bio vector count which represents number of array elements (page, offset, length) which make up this I/O request idx - offset into the bio vector array phys_segments - number of segments in this bio after physical address coalescing is performed. size - total size in bytes bdev - target block device bdev_contains - points to the device object which contains the partition (when bio structure represents a partition) p_start_sect - points to the start sector of the partition structure of the device Context The process making the block I/O request
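A short SystemTap one-liner makes the values listed above concrete. This sketch prints a few of the documented variables for every block I/O request; it assumes that systemtap and the matching kernel debuginfo packages are installed on the system.

stap -e 'probe ioblock_trace.request {
  printf("%s dev=%s rw=%d sector=%d bytes=%d\n",
         execname(), devname, rw, sector, bytes_done)
}'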
"ioblock_trace.request"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-ioblock-trace-request |
Chapter 16. Red Hat Network | Chapter 16. Red Hat Network Red Hat Network is an Internet solution for managing one or more Red Hat Enterprise Linux systems. All Security Alerts, Bug Fix Alerts, and Enhancement Alerts (collectively known as Errata Alerts) can be downloaded directly from Red Hat using the Package Updater standalone application or through the RHN website available at https://rhn.redhat.com/ . Figure 16.1. Your RHN Red Hat Network saves you time because you receive email when updated packages are released. You do not have to search the Web for updated packages or security alerts. By default, Red Hat Network installs the packages as well. You do not have to learn how to use RPM or worry about resolving software package dependencies; RHN does it all. Red Hat Network features include: Errata Alerts - learn when Security Alerts, Bug Fix Alerts, and Enhancement Alerts are issued for all the systems in your network Figure 16.2. Relevant Errata Automatic email notifications - Receive an email notification when an Errata Alert is issued for your system(s) Scheduled Errata Updates - Schedule delivery of Errata Updates Package installation - Schedule package installation on one or more systems with the click of a button Package Updater - Use the Package Updater to download the latest software packages for your system (with optional package installation) Red Hat Network website - Manage multiple systems, downloaded individual packages, and schedule actions such as Errata Updates through a secure Web browser connection from any computer Warning You must activate your Red Hat Enterprise Linux product before registering your system with Red Hat Network to make sure your system is entitled to the correct services. To activate your product, go to: After activating your product, register it with Red Hat Network to receive Errata Updates. The registration process gathers information about the system that is required to notify you of updates. For example, a list of packages installed on the system is compiled so you are only notified about updates that are relevant to your system. The first time the system is booted, the Software Update Setup Assistant prompts you to register. If you did not register then, select Applications (the main menu on the panel) => System Tools => Package Updater on your desktop to start the registration process. Alternately, execute the command yum update from a shell prompt. Figure 16.3. Registering with RHN After registering, use one of the following methods to start receiving updates: Select Applications (the main menu on the panel) => System Tools => Package Updater on your desktop Execute the command yum from a shell prompt Use the RHN website at https://rhn.redhat.com/ Click on the package icon when it appears in the panel to launch the Package Updater . For more detailed instructions, refer to the documentation available at: Note Red Hat Enterprise Linux includes a convenient panel icon that displays visible alerts when there is an update for your Red Hat Enterprise Linux system. This panel icon is not present if no updates are available. | [
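Because the chapter notes that you can run yum update from a shell prompt, the following minimal example shows how to check for and then apply available Errata Updates on a registered system.

# List packages with available updates
yum check-update
# Download and install all available updates
yum update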
"http://www.redhat.com/apps/activate/",
"http://www.redhat.com/docs/manuals/RHNetwork/"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/ch-rhnetwork |
5.5.6. Adding or Deleting a GULM Lock Server Member | 5.5.6. Adding or Deleting a GULM Lock Server Member The procedure for adding or deleting a GULM cluster member depends on the type of GULM node: either a node that functions only as a GULM client (a cluster member capable of running applications, but not eligible to function as a GULM lock server) or a node that functions as a GULM lock server. The procedure in this section describes how to add or delete a member that functions as a GULM lock server. To add a member that functions only as a GULM client, refer to Section 5.5.4, "Adding a GULM Client-only Member" ; to delete a member that functions only as a GULM client, refer to Section 5.5.5, "Deleting a GULM Client-only Member" . Important The number of nodes that can be configured as GULM lock servers is limited to either one, three, or five. To add or delete a GULM member that functions as a GULM lock server in an existing cluster that is currently in operation, follow these steps: At one of the running members (running on a node that is not to be deleted), start system-config-cluster (refer to Section 5.2, "Starting the Cluster Configuration Tool " ). At the Cluster Status Tool tab, disable each service listed under Services . Stop the cluster software on each running node by running the following commands at each node in this order: service rgmanager stop , if the cluster is running high-availability services ( rgmanager ) service gfs stop , if you are using Red Hat GFS service clvmd stop , if CLVM has been used to create clustered volumes service lock_gulmd stop service ccsd stop To add a a GULM lock server member, at system-config-cluster , in the Cluster Configuration Tool tab, add each node and configure fencing for it as in Section 5.5.1, "Adding a Member to a New Cluster" . Make sure to select GULM Lockserver in the Node Properties dialog box (refer to Figure 5.6, "Adding a Member to a New GULM Cluster" ). To delete a GULM lock server member, at system-config-cluster (running on a node that is not to be deleted), in the Cluster Configuration Tool tab, delete each member as follows: If necessary, click the triangle icon to expand the Cluster Nodes property. Select the cluster node to be deleted. At the bottom of the right frame (labeled Properties ), click the Delete Node button. Clicking the Delete Node button causes a warning dialog box to be displayed requesting confirmation of the deletion ( Figure 5.9, "Confirm Deleting a Member" ). Figure 5.9. Confirm Deleting a Member At that dialog box, click Yes to confirm deletion. Propagate the configuration file to the cluster nodes as follows: Log in to the node where you created the configuration file (the same node used for running system-config-cluster ). Using the scp command, copy the /etc/cluster/cluster.conf file to all nodes in the cluster. Note Propagating the cluster configuration file this way is necessary under these circumstances because the cluster software is not running, and therefore not capable of propagating the configuration. Once a cluster is installed and running, the cluster configuration file is propagated using the Red Hat cluster management GUI Send to Cluster button. For more information about propagating the cluster configuration using the GUI Send to Cluster button, refer to Section 6.3, "Modifying the Cluster Configuration" . 
After you have propagated the cluster configuration to the cluster nodes you can either reboot each node or start the cluster software on each cluster node by running the following commands at each node in this order: service ccsd start service lock_gulmd start service clvmd start , if CLVM has been used to create clustered volumes service gfs start , if you are using Red Hat GFS service rgmanager start , if the node is also functioning as a GULM client and the cluster is running cluster services ( rgmanager ) At system-config-cluster (running on a node that was not deleted), in the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected. Note Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 5.1, "Configuration Tasks" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-add-del-member-running-gulm-lockserver-ca |
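The propagation and restart steps above can be scripted. The sketch below assumes two other cluster members named node2 and node3, which are placeholders, and follows the service start order given in this section; omit the optional services that your configuration does not use.

# Copy the updated configuration from the node where it was edited
for node in node2 node3; do
    scp /etc/cluster/cluster.conf root@${node}:/etc/cluster/cluster.conf
done
# On each node, start the cluster software in the documented order
service ccsd start
service lock_gulmd start
service clvmd start      # only if CLVM is used for clustered volumes
service gfs start        # only if Red Hat GFS is used
service rgmanager start  # only if the node runs high-availability services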
Chapter 7. Cluster Network Operator in OpenShift Container Platform | Chapter 7. Cluster Network Operator in OpenShift Container Platform You can use the Cluster Network Operator (CNO) to deploy and manage cluster network components on an OpenShift Container Platform cluster, including the Container Network Interface (CNI) network plugin selected for the cluster during installation. 7.1. Cluster Network Operator The Cluster Network Operator implements the network API from the operator.openshift.io API group. The Operator deploys the OVN-Kubernetes network plugin, or the network provider plugin that you selected during cluster installation, by using a daemon set. Procedure The Cluster Network Operator is deployed during installation as a Kubernetes Deployment . Run the following command to view the Deployment status: USD oc get -n openshift-network-operator deployment/network-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE network-operator 1/1 1 1 56m Run the following command to view the state of the Cluster Network Operator: USD oc get clusteroperator/network Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.5.4 True False False 50m The following fields provide information about the status of the operator: AVAILABLE , PROGRESSING , and DEGRADED . The AVAILABLE field is True when the Cluster Network Operator reports an available status condition. 7.2. Viewing the cluster network configuration Every new OpenShift Container Platform installation has a network.config object named cluster . Procedure Use the oc describe command to view the cluster network configuration: USD oc describe network.config/cluster Example output Name: cluster Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: Network Metadata: Self Link: /apis/config.openshift.io/v1/networks/cluster Spec: 1 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Status: 2 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cluster Network MTU: 8951 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Events: <none> 1 The Spec field displays the configured state of the cluster network. 2 The Status field displays the current state of the cluster network configuration. 7.3. Viewing Cluster Network Operator status You can inspect the status and view the details of the Cluster Network Operator using the oc describe command. Procedure Run the following command to view the status of the Cluster Network Operator: USD oc describe clusteroperators/network 7.4. Enabling IP forwarding globally From OpenShift Container Platform 4.14 onward, global IP address forwarding is disabled on OVN-Kubernetes based cluster deployments to prevent undesirable effects for cluster administrators with nodes acting as routers. However, in some cases where an administrator expects traffic to be forwarded a new configuration parameter ipForwarding is available to allow forwarding of all IP traffic. 
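Before changing the parameter in the following procedure, you can check its current value. This is a small sketch using a jsonpath query; an empty result means the field is not set explicitly and the default, Restricted, applies.

oc get network.operator cluster \
  -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipForwarding}{"\n"}'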
To re-enable IP forwarding for all traffic on OVN-Kubernetes managed interfaces, set the gatewayConfig.ipForwarding specification in the Cluster Network Operator to Global by following this procedure: Procedure Back up the existing network configuration by running the following command: USD oc get network.operator cluster -o yaml > network-config-backup.yaml Run the following command to modify the existing network configuration: USD oc edit network.operator cluster Add or update the following block under spec as illustrated in the following example: spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes clusterNetworkMTU: 8900 defaultNetwork: ovnKubernetesConfig: gatewayConfig: ipForwarding: Global Save and close the file. After applying the changes, the OpenShift Cluster Network Operator (CNO) applies the update across the cluster. You can monitor the progress by using the following command: USD oc get clusteroperators network The status should eventually report as Available , Progressing=False , and Degraded=False . Alternatively, you can enable IP forwarding globally by running the following command: USD oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' Note The other valid option for this parameter is Restricted , in case you want to revert this change. Restricted is the default, and with that setting global IP address forwarding is disabled. 7.5. Viewing Cluster Network Operator logs You can view Cluster Network Operator logs by using the oc logs command. Procedure Run the following command to view the logs of the Cluster Network Operator: USD oc logs --namespace=openshift-network-operator deployment/network-operator 7.6. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. Note After cluster installation, you can only modify the clusterNetwork IP address range. The default network type can only be changed from OpenShift SDN to OVN-Kubernetes through migration. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 7.6.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 7.1. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services.
The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 This value is ready-only and inherited from the Network.config.openshift.io object named cluster during cluster installation. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 7.2. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 7.3. openshiftSDNConfig object Field Type Description mode string The network isolation mode for OpenShift SDN. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This value is normally configured automatically. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 7.4. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This value is normally configured automatically. genevePort integer The UDP port for the Geneve overlay network. ipsecConfig object An object describing the IPsec mode for the cluster. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 7.5. 
ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 7.6. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 7.7. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 7.8. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. 
The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 7.9. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 7.10. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 7.11. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Note You can only change the configuration for your cluster network plugin during cluster installation, except for the gatewayConfig field that can be changed at runtime as a postinstallation activity. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 7.12. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 7.6.2. 
Cluster Network Operator example configuration A complete CNO configuration is specified in the following example: Example Cluster Network Operator object apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes clusterNetworkMTU: 8900 7.7. Additional resources Network API in the operator.openshift.io API group Modifying the clusterNetwork IP address range Migrating from the OpenShift SDN network plugin | [
"oc get -n openshift-network-operator deployment/network-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE network-operator 1/1 1 1 56m",
"oc get clusteroperator/network",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.5.4 True False False 50m",
"oc describe network.config/cluster",
"Name: cluster Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: Network Metadata: Self Link: /apis/config.openshift.io/v1/networks/cluster Spec: 1 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Status: 2 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cluster Network MTU: 8951 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Events: <none>",
"oc describe clusteroperators/network",
"oc get network.operator cluster -o yaml > network-config-backup.yaml",
"oc edit network.operator cluster",
"spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes clusterNetworkMTU: 8900 defaultNetwork: ovnKubernetesConfig: gatewayConfig: ipForwarding: Global",
"oc get clusteroperators network",
"oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipForwarding\": \"Global\"}}}}}",
"oc logs --namespace=openshift-network-operator deployment/network-operator",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes clusterNetworkMTU: 8900"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/networking/cluster-network-operator |
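As a quick check of the gatewayConfig change described in the procedure above, the following sketch shows one way to confirm the setting and the Operator rollout. It assumes cluster-admin access and the standard oc CLI; the jsonpath expression simply mirrors the field names used in the examples above.
# print the currently configured ipForwarding value; it should read Global after the patch
$ oc get network.operator cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.gatewayConfig.ipForwarding}{"\n"}'
# watch the network Operator until it reports Available=True, Progressing=False, Degraded=False
$ oc get clusteroperators network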
Chapter 19. Recording and analyzing performance profiles with perf | Chapter 19. Recording and analyzing performance profiles with perf The perf tool allows you to record performance data and analyze it at a later time. Prerequisites You have the perf user space tool installed as described in Installing perf . 19.1. The purpose of perf record The perf record command samples performance data and stores it in a file, perf.data , which can be read and visualized with other perf commands. perf.data is generated in the current directory and can be accessed at a later time, possibly on a different machine. If you do not specify a command for perf record to sample, it records data until you manually stop the process by pressing Ctrl+C . You can attach perf record to specific processes by passing the -p option followed by one or more process IDs. You can run perf record without root access; however, doing so only samples performance data in the user space. In the default mode, perf record uses CPU cycles as the sampling event and operates in per-thread mode with inherit mode enabled. 19.2. Recording a performance profile without root access You can use perf record without root access to sample and record performance data in the user-space only. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Sample and record the performance data: Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl + C . Additional resources perf-record(1) man page on your system 19.3. Recording a performance profile with root access You can use perf record with root access to sample and record performance data in both the user-space and the kernel-space simultaneously. Prerequisites You have the perf user space tool installed as described in Installing perf . You have root access. Procedure Sample and record the performance data: Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl + C . Additional resources perf-record(1) man page on your system 19.4. Recording a performance profile in per-CPU mode You can use perf record in per-CPU mode to sample and record performance data in both the user-space and the kernel-space simultaneously across all threads on a monitored CPU. By default, per-CPU mode monitors all online CPUs. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Sample and record the performance data: Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl + C . Additional resources perf-record(1) man page on your system 19.5. Capturing call graph data with perf record You can configure the perf record tool so that it records which function is calling other functions in the performance profile. This helps to identify a bottleneck if several processes are calling the same function. Prerequisites You have the perf user space tool installed as described in Installing perf . Procedure Sample and record performance data with the --call-graph option: Replace command with the command you want to sample data during. If you do not specify a command, then perf record will sample data until you manually stop it by pressing Ctrl + C .
Replace method with one of the following unwinding methods: fp Uses the frame pointer method. Depending on compiler optimization, such as with binaries built with the GCC option --fomit-frame-pointer , this may not be able to unwind the stack. dwarf Uses DWARF Call Frame Information to unwind the stack. lbr Uses the last branch record hardware on Intel processors. Additional resources perf-record(1) man page on your system 19.6. Analyzing perf.data with perf report You can use perf report to display and analyze a perf.data file. Prerequisites You have the perf user space tool installed as described in Installing perf . There is a perf.data file in the current directory. If the perf.data file was created with root access, you need to run perf report with root access too. Procedure Display the contents of the perf.data file for further analysis: This command displays output similar to the following: Additional resources perf-report(1) man page on your system 19.7. Interpretation of perf report output The table displayed by running the perf report command sorts the data into several columns: The 'Overhead' column Indicates what percentage of overall samples were collected in that particular function. The 'Command' column Tells you which process the samples were collected from. The 'Shared Object' column Displays the name of the ELF image where the samples come from (the name [kernel.kallsyms] is used when the samples come from the kernel). The 'Symbol' column Displays the function name or symbol. In default mode, the functions are sorted in descending order with those with the highest overhead displayed first. 19.8. Generating a perf.data file that is readable on a different device You can use the perf tool to record performance data into a perf.data file to be analyzed on a different device. Prerequisites You have the perf user space tool installed as described in Installing perf . The kernel debuginfo package is installed. For more information, see Getting debuginfo packages for an application or library using GDB. Procedure Capture performance data you are interested in investigating further: This example would generate a perf.data over the entire system for a period of seconds seconds as dictated by the use of the sleep command. It would also capture call graph data using the frame pointer method. Generate an archive file containing debug symbols of the recorded data: Verification Verify that the archive file has been generated in your current active directory: The output will display every file in your current directory that begins with perf.data . The archive file will be named either: or Additional resources Recording and analyzing performance profiles with perf Capturing call graph data with perf record 19.9. Analyzing a perf.data file that was created on a different device You can use the perf tool to analyze a perf.data file that was generated on a different device. Prerequisites You have the perf user space tool installed as described in Installing perf . A perf.data file and associated archive file generated on a different device are present on the current device being used. Procedure Copy both the perf.data file and the archive file into your current active directory. Extract the archive file into ~/.debug : Note The archive file might also be named perf.data.tar.gz . Open the perf.data file for further analysis: 19.10. 
Why perf displays some function names as raw function addresses For kernel functions, perf uses the information from the /proc/kallsyms file to map the samples to their respective function names or symbols. For functions executed in the user space, however, you might see raw function addresses because the binary is stripped. The debuginfo package of the executable must be installed or, if the executable is a locally developed application, the application must be compiled with debugging information turned on (the -g option in GCC) to display the function names or symbols in such a situation. Note It is not necessary to re-run the perf record command after installing the debuginfo associated with an executable. Simply re-run the perf report command. Additional Resources Enabling debugging with debugging information 19.11. Enabling debug and source repositories A standard installation of Red Hat Enterprise Linux does not enable the debug and source repositories. These repositories contain information needed to debug the system components and measure their performance. Procedure Enable the source and debug information package channels: The $(uname -i) part is automatically replaced with a matching value for the architecture of your system: Architecture name Value 64-bit Intel and AMD x86_64 64-bit ARM aarch64 IBM POWER ppc64le 64-bit IBM Z s390x 19.12. Getting debuginfo packages for an application or library using GDB Debugging information is required to debug code. For code that is installed from a package, the GNU Debugger (GDB) automatically recognizes missing debug information, resolves the package name and provides concrete advice on how to get the package. Prerequisites The application or library you want to debug must be installed on the system. GDB and the debuginfo-install tool must be installed on the system. For details, see Setting up to debug applications . Repositories providing debuginfo and debugsource packages must be configured and enabled on the system. For details, see Enabling debug and source repositories . Procedure Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run. Exit GDB: type q and confirm with Enter . Run the command suggested by GDB to install the required debuginfo packages: The dnf package management tool provides a summary of the changes, asks for confirmation and once you confirm, downloads and installs all the necessary files. In case GDB is not able to suggest the debuginfo package, follow the procedure described in Getting debuginfo packages for an application or library manually . Additional resources How can I download or install debuginfo packages for RHEL systems? (Red Hat Knowledgebase) | [
"perf record command",
"perf record command",
"perf record -a command",
"perf record --call-graph method command",
"perf report",
"Samples: 2K of event 'cycles', Event count (approx.): 235462960 Overhead Command Shared Object Symbol 2.36% kswapd0 [kernel.kallsyms] [k] page_vma_mapped_walk 2.13% sssd_kcm libc-2.28.so [.] memset_avx2_erms 2.13% perf [kernel.kallsyms] [k] smp_call_function_single 1.53% gnome-shell libc-2.28.so [.] strcmp_avx2 1.17% gnome-shell libglib-2.0.so.0.5600.4 [.] g_hash_table_lookup 0.93% Xorg libc-2.28.so [.] memmove_avx_unaligned_erms 0.89% gnome-shell libgobject-2.0.so.0.5600.4 [.] g_object_unref 0.87% kswapd0 [kernel.kallsyms] [k] page_referenced_one 0.86% gnome-shell libc-2.28.so [.] memmove_avx_unaligned_erms 0.83% Xorg [kernel.kallsyms] [k] alloc_vmap_area 0.63% gnome-shell libglib-2.0.so.0.5600.4 [.] g_slice_alloc 0.53% gnome-shell libgirepository-1.0.so.1.0.0 [.] g_base_info_unref 0.53% gnome-shell ld-2.28.so [.] _dl_find_dso_for_object 0.49% kswapd0 [kernel.kallsyms] [k] vma_interval_tree_iter_next 0.48% gnome-shell libpthread-2.28.so [.] pthread_getspecific 0.47% gnome-shell libgirepository-1.0.so.1.0.0 [.] 0x0000000000013b1d 0.45% gnome-shell libglib-2.0.so.0.5600.4 [.] g_slice_free1 0.45% gnome-shell libgobject-2.0.so.0.5600.4 [.] g_type_check_instance_is_fundamentally_a 0.44% gnome-shell libc-2.28.so [.] malloc 0.41% swapper [kernel.kallsyms] [k] apic_timer_interrupt 0.40% gnome-shell ld-2.28.so [.] _dl_lookup_symbol_x 0.39% kswapd0 [kernel.kallsyms] [k] raw_callee_save___pv_queued_spin_unlock",
"perf record -a --call-graph fp sleep seconds",
"perf archive",
"ls perf.data*",
"perf.data.tar.gz",
"perf.data.tar.bz2",
"mkdir -p ~/.debug tar xf perf.data.tar.bz2 -C ~/.debug",
"perf report",
"subscription-manager repos --enable rhel-9-for-USD(uname -i)-baseos-debug-rpms subscription-manager repos --enable rhel-9-for-USD(uname -i)-baseos-source-rpms subscription-manager repos --enable rhel-9-for-USD(uname -i)-appstream-debug-rpms subscription-manager repos --enable rhel-9-for-USD(uname -i)-appstream-source-rpms",
"gdb -q /bin/ls Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done. (no debugging symbols found)...done. Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64 (gdb)",
"(gdb) q",
"dnf debuginfo-install coreutils-8.30-6.el8.x86_64"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/recording-and-analyzing-performance-profiles-with-perf_monitoring-and-managing-system-status-and-performance |
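Tying the perf sections above together, here is a minimal end-to-end sketch: profile the whole system, bundle the debug symbols, and analyze the result on another machine. It assumes perf and the relevant debuginfo packages are installed as described earlier; the 30-second window, the dwarf unwinding method, and the archive name are illustrative choices, not required values.
# record system-wide samples with DWARF call graphs for 30 seconds (run as root to include kernel space)
$ perf record -a --call-graph dwarf -- sleep 30
# bundle the debug symbols referenced by perf.data so another host can resolve them
$ perf archive
# on the analysis host: unpack the symbols and open the profile
$ mkdir -p ~/.debug && tar xf perf.data.tar.bz2 -C ~/.debug
$ perf report --stdio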
Providing feedback on Red Hat build of Quarkus documentation | Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/security_overview/proc_providing-feedback-on-red-hat-documentation_security-overview |
Preface | Preface Red Hat OpenStack Platform (RHOSP) provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on top of Red Hat Enterprise Linux. It offers a scalable, fault-tolerant platform for the development of cloud-enabled workloads. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/product_guide/pr01 |
1.5.2. Verifying Signed Packages | 1.5.2. Verifying Signed Packages All Red Hat Enterprise Linux packages are signed with the Red Hat GPG key. GPG stands for GNU Privacy Guard, or GnuPG, a free software package used for ensuring the authenticity of distributed files. For example, a private key (secret key) locks the package while the public key unlocks and verifies the package. If the public key distributed by Red Hat Enterprise Linux does not match the private key during RPM verification, the package may have been altered and therefore cannot be trusted. The RPM utility within Red Hat Enterprise Linux 6 automatically tries to verify the GPG signature of an RPM package before installing it. If the Red Hat GPG key is not installed, install it from a secure, static location, such as a Red Hat installation CD-ROM or DVD. Assuming the disc is mounted in /mnt/cdrom , use the following command as the root user to import it into the keyring (a database of trusted keys on the system): Now, the Red Hat GPG key is located in the /etc/pki/rpm-gpg/ directory. To display a list of all keys installed for RPM verification, execute the following command: To display details about a specific key, use the rpm -qi command followed by the output from the command, as in this example: It is extremely important to verify the signature of the RPM files before installing them to ensure that they have not been altered from the original source of the packages. To verify all the downloaded packages at once, issue the following command: For each package, if the GPG key verifies successfully, the command returns gpg OK . If it does not, make sure you are using the correct Red Hat public key, as well as verifying the source of the content. Packages that do not pass GPG verification should not be installed, as they may have been altered by a third party. After verifying the GPG key and downloading all the packages associated with the errata report, install the packages as root at a shell prompt. Alternatively, you may use the Yum utility to verify signed packages. Yum provides secure package management by enabling GPG signature verification on GPG-signed packages to be turned on for all package repositories (that is, package sources), or for individual repositories. When signature verification is enabled, Yum will refuse to install any packages not GPG-signed with the correct key for that repository. This means that you can trust that the RPM packages you download and install on your system are from a trusted source, such as Red Hat, and were not modified during transfer. In order to have automatic GPG signature verification enabled when installing or updating packages via Yum, ensure you have the following option defined under the [main] section of your /etc/yum.conf file: | [
"~]# rpm --import /mnt/cdrom/RPM-GPG-KEY",
"~]# rpm -qa gpg-pubkey* gpg-pubkey-db42a60e-37ea5438",
"~]# rpm -qi gpg-pubkey-db42a60e-37ea5438 Name : gpg-pubkey Relocations: (not relocatable) Version : 2fa658e0 Vendor: (none) Release : 45700c69 Build Date: Fri 07 Oct 2011 02:04:51 PM CEST Install Date: Fri 07 Oct 2011 02:04:51 PM CEST Build Host: localhost Group : Public Keys Source RPM: (none) [output truncated]",
"~]# rpm -K /root/updates/*.rpm alsa-lib-1.0.22-3.el6.x86_64.rpm: rsa sha1 (md5) pgp md5 OK alsa-utils-1.0.21-3.el6.x86_64.rpm: rsa sha1 (md5) pgp md5 OK aspell-0.60.6-12.el6.x86_64.rpm: rsa sha1 (md5) pgp md5 OK",
"gpgcheck=1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-updating_packages-verifying_signed_packages |
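As a small illustration of the verification workflow above, the following sketch imports the Red Hat GPG key from its installed location, checks a downloaded package, and confirms that Yum enforces signature checking. The package path is a placeholder for whatever errata packages you have downloaded; expect an OK result like the example output shown earlier.
~]# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
~]# rpm -qa gpg-pubkey*
~]# rpm -K /root/updates/example-package.rpm
~]# grep gpgcheck /etc/yum.conf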
Chapter 1. Overview | Chapter 1. Overview Based on Fedora 28 and the upstream kernel 4.18, Red Hat Enterprise Linux 8.0 provides users with a stable, secure, consistent foundation across hybrid cloud deployments with the tools needed to support traditional and emerging workloads. Highlights of the release include: Distribution Content is available through the BaseOS and Application Stream ( AppStream ) repositories. The AppStream repository supports a new extension of the traditional RPM format - modules . This allows for multiple major versions of a component to be available for install. See Chapter 3, Distribution of content in RHEL 8 for more information. Software Management The YUM package manager is now based on the DNF technology and it provides support for modular content, increased performance, and a well-designed stable API for integration with tooling. See Section 5.1.4, "Software management" for more details. Shells and command-line tools RHEL 8 provides the following version control systems : Git 2.18 , Mercurial 4.8 , and Subversion 1.10 . See Section 5.1.6, "Shells and command-line tools" for details. Dynamic programming languages, web and database servers Python 3.6 is the default Python implementation in RHEL 8; limited support for Python 2.7 is provided. No version of Python is installed by default. Node.js is new in RHEL. Other dynamic programming languages have been updated since RHEL 7: PHP 7.2 , Ruby 2.5 , Perl 5.26 , SWIG 3.0 are now available. The following database servers are distributed with RHEL 8: MariaDB 10.3 , MySQL 8.0 , PostgreSQL 10 , PostgreSQL 9.6 , and Redis 5 . RHEL 8 provides the Apache HTTP Server 2.4 and introduces a new web server , nginx 1.14 . Squid has been updated to version 4.4, and a new proxy caching server is now included: Varnish Cache 6.0 . See Section 5.1.7, "Dynamic programming languages, web and database servers" for more information. Desktop GNOME Shell has been rebased to version 3.28. The GNOME session and the GNOME Display Manager use Wayland as their default display server. The X.Org server, which is the default display server in RHEL 7, is available as well. See Section 5.1.8, "Desktop" for more information. Installer and image creation The Anaconda installer can utilize LUKS2 disk encryption, and install the system on NVDIMM devices. The Image Builder tool enables users to create customized system images in a variety of formats, including images prepared for deployment on clouds of various providers. Installation from a DVD using Hardware Management Console ( HMC ) and Support Element ( SE ) on IBM Z are available in RHEL 8. See Section 5.1.2, "Installer and image creation" for further details. Kernel The extended Berkeley Packet Filtering ( eBPF) feature enables the user space to attach custom programs onto a variety of points (sockets, trace points, packet reception) to receive and process data. This feature is available as a Technology Preview . BPF Compiler Collection ( BCC ), a tool for creating efficient kernel tracing and manipulation programs, is available as a Technology Preview . See Section 5.3.1, "Kernel" for more information. File systems and storage The LUKS version 2 ( LUKS2 ) format replaces the legacy LUKS (LUKS1) format. The dm-crypt subsystem and the cryptsetup tool now uses LUKS2 as the default format for encrypted volumes. See Section 5.1.12, "File systems and storage" for more information. 
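As a brief, hedged illustration of the LUKS2 default mentioned above, the following sketch formats a blank device with cryptsetup and inspects the resulting header. The device name /dev/sdX is a placeholder, and luksFormat destroys any data on the device, so only run this against a disposable test device.
# format with the LUKS2 on-disk format (the default in RHEL 8); this wipes the device
cryptsetup luksFormat --type luks2 /dev/sdX
# inspect the header; a LUKS2 volume reports version 2
cryptsetup luksDump /dev/sdX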
Security System-wide cryptographic policies , which configure the core cryptographic subsystems, covering the TLS, IPsec, SSH, DNSSEC, and Kerberos protocols, are applied by default. With the new update-crypto-policies command, the administrator can easily switch between modes: default, legacy, future, and fips. Support for smart cards and Hardware Security Modules ( HSM ) with PKCS #11 is now consistent across the system. See Section 5.1.15, "Security" for more information. Networking The nftables framework replaces iptables in the role of the default network packet filtering facility. The firewalld daemon now uses nftables as its default backend. Support for IPVLAN virtual network drivers that enable the network connectivity for multiple containers has been introduced. The eXpress Data Path ( XDP ), XDP for Traffic Control ( tc ), and Address Family eXpress Data Path ( AF_XDP ), as parts of the extended Berkeley Packet Filtering ( eBPF) feature, are available as Technology Previews . For more details, see Section 5.3.7, "Networking" in Technology Previews. See Section 5.1.14, "Networking" in New features for additional features. Virtualization A more modern PCI Express-based machine type ( Q35 ) is now supported and automatically configured in virtual machines created in RHEL 8. This provides a variety of improvements in features and compatibility of virtual devices. Virtual machines can now be created and managed using the RHEL 8 web console, also known as Cockpit . The QEMU emulator introduces the sandboxing feature, which provides configurable limitations to what system calls QEMU can perform, and thus makes virtual machines more secure. See Section 5.1.16, "Virtualization" for more information. Compilers and development tools The GCC compiler based on version 8.2 brings support for more recent C++ language standard versions, better optimizations, new code hardening techniques, improved warnings, and new hardware features. Various tools for code generation, manipulation, and debugging can now experimentally handle the DWARF5 debugging information format. Kernel support for eBPF tracing is available for some tools, such as BCC , PCP , and SystemTap . The glibc libraries based on version 2.28 add support for Unicode 11, newer Linux system calls, key improvements in the DNS stub resolver, additional security hardening, and improved performance. RHEL 8 provides OpenJDK 11, OpenJDK 8, IcedTea-Web, and various Java tools, such as Ant , Maven , or Scala . See Section 5.1.11, "Compilers and development tools" for additional details. High availability and clusters The Pacemaker cluster resource manager has been upgraded to upstream version 2.0.0, which provides a number of bug fixes and enhancements. In RHEL 8, the pcs configuration system fully supports Corosync 3, knet , and node names. See Section 5.1.13, "High availability and clusters" for more information. Additional resources Capabilities and limits of Red Hat Enterprise Linux 8 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the Red Hat Enterprise Linux life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. The Package manifest document provides a package listing for RHEL 8. Major differences between RHEL 7 and RHEL 8 are documented in Considerations in adopting RHEL 8 .
Instructions on how to perform an in-place upgrade from RHEL 7 to RHEL 8 are provided by the document Upgrading from RHEL 7 to RHEL 8 . Currently supported upgrade paths are listed in Supported in-place upgrade paths for Red Hat Enterprise Linux . The Red Hat Insights service, which enables you to proactively identify, examine, and resolve known technical issues, is now available with all RHEL subscriptions. For instructions on how to install the Red Hat Insights client and register your system to the service, see the Red Hat Insights Get Started page. Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Registration Assistant Kickstart Generator Product Life Cycle Checker Red Hat Product Certificates Red Hat Satellite Upgrade Helper Red Hat CVE Checker JVM Options Configuration Tool Load Balancer Configuration Tool Red Hat Code Browser Yum Repository Configuration Helper Red Hat Out of Memory Analyzer | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.0_release_notes/overview
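To make two of the highlights above concrete, the following sketch shows the AppStream module workflow and the system-wide cryptographic policy switch on a RHEL 8 host. The postgresql module stream is only an example of what may be available, and switching the policy to FUTURE is an illustrative choice; a reboot is recommended after changing the policy so that all services pick up the new settings.
# list the streams of an AppStream module, then install a specific stream
yum module list postgresql
yum module install postgresql:10
# show the active system-wide cryptographic policy, then switch it
update-crypto-policies --show
update-crypto-policies --set FUTURE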
3.7. Translator Result Caching | 3.7. Translator Result Caching Translators can contribute cache entries into the result set cache by using the CacheDirective object. The resulting cache entries behave just as if they were created by a user query. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/translator_result_caching |
17.5. Obtaining Node Information | 17.5. Obtaining Node Information A Red Hat Gluster Storage trusted storage pool consists of nodes, volumes, and bricks. The get-state command outputs information about a node to a specified file. Using the command line interface, external applications can invoke the command on all nodes of the trusted storage pool, and parse and collate the data obtained from all these nodes to get an easy-to-use and complete picture of the state of the trusted storage pool in a machine parseable format. Executing the get-state Command The get-state command outputs information about a node to a specified file and can be invoked in different ways. The table below shows the options that can be used with the get-state command. Table 17.1. get-state Command Options Command Description gluster get-state glusterd state information is saved in the /var/run/gluster/glusterd_state_ timestamp file. gluster get-state file filename glusterd state information is saved in the /var/run/gluster/ directory with the filename as specified in the command. gluster get-state odir directory file filename glusterd state information is saved in the directory and in the file name as specified in the command. gluster get-state detail glusterd state information is saved in the /var/run/gluster/glusterd_state_ timestamp file, and all clients connected per brick are included in the output. gluster get-state volumeoptions glusterd state information is saved in the /var/run/gluster/glusterd_state_ timestamp file, and all values for all the volume options are included in the output. Interpreting the Output with Examples Invocation of the get-state command saves the information that reflects the node level status of the trusted storage pool as maintained in glusterd (no other daemons are supported as of now) to a file specified in the command. By default, the output will be dumped to /var/run/gluster/glusterd_state_ timestamp file . Invocation of the get-state command provides the following information: Table 17.2. Output Description Section Description Global Displays the UUID and the op-version of the glusterd. Global options Displays cluster specific options that have been set explicitly through the volume set command. Peers Displays the peer node information including its hostname and connection status. Volumes Displays the list of volumes created on this node along with the detailed information on each volume. Services Displays the list of the services configured on this node along with its status. Misc Displays miscellaneous information about the node. For example, configured ports. Example Output for gluster get-state : View the file using the cat state_dump_file_path command: Invocation of the gluster get-state volumeoptions lists all volume options irrespective of whether the volume option has been explicitly set or not. Example Output for gluster get-state volumeoptions : View the file using the cat state_dump_file_path command: | [
"gluster get-state [odir path_to_output_dir ] [file filename ] [detail|volumeoptions] Usage: get-state [options]",
"gluster get-state glusterd state dumped to /var/run/gluster/glusterd_state_ timestamp",
"[Global] MYUUID: 5392df4c-aeb9-4e8c-9001-58e984897bf6 op-version: 70200 [Global options] [Peers] Peer1.primary_hostname: output omitted Peer1.uuid: 19700669-dff6-4d9f-bf73-ca370c7dc462 Peer1.state: Peer in Cluster Peer1.connected: Connected Peer1.othernames: Peer2.primary_hostname: output omitted Peer2.uuid: 179d4a5d-0539-4c4e-91a4-2e5bebad25a9 Peer2.state: Peer in Cluster Peer2.connected: Connected Peer2.othernames: Peer3.primary_hostname: output omitted Peer3.uuid: 80c715a0-5b67-4e7d-8e6e-0449955d1f66 Peer3.state: Peer in Cluster Peer3.connected: Connected Peer3.othernames: Peer4.primary_hostname: output omitted Peer4.uuid: bed027c6-596f-43a1-b250-11e252a1c524 Peer4.state: Peer in Cluster Peer4.connected: Connected Peer4.othernames: Peer5.primary_hostname: output omitted Peer5.uuid: d7084399-d47c-4f36-991b-9bd2e9e52dd4 Peer5.state: Peer in Cluster Peer5.connected: Connected Peer5.othernames: [Volumes] Volume1.name: ecv6012 Volume1.id: e33fbc3e-9240-4024-975d-5f3ed8ce2540 Volume1.type: Distributed-Disperse Volume1.transport_type: tcp Volume1.status: Started Volume1.profile_enabled: 0 Volume1.brickcount: 18 Volume1.Brick1.path: output omitted :/gluster/brick1/ecv6012 Volume1.Brick1.hostname: output omitted Volume1.Brick2.path: output omitted :/gluster/brick1/ecv6012 Volume1.Brick2.hostname: output omitted Volume1.Brick3.path: output omitted :/gluster/brick1/ecv6012 Volume1.Brick3.hostname: output omitted Volume1.Brick3.port: 49152 Volume1.Brick3.rdma_port: 0 Volume1.Brick3.port_registered: 1 Volume1.Brick3.status: Started Volume1.Brick3.spacefree: 423360098304Bytes Volume1.Brick3.spacetotal: 427132190720Bytes Volume1.Brick4.path: output omitted :/gluster/brick1/ecv6012 Volume1.Brick4.hostname: output omitted Volume1.Brick5.path: output omitted :/gluster/brick1/ecv6012 Volume1.Brick5.hostname: output omitted Volume1.Brick6.path: output omitted :/gluster/brick1/ecv6012 Volume1.Brick6.hostname: output omitted Volume1.Brick7.path: output omitted :/gluster/brick2/ecv6012 Volume1.Brick7.hostname: output omitted Volume1.Brick8.path: output omitted :/gluster/brick2/ecv6012 Volume1.Brick8.hostname: output omitted Volume1.Brick9.path: output omitted :/gluster/brick2/ecv6012 Volume1.Brick9.hostname: output omitted Volume1.Brick9.port: 49153 Volume1.Brick9.rdma_port: 0 Volume1.Brick9.port_registered: 1 Volume1.Brick9.status: Started Volume1.Brick9.spacefree: 423832850432Bytes Volume1.Brick9.spacetotal: 427132190720Bytes Volume1.Brick10.path: output omitted :/gluster/brick2/ecv6012 Volume1.Brick10.hostname: output omitted Volume1.Brick11.path: output omitted :/gluster/brick2/ecv6012 Volume1.Brick11.hostname: output omitted Volume1.Brick12.path: output omitted :/gluster/brick2/ecv6012 Volume1.Brick12.hostname: output omitted Volume1.Brick13.path: output omitted :/gluster/brick3/ecv6012 Volume1.Brick13.hostname: output omitted Volume1.Brick14.path: output omitted :/gluster/brick3/ecv6012 Volume1.Brick14.hostname: output omitted Volume1.Brick15.path: output omitted :/gluster/brick3/ecv6012 Volume1.Brick15.hostname: output omitted Volume1.Brick15.port: 49154 Volume1.Brick15.rdma_port: 0 Volume1.Brick15.port_registered: 1 Volume1.Brick15.status: Started Volume1.Brick15.spacefree: 423877419008Bytes Volume1.Brick15.spacetotal: 427132190720Bytes Volume1.Brick16.path: output omitted :/gluster/brick3/ecv6012 Volume1.Brick16.hostname: output omitted Volume1.Brick17.path: output omitted :/gluster/brick3/ecv6012 Volume1.Brick17.hostname: output omitted Volume1.Brick18.path: output omitted :/gluster/brick3/ecv6012 
Volume1.Brick18.hostname: output omitted Volume1.snap_count: 0 Volume1.stripe_count: 1 Volume1.replica_count: 1 Volume1.subvol_count: 3 Volume1.arbiter_count: 0 Volume1.disperse_count: 6 Volume1.redundancy_count: 2 Volume1.quorum_status: not_applicable Volume1.snapd_svc.online_status: Offline Volume1.snapd_svc.inited: true Volume1.rebalance.id: 00000000-0000-0000-0000-000000000000 Volume1.rebalance.status: not_started Volume1.rebalance.failures: 0 Volume1.rebalance.skipped: 0 Volume1.rebalance.lookedup: 0 Volume1.rebalance.files: 0 Volume1.rebalance.data: 0Bytes Volume1.time_left: 0 Volume1.gsync_count: 0 Volume1.options.server.event-threads: 8 Volume1.options.client.event-threads: 8 Volume1.options.disperse.shd-max-threads: 24 Volume1.options.transport.address-family: inet Volume1.options.storage.fips-mode-rchecksum: on Volume1.options.nfs.disable: on [Services] svc1.name: glustershd svc1.online_status: Online svc2.name: nfs svc2.online_status: Offline svc3.name: bitd svc3.online_status: Offline svc4.name: scrub svc4.online_status: Offline svc5.name: quotad svc5.online_status: Offline [Misc] Base port: 49152 Last allocated port: 49154",
"gluster get-state volumeoptions glusterd state dumped to /var/run/gluster/glusterd_state_ timestamp",
"[Volume Options] Volume1.name: ecv6012 Volume1.options.count: 374 Volume1.options.value374: (null) Volume1.options.key374: features.cloudsync-product-id Volume1.options.value373: (null) Volume1.options.key373: features.cloudsync-store-id Volume1.options.value372: off Volume1.options.key372: features.cloudsync-remote-read Volume1.options.value371: off Volume1.options.key371: features.enforce-mandatory-lock Volume1.options.value370: (null) Volume1.options.key370: features.cloudsync-storetype Volume1.options.value369: on Volume1.options.key369: ctime.noatime Volume1.options.value368: off Volume1.options.key368: features.ctime Volume1.options.value367: off Volume1.options.key367: features.cloudsync Volume1.options.value366: off Volume1.options.key366: features.sdfs Volume1.options.value365: on Volume1.options.key365: disperse.parallel-writes Volume1.options.value364: Volume1.options.key364: delay-gen.enable Volume1.options.value363: 100000 Volume1.options.key363: delay-gen.delay-duration Volume1.options.value362: 10% Volume1.options.key362: delay-gen.delay-percentage Volume1.options.value361: off Volume1.options.key361: debug.delay-gen Volume1.options.value360: INFO Volume1.options.key360: cluster.daemon-log-level Volume1.options.value359: off Volume1.options.key359: features.selinux Volume1.options.value358: 2 Volume1.options.key358: cluster.halo-min-replicas Volume1.options.value357: 99999 Volume1.options.key357: cluster.halo-max-replicas Volume1.options.value356: 5 Volume1.options.key356: cluster.halo-max-latency Volume1.options.value355: 5 Volume1.options.key355: cluster.halo-nfsd-max-latency Volume1.options.value354: 99999 Volume1.options.key354: cluster.halo-shd-max-latency Volume1.options.value353: False Volume1.options.key353: cluster.halo-enabled Volume1.options.value352: 4 Volume1.options.key352: disperse.stripe-cache Volume1.options.value351: on Volume1.options.key351: disperse.optimistic-change-log Volume1.options.value350: 250 Volume1.options.key350: cluster.max-bricks-per-process Volume1.options.value349: 100 Volume1.options.key349: glusterd.vol_count_per_thread Volume1.options.value348: disable Volume1.options.key348: cluster.brick-multiplex Volume1.options.value347: 60 Volume1.options.key347: performance.nl-cache-timeout Volume1.options.value346: 10MB Volume1.options.key346: performance.nl-cache-limit Volume1.options.value345: false Volume1.options.key345: performance.nl-cache-positive-entry Volume1.options.value344: 10MB Volume1.options.key344: performance.rda-cache-limit Volume1.options.value343: 128KB Volume1.options.key343: performance.rda-high-wmark Volume1.options.value342: 4096 Volume1.options.key342: performance.rda-low-wmark Volume1.options.value341: 131072 Volume1.options.key341: performance.rda-request-size Volume1.options.value340: off Volume1.options.key340: performance.parallel-readdir Volume1.options.value339: off Volume1.options.key339: cluster.use-compound-fops Volume1.options.value338: 1 Volume1.options.key338: disperse.self-heal-window-size Volume1.options.value337: auto Volume1.options.key337: disperse.cpu-extensions Volume1.options.value336: 1024 Volume1.options.key336: disperse.shd-wait-qlength Volume1.options.value335: 1 Volume1.options.key335: disperse.shd-max-threads Volume1.options.value334: 5 Volume1.options.key334: features.locks-notify-contention-delay Volume1.options.value333: yes Volume1.options.key333: features.locks-notify-contention Volume1.options.value332: false Volume1.options.key332: features.locks-monkey-unlocking 
Volume1.options.value331: 0 Volume1.options.key331: features.locks-revocation-max-blocked Volume1.options.value330: false Volume1.options.key330: features.locks-revocation-clear-all Volume1.options.value329: 0 Volume1.options.key329: features.locks-revocation-secs Volume1.options.value328: on Volume1.options.key328: cluster.granular-entry-heal Volume1.options.value327: full Volume1.options.key327: cluster.locking-scheme Volume1.options.value326: 1024 Volume1.options.key326: cluster.shd-wait-qlength Volume1.options.value325: 1 Volume1.options.key325: cluster.shd-max-threads Volume1.options.value324: gfid-hash Volume1.options.key324: disperse.read-policy Volume1.options.value323: on Volume1.options.key323: dht.force-readdirp Volume1.options.value322: 600 Volume1.options.key322: cluster.heal-timeout Volume1.options.value321: 128 Volume1.options.key321: disperse.heal-wait-qlength Volume1.options.value320: 8 Volume1.options.key320: disperse.background-heals Volume1.options.value319: 60 Volume1.options.key319: features.lease-lock-recall-timeout Volume1.options.value318: off Volume1.options.key318: features.leases Volume1.options.value317: off Volume1.options.key317: ganesha.enable Volume1.options.value316: 60 Volume1.options.key316: features.cache-invalidation-timeout Volume1.options.value315: off Volume1.options.key315: features.cache-invalidation Volume1.options.value314: 120 Volume1.options.key314: features.expiry-time Volume1.options.value313: false Volume1.options.key313: features.scrub Volume1.options.value312: biweekly Volume1.options.key312: features.scrub-freq Volume1.options.value311: lazy Volume1.options.key311: features.scrub-throttle Volume1.options.value310: 100 Volume1.options.key310: features.shard-deletion-rate Volume1.options.value309: 16384 Volume1.options.key309: features.shard-lru-limit Volume1.options.value308: 64MB Volume1.options.key308: features.shard-block-size Volume1.options.value307: off Volume1.options.key307: features.shard Volume1.options.value306: (null) Volume1.options.key306: client.bind-insecure Volume1.options.value305: no Volume1.options.key305: cluster.quorum-reads Volume1.options.value304: enable Volume1.options.key304: cluster.disperse-self-heal-daemon Volume1.options.value303: off Volume1.options.key303: locks.mandatory-locking Volume1.options.value302: off Volume1.options.key302: locks.trace Volume1.options.value301: 25000 Volume1.options.key301: features.ctr-sql-db-wal-autocheckpoint Volume1.options.value300: 12500 Volume1.options.key300: features.ctr-sql-db-cachesize Volume1.options.value299: 300 Volume1.options.key299: features.ctr_lookupheal_inode_timeout Volume1.options.value298: 300 Volume1.options.key298: features.ctr_lookupheal_link_timeout Volume1.options.value297: off Volume1.options.key297: features.ctr_link_consistency Volume1.options.value296: off Volume1.options.key296: features.ctr-record-metadata-heat Volume1.options.value295: off Volume1.options.key295: features.record-counters Volume1.options.value294: off Volume1.options.key294: features.ctr-enabled Volume1.options.value293: 604800 Volume1.options.key293: cluster.tier-cold-compact-frequency Volume1.options.value292: 604800 Volume1.options.key292: cluster.tier-hot-compact-frequency Volume1.options.value291: on Volume1.options.key291: cluster.tier-compact Volume1.options.value290: 100 Volume1.options.key290: cluster.tier-query-limit Volume1.options.value289: 10000 Volume1.options.key289: cluster.tier-max-files Volume1.options.value288: 4000 Volume1.options.key288: cluster.tier-max-mb 
Volume1.options.value287: 0 Volume1.options.key287: cluster.tier-max-promote-file-size Volume1.options.value286: cache Volume1.options.key286: cluster.tier-mode Volume1.options.value285: 75 Volume1.options.key285: cluster.watermark-low Volume1.options.value284: 90 Volume1.options.key284: cluster.watermark-hi Volume1.options.value283: 3600 Volume1.options.key283: cluster.tier-demote-frequency Volume1.options.value282: 120 Volume1.options.key282: cluster.tier-promote-frequency Volume1.options.value281: off Volume1.options.key281: cluster.tier-pause Volume1.options.value280: 0 Volume1.options.key280: cluster.read-freq-threshold Volume1.options.value279: 0 Volume1.options.key279: cluster.write-freq-threshold Volume1.options.value278: disable Volume1.options.key278: cluster.enable-shared-storage Volume1.options.value277: off Volume1.options.key277: features.trash-internal-op Volume1.options.value276: 5MB Volume1.options.key276: features.trash-max-filesize Volume1.options.value275: (null) Volume1.options.key275: features.trash-eliminate-path Volume1.options.value274: .trashcan Volume1.options.key274: features.trash-dir Volume1.options.value273: off Volume1.options.key273: features.trash Volume1.options.value272: 120 Volume1.options.key272: features.barrier-timeout Volume1.options.value271: disable Volume1.options.key271: features.barrier Volume1.options.value270: off Volume1.options.key270: changelog.capture-del-path Volume1.options.value269: 120 Volume1.options.key269: changelog.changelog-barrier-timeout Volume1.options.value268: 5 Volume1.options.key268: changelog.fsync-interval Volume1.options.value267: 15 Volume1.options.key267: changelog.rollover-time Volume1.options.value266: ascii Volume1.options.key266: changelog.encoding Volume1.options.value265: {{ brick.path }}/.glusterfs/changelogs Volume1.options.key265: changelog.changelog-dir Volume1.options.value264: off Volume1.options.key264: changelog.changelog Volume1.options.value263: 51 Volume1.options.key263: cluster.server-quorum-ratio Volume1.options.value262: off Volume1.options.key262: cluster.server-quorum-type Volume1.options.value261: off Volume1.options.key261: config.gfproxyd Volume1.options.value260: off Volume1.options.key260: features.ctime Volume1.options.value259: 100 Volume1.options.key259: storage.max-hardlinks Volume1.options.value258: 0777 Volume1.options.key258: storage.create-directory-mask Volume1.options.value257: 0777 Volume1.options.key257: storage.create-mask Volume1.options.value256: 0000 Volume1.options.key256: storage.force-directory-mode Volume1.options.value255: 0000 Volume1.options.key255: storage.force-create-mode Volume1.options.value254: on Volume1.options.key254: storage.fips-mode-rchecksum Volume1.options.value253: 20 Volume1.options.key253: storage.health-check-timeout Volume1.options.value252: 1 Volume1.options.key252: storage.reserve Volume1.options.value251: : Volume1.options.key251: storage.gfid2path-separator Volume1.options.value250: on Volume1.options.key250: storage.gfid2path Volume1.options.value249: off Volume1.options.key249: storage.build-pgfid Volume1.options.value248: 30 Volume1.options.key248: storage.health-check-interval Volume1.options.value247: off Volume1.options.key247: storage.node-uuid-pathinfo Volume1.options.value246: -1 Volume1.options.key246: storage.owner-gid Volume1.options.value245: -1 Volume1.options.key245: storage.owner-uid Volume1.options.value244: 0 Volume1.options.key244: storage.batch-fsync-delay-usec Volume1.options.value243: reverse-fsync Volume1.options.key243: 
storage.batch-fsync-mode Volume1.options.value242: off Volume1.options.key242: storage.linux-aio Volume1.options.value241: 180 Volume1.options.key241: features.auto-commit-period Volume1.options.value240: relax Volume1.options.key240: features.retention-mode Volume1.options.value239: 120 Volume1.options.key239: features.default-retention-period Volume1.options.value238: on Volume1.options.key238: features.worm-files-deletable Volume1.options.value237: off Volume1.options.key237: features.worm-file-level Volume1.options.value236: off Volume1.options.key236: features.worm Volume1.options.value235: off Volume1.options.key235: features.read-only Volume1.options.value234: (null) Volume1.options.key234: nfs.auth-cache-ttl-sec Volume1.options.value233: (null) Volume1.options.key233: nfs.auth-refresh-interval-sec Volume1.options.value232: (null) Volume1.options.key232: nfs.exports-auth-enable Volume1.options.value231: 2 Volume1.options.key231: nfs.event-threads Volume1.options.value230: on Volume1.options.key230: nfs.rdirplus Volume1.options.value229: (1 * 1048576ULL) Volume1.options.key229: nfs.readdir-size Volume1.options.value228: (1 * 1048576ULL) Volume1.options.key228: nfs.write-size Volume1.options.value227: (1 * 1048576ULL) Volume1.options.key227: nfs.read-size Volume1.options.value226: 0x20000 Volume1.options.key226: nfs.drc-size Volume1.options.value225: off Volume1.options.key225: nfs.drc Volume1.options.value224: off Volume1.options.key224: nfs.server-aux-gids Volume1.options.value223: /sbin/rpc.statd Volume1.options.key223: nfs.rpc-statd Volume1.options.value222: /var/lib/glusterd/nfs/rmtab Volume1.options.key222: nfs.mount-rmtab Volume1.options.value221: off Volume1.options.key221: nfs.mount-udp Volume1.options.value220: on Volume1.options.key220: nfs.acl Volume1.options.value219: on Volume1.options.key219: nfs.nlm Volume1.options.value218: on Volume1.options.key218: nfs.disable Volume1.options.value217: Volume1.options.key217: nfs.export-dir Volume1.options.value216: read-write Volume1.options.key216: nfs.volume-access Volume1.options.value215: off Volume1.options.key215: nfs.trusted-write Volume1.options.value214: off Volume1.options.key214: nfs.trusted-sync Volume1.options.value213: off Volume1.options.key213: nfs.ports-insecure Volume1.options.value212: none Volume1.options.key212: nfs.rpc-auth-reject Volume1.options.value211: all Volume1.options.key211: nfs.rpc-auth-allow Volume1.options.value210: on Volume1.options.key210: nfs.rpc-auth-null Volume1.options.value209: on Volume1.options.key209: nfs.rpc-auth-unix Volume1.options.value208: 2049 Volume1.options.key208: nfs.port Volume1.options.value207: 16 Volume1.options.key207: nfs.outstanding-rpc-limit Volume1.options.value206: on Volume1.options.key206: nfs.register-with-portmap Volume1.options.value205: off Volume1.options.key205: nfs.dynamic-volumes Volume1.options.value204: off Volume1.options.key204: nfs.addr-namelookup Volume1.options.value203: on Volume1.options.key203: nfs.export-volumes Volume1.options.value202: on Volume1.options.key202: nfs.export-dirs Volume1.options.value201: 15 Volume1.options.key201: nfs.mem-factor Volume1.options.value200: no Volume1.options.key200: nfs.enable-ino32 Volume1.options.value199: (null) Volume1.options.key199: debug.error-fops Volume1.options.value198: off Volume1.options.key198: debug.random-failure Volume1.options.value197: (null) Volume1.options.key197: debug.error-number Volume1.options.value196: (null) Volume1.options.key196: debug.error-failure Volume1.options.value195: off 
Volume1.options.key195: debug.error-gen Volume1.options.value194: (null) Volume1.options.key194: debug.include-ops Volume1.options.value193: (null) Volume1.options.key193: debug.exclude-ops Volume1.options.value192: no Volume1.options.key192: debug.log-file Volume1.options.value191: no Volume1.options.key191: debug.log-history Volume1.options.value190: off Volume1.options.key190: debug.trace Volume1.options.value189: disable Volume1.options.key189: features.bitrot Volume1.options.value188: off Volume1.options.key188: features.inode-quota Volume1.options.value187: off Volume1.options.key187: features.quota Volume1.options.value186: off Volume1.options.key186: geo-replication.ignore-pid-check Volume1.options.value185: off Volume1.options.key185: geo-replication.ignore-pid-check Volume1.options.value184: off Volume1.options.key184: geo-replication.indexing Volume1.options.value183: off Volume1.options.key183: geo-replication.indexing Volume1.options.value182: off Volume1.options.key182: features.quota-deem-statfs Volume1.options.value181: 86400 Volume1.options.key181: features.alert-time Volume1.options.value180: 5 Volume1.options.key180: features.hard-timeout Volume1.options.value179: 60 Volume1.options.key179: features.soft-timeout Volume1.options.value178: 80% Volume1.options.key178: features.default-soft-limit Volume1.options.value171: off Volume1.options.key171: features.tag-namespaces Volume1.options.value170: off Volume1.options.key170: features.show-snapshot-directory Volume1.options.value169: .snaps Volume1.options.key169: features.snapshot-directory Volume1.options.value168: off Volume1.options.key168: features.uss Volume1.options.value167: true Volume1.options.key167: performance.global-cache-invalidation Volume1.options.value166: false Volume1.options.key166: performance.cache-invalidation Volume1.options.value165: true Volume1.options.key165: performance.force-readdirp Volume1.options.value164: off Volume1.options.key164: performance.nfs.io-threads Volume1.options.value163: off Volume1.options.key163: performance.nfs.stat-prefetch Volume1.options.value162: off Volume1.options.key162: performance.nfs.quick-read Volume1.options.value161: off Volume1.options.key161: performance.nfs.io-cache Volume1.options.value160: off Volume1.options.key160: performance.nfs.read-ahead Volume1.options.value159: on Volume1.options.key159: performance.nfs.write-behind Volume1.options.value158: off Volume1.options.key158: performance.client-io-threads Volume1.options.value157: on Volume1.options.key157: performance.stat-prefetch Volume1.options.value156: off Volume1.options.key156: performance.nl-cache Volume1.options.value155: on Volume1.options.key155: performance.quick-read Volume1.options.value154: on Volume1.options.key154: performance.open-behind Volume1.options.value153: on Volume1.options.key153: performance.io-cache Volume1.options.value152: off Volume1.options.key152: performance.readdir-ahead Volume1.options.value151: on Volume1.options.key151: performance.read-ahead Volume1.options.value150: on Volume1.options.key150: performance.write-behind Volume1.options.value149: inet Volume1.options.key149: transport.address-family Volume1.options.value148: 1024 Volume1.options.key148: transport.listen-backlog Volume1.options.value147: 9 Volume1.options.key147: server.keepalive-count Volume1.options.value146: 2 Volume1.options.key146: server.keepalive-interval Volume1.options.value145: 20 Volume1.options.key145: server.keepalive-time Volume1.options.value144: 42 Volume1.options.key144: 
server.tcp-user-timeout Volume1.options.value143: 2 Volume1.options.key143: server.event-threads Volume1.options.value142: (null) Volume1.options.key142: server.own-thread Volume1.options.value141: 300 Volume1.options.key141: server.gid-timeout Volume1.options.value140: on Volume1.options.key140: client.send-gids Volume1.options.value139: on Volume1.options.key139: server.dynamic-auth Volume1.options.value138: off Volume1.options.key138: server.manage-gids Volume1.options.value137: * Volume1.options.key137: auth.ssl-allow Volume1.options.value136: off Volume1.options.key136: server.ssl Volume1.options.value135: 64 Volume1.options.key135: server.outstanding-rpc-limit Volume1.options.value134: /var/run/gluster Volume1.options.key134: server.statedump-path Volume1.options.value133: 65534 Volume1.options.key133: server.anongid Volume1.options.value132: 65534 Volume1.options.key132: server.anonuid Volume1.options.value131: off Volume1.options.key131: server.all-squash Volume1.options.value130: off Volume1.options.key130: server.root-squash Volume1.options.value129: on Volume1.options.key129: server.allow-insecure Volume1.options.value128: 1 Volume1.options.key128: transport.keepalive Volume1.options.value127: (null) Volume1.options.key127: auth.reject Volume1.options.value126: * Volume1.options.key126: auth.allow Volume1.options.value125: 16384 Volume1.options.key125: network.inode-lru-limit Volume1.options.value124: (null) Volume1.options.key124: network.tcp-window-size Volume1.options.value123: 9 Volume1.options.key123: client.keepalive-count Volume1.options.value122: 2 Volume1.options.key122: client.keepalive-interval Volume1.options.value121: 20 Volume1.options.key121: client.keepalive-time Volume1.options.value120: 0 Volume1.options.key120: client.tcp-user-timeout Volume1.options.value119: 2 Volume1.options.key119: client.event-threads Volume1.options.value118: disable Volume1.options.key118: network.remote-dio Volume1.options.value117: off Volume1.options.key117: client.ssl Volume1.options.value116: (null) Volume1.options.key116: network.tcp-window-size Volume1.options.value115: 42 Volume1.options.key115: network.ping-timeout Volume1.options.value114: 1800 Volume1.options.key114: network.frame-timeout Volume1.options.value113: off Volume1.options.key113: features.encryption Volume1.options.value112: false Volume1.options.key112: performance.nl-cache-pass-through Volume1.options.value111: Volume1.options.key111: performance.xattr-cache-list Volume1.options.value110: off Volume1.options.key110: performance.md-cache-statfs Volume1.options.value109: true Volume1.options.key109: performance.cache-ima-xattrs Volume1.options.value108: true Volume1.options.key108: performance.cache-capability-xattrs Volume1.options.value107: false Volume1.options.key107: performance.cache-samba-metadata Volume1.options.value106: true Volume1.options.key106: performance.cache-swift-metadata Volume1.options.value105: 1 Volume1.options.key105: performance.md-cache-timeout Volume1.options.value104: false Volume1.options.key104: performance.md-cache-pass-through Volume1.options.value103: false Volume1.options.key103: performance.readdir-ahead-pass-through Volume1.options.value102: false Volume1.options.key102: performance.read-ahead-pass-through Volume1.options.value101: 4 Volume1.options.key101: performance.read-ahead-page-count Volume1.options.value100: false Volume1.options.key100: performance.open-behind-pass-through Volume1.options.value99: yes Volume1.options.key99: performance.read-after-open 
Volume1.options.value98: yes Volume1.options.key98: performance.lazy-open Volume1.options.value97: on Volume1.options.key97: performance.nfs.write-behind-trickling-writes Volume1.options.value96: 128KB Volume1.options.key96: performance.aggregate-size Volume1.options.value95: on Volume1.options.key95: performance.write-behind-trickling-writes Volume1.options.value94: off Volume1.options.key94: performance.nfs.strict-write-ordering Volume1.options.value93: off Volume1.options.key93: performance.strict-write-ordering Volume1.options.value92: off Volume1.options.key92: performance.nfs.strict-o-direct Volume1.options.value91: off Volume1.options.key91: performance.strict-o-direct Volume1.options.value90: 1MB Volume1.options.key90: performance.nfs.write-behind-window-size Volume1.options.value89: off Volume1.options.key89: performance.resync-failed-syncs-after-fsync Volume1.options.value88: 1MB Volume1.options.key88: performance.write-behind-window-size Volume1.options.value87: on Volume1.options.key87: performance.nfs.flush-behind Volume1.options.value86: on Volume1.options.key86: performance.flush-behind Volume1.options.value85: false Volume1.options.key85: performance.ctime-invalidation Volume1.options.value84: false Volume1.options.key84: performance.quick-read-cache-invalidation Volume1.options.value83: 1 Volume1.options.key83: performance.qr-cache-timeout Volume1.options.value82: 128MB Volume1.options.key82: performance.cache-size Volume1.options.value81: false Volume1.options.key81: performance.io-cache-pass-through Volume1.options.value80: false Volume1.options.key80: performance.iot-pass-through Volume1.options.value79: off Volume1.options.key79: performance.iot-cleanup-disconnected-reqs Volume1.options.value78: (null) Volume1.options.key78: performance.iot-watchdog-secs Volume1.options.value77: on Volume1.options.key77: performance.enable-least-priority Volume1.options.value76: 1 Volume1.options.key76: performance.least-prio-threads Volume1.options.value75: 16 Volume1.options.key75: performance.low-prio-threads Volume1.options.value74: 16 Volume1.options.key74: performance.normal-prio-threads Volume1.options.value73: 16 Volume1.options.key73: performance.high-prio-threads Volume1.options.value72: 16 Volume1.options.key72: performance.io-thread-count Volume1.options.value71: 32MB Volume1.options.key71: performance.cache-size Volume1.options.value70: Volume1.options.key70: performance.cache-priority Volume1.options.value69: 1 Volume1.options.key69: performance.cache-refresh-timeout Volume1.options.value68: 0 Volume1.options.key68: performance.cache-min-file-size Volume1.options.value67: 0 Volume1.options.key67: performance.cache-max-file-size Volume1.options.value66: 86400 Volume1.options.key66: diagnostics.stats-dnscache-ttl-sec Volume1.options.value65: 65535 Volume1.options.key65: diagnostics.fop-sample-buf-size Volume1.options.value64: json Volume1.options.key64: diagnostics.stats-dump-format Volume1.options.value63: 0 Volume1.options.key63: diagnostics.fop-sample-interval Volume1.options.value62: 0 Volume1.options.key62: diagnostics.stats-dump-interval Volume1.options.value61: 120 Volume1.options.key61: diagnostics.client-log-flush-timeout Volume1.options.value60: 120 Volume1.options.key60: diagnostics.brick-log-flush-timeout Volume1.options.value59: 5 Volume1.options.key59: diagnostics.client-log-buf-size Volume1.options.value58: 5 Volume1.options.key58: diagnostics.brick-log-buf-size Volume1.options.value57: (null) Volume1.options.key57: diagnostics.client-log-format 
Volume1.options.value56: (null) Volume1.options.key56: diagnostics.brick-log-format Volume1.options.value55: (null) Volume1.options.key55: diagnostics.client-logger Volume1.options.value54: (null) Volume1.options.key54: diagnostics.brick-logger Volume1.options.value53: CRITICAL Volume1.options.key53: diagnostics.client-sys-log-level Volume1.options.value52: CRITICAL Volume1.options.key52: diagnostics.brick-sys-log-level Volume1.options.value51: INFO Volume1.options.key51: diagnostics.client-log-level Volume1.options.value50: INFO Volume1.options.key50: diagnostics.brick-log-level Volume1.options.value49: off Volume1.options.key49: diagnostics.count-fop-hits Volume1.options.value48: off Volume1.options.key48: diagnostics.dump-fd-stats Volume1.options.value47: off Volume1.options.key47: diagnostics.latency-measurement Volume1.options.value46: yes Volume1.options.key46: cluster.full-lock Volume1.options.value45: none Volume1.options.key45: cluster.favorite-child-policy Volume1.options.value44: 128 Volume1.options.key44: cluster.heal-wait-queue-length Volume1.options.value43: no Volume1.options.key43: cluster.consistent-metadata Volume1.options.value42: on Volume1.options.key42: cluster.ensure-durability Volume1.options.value41: 1 Volume1.options.key41: cluster.post-op-delay-secs Volume1.options.value40: 1KB Volume1.options.key40: cluster.self-heal-readdir-size Volume1.options.value39: true Volume1.options.key39: cluster.choose-local Volume1.options.value38: (null) Volume1.options.key38: cluster.quorum-count Volume1.options.value37: auto Volume1.options.key37: cluster.quorum-type Volume1.options.value36: 1 Volume1.options.key36: disperse.other-eager-lock-timeout Volume1.options.value35: 1 Volume1.options.key35: disperse.eager-lock-timeout Volume1.options.value34: on Volume1.options.key34: disperse.other-eager-lock Volume1.options.value33: on Volume1.options.key33: disperse.eager-lock Volume1.options.value32: on Volume1.options.key32: cluster.eager-lock Volume1.options.value31: (null) Volume1.options.key31: cluster.data-self-heal-algorithm Volume1.options.value30: on Volume1.options.key30: cluster.metadata-change-log Volume1.options.value29: on Volume1.options.key29: cluster.data-change-log Volume1.options.value28: 1 Volume1.options.key28: cluster.self-heal-window-size Volume1.options.value27: 600 Volume1.options.key27: cluster.heal-timeout Volume1.options.value26: on Volume1.options.key26: cluster.self-heal-daemon Volume1.options.value25: off Volume1.options.key25: cluster.entry-self-heal Volume1.options.value24: off Volume1.options.key24: cluster.data-self-heal Volume1.options.value23: off Volume1.options.key23: cluster.metadata-self-heal Volume1.options.value22: 8 Volume1.options.key22: cluster.background-self-heal-count Volume1.options.value21: 1 Volume1.options.key21: cluster.read-hash-mode Volume1.options.value20: -1 Volume1.options.key20: cluster.read-subvolume-index Volume1.options.value19: (null) Volume1.options.key19: cluster.read-subvolume Volume1.options.value18: on Volume1.options.key18: cluster.entry-change-log Volume1.options.value17: (null) Volume1.options.key17: cluster.switch-pattern Volume1.options.value16: on Volume1.options.key16: cluster.weighted-rebalance Volume1.options.value15: (null) Volume1.options.key15: cluster.local-volume-name Volume1.options.value14: off Volume1.options.key14: cluster.force-migration Volume1.options.value13: off Volume1.options.key13: cluster.lock-migration Volume1.options.value12: normal Volume1.options.key12: cluster.rebal-throttle 
Volume1.options.value11: off Volume1.options.key11: cluster.randomize-hash-range-by-gfid Volume1.options.value10: trusted.glusterfs.dht Volume1.options.key10: cluster.dht-xattr-name Volume1.options.value9: (null) Volume1.options.key9: cluster.extra-hash-regex Volume1.options.value8: (null) Volume1.options.key8: cluster.rsync-hash-regex Volume1.options.value7: off Volume1.options.key7: cluster.readdir-optimize Volume1.options.value6: (null) Volume1.options.key6: cluster.subvols-per-directory Volume1.options.value5: off Volume1.options.key5: cluster.rebalance-stats Volume1.options.value4: 5% Volume1.options.key4: cluster.min-free-inodes Volume1.options.value3: 10% Volume1.options.key3: cluster.min-free-disk Volume1.options.value2: on Volume1.options.key2: cluster.lookup-optimize Volume1.options.value1: on Volume1.options.key1: cluster.lookup-unhashed"
] | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/obtaining_node_information |
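The volume option listing above is the tail of a glusterd state file. As a hedged illustration only (the exact arguments are assumptions based on the gluster CLI, not quoted from this row), such a state file can typically be generated on a node with:
gluster get-state
gluster get-state glusterd odir /var/run/gluster/ file glusterd_state_backup volumeoptions
The first form usually writes the dump to a timestamped file under /var/run/gluster/ ; the volumeoptions argument is what is assumed to produce the full per-volume option listing shown above.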
Chapter 30. Tuning scheduling policy | Chapter 30. Tuning scheduling policy In Red Hat Enterprise Linux, the smallest unit of process execution is called a thread. The system scheduler determines which processor runs a thread, and for how long the thread runs. However, because the scheduler's primary concern is to keep the system busy, it may not schedule threads optimally for application performance. For example, say an application on a NUMA system is running on Node A when a processor on Node B becomes available. To keep the processor on Node B busy, the scheduler moves one of the application's threads to Node B. However, the application thread still requires access to memory on Node A, and this memory now takes longer to access because the thread is running on Node B and Node A memory is no longer local to the thread. Thus, it may take longer for the thread to finish running on Node B than it would have taken to wait for a processor on Node A to become available, and then to execute the thread on the original node with local memory access. 30.1. Categories of scheduling policies Performance-sensitive applications often benefit from the designer or administrator determining where threads are run. The Linux scheduler implements a number of scheduling policies which determine where and for how long a thread runs. The following are the two major categories of scheduling policies: Normal policies Normal threads are used for tasks of normal priority. Realtime policies Realtime policies are used for time-sensitive tasks that must complete without interruptions. Realtime threads are not subject to time slicing. This means that a realtime thread runs until it blocks, exits, voluntarily yields, or is preempted by a higher priority thread. The lowest priority realtime thread is scheduled before any thread with a normal policy. For more information, see Static priority scheduling with SCHED_FIFO and Round robin priority scheduling with SCHED_RR . Additional resources sched(7) , sched_setaffinity(2) , sched_getaffinity(2) , sched_setscheduler(2) , and sched_getscheduler(2) man pages on your system 30.2. Static priority scheduling with SCHED_FIFO The SCHED_FIFO policy, also called static priority scheduling, is a realtime policy that defines a fixed priority for each thread. This policy allows administrators to improve event response time and reduce latency. It is intended for time-sensitive tasks, but it is not recommended to run a task under this policy for an extended period of time, because the task is never preempted by time slicing. When SCHED_FIFO is in use, the scheduler scans the list of all the SCHED_FIFO threads in order of priority and schedules the highest priority thread that is ready to run. The priority level of a SCHED_FIFO thread can be any integer from 1 to 99, where 99 is treated as the highest priority. Red Hat recommends starting with a lower number and increasing priority only when you identify latency issues. Warning Because realtime threads are not subject to time slicing, Red Hat does not recommend setting a priority of 99. This keeps your process at the same priority level as migration and watchdog threads; if your thread goes into a computational loop and these threads are blocked, they will not be able to run. Systems with a single processor will eventually hang in this situation. Administrators can limit SCHED_FIFO bandwidth to prevent realtime application programmers from initiating realtime tasks that monopolize the processor.
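As a brief, hedged sketch that is not part of the original chapter (the parameter names are the kernel tunables described next), the SCHED_FIFO bandwidth limit can be inspected and temporarily adjusted at runtime with sysctl:
sysctl kernel.sched_rt_period_us kernel.sched_rt_runtime_us
sysctl -w kernel.sched_rt_runtime_us=900000
The second command uses an example value only; it reserves 10% of each scheduling period for non-realtime tasks instead of the default 5%.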
The following are some of the parameters used in this policy: /proc/sys/kernel/sched_rt_period_us This parameter defines the time period, in microseconds, that is considered to be one hundred percent of the processor bandwidth. The default value is 1000000 Ξs, or 1 second. /proc/sys/kernel/sched_rt_runtime_us This parameter defines the time period, in microseconds, that is devoted to running real-time threads. The default value is 950000 Ξs, or 0.95 seconds. 30.3. Round robin priority scheduling with SCHED_RR The SCHED_RR policy is a round-robin variant of SCHED_FIFO . This policy is useful when multiple threads need to run at the same priority level. Like SCHED_FIFO , SCHED_RR is a realtime policy that defines a fixed priority for each thread. The scheduler scans the list of all SCHED_RR threads in order of priority and schedules the highest priority thread that is ready to run. However, unlike SCHED_FIFO , threads that have the same priority are scheduled in a round-robin style within a certain time slice. You can set the value of this time slice in milliseconds with the sched_rr_timeslice_ms kernel parameter in the /proc/sys/kernel/sched_rr_timeslice_ms file. The lowest value is 1 millisecond. 30.4. Normal scheduling with SCHED_OTHER The SCHED_OTHER policy is the default scheduling policy in Red Hat Enterprise Linux 9. This policy uses the Completely Fair Scheduler (CFS) to allow fair processor access to all threads scheduled with this policy. This policy is most useful when there are a large number of threads or when data throughput is a priority, as it allows more efficient scheduling of threads over time. When this policy is in use, the scheduler creates a dynamic priority list based partly on the niceness value of each process thread. Administrators can change the niceness value of a process, but cannot change the scheduler's dynamic priority list directly. 30.5. Setting scheduler policies Check and adjust scheduler policies and priorities by using the chrt command line tool. It can start new processes with the desired properties, or change the properties of a running process. It can also be used for setting the policy at runtime. Procedure View the process ID (PID) of the active processes: Use the --pid or -p option with the ps command to view the details of the particular PID. Check the scheduling policy, PID, and priority of a particular process: Here, 468 and 476 are the PIDs of the processes. Set the scheduling policy of a process: For example, to set the process with PID 1000 to SCHED_FIFO , with a priority of 50 : For example, to set the process with PID 1000 to SCHED_OTHER , with a priority of 0 : For example, to set the process with PID 1000 to SCHED_RR , with a priority of 10 : To start a new application with a particular policy and priority, specify the name of the application: Additional resources chrt(1) man page on your system Policy Options for the chrt command Changing the priority of services during the boot process 30.6. Policy options for the chrt command Using the chrt command, you can view and set the scheduling policy of a process. The following table describes the appropriate policy options, which can be used to set the scheduling policy of a process. Table 30.1. Policy Options for the chrt Command Short option Long option Description -f --fifo Set schedule to SCHED_FIFO -o --other Set schedule to SCHED_OTHER -r --rr Set schedule to SCHED_RR 30.7.
Changing the priority of services during the boot process Using the systemd service, it is possible to set up real-time priorities for services launched during the boot process. The unit configuration directives are used to change the priority of a service during the boot process. The boot process priority change is done by using the following directives in the [Service] section of the unit file: CPUSchedulingPolicy= Sets the CPU scheduling policy for executed processes. It is used to set other , fifo , and rr policies. CPUSchedulingPriority= Sets the CPU scheduling priority for executed processes. The available priority range depends on the selected CPU scheduling policy. For real-time scheduling policies, an integer between 1 (lowest priority) and 99 (highest priority) can be used. The following procedure describes how to change the priority of a service, during the boot process, using the mcelog service. Prerequisites Install the TuneD package: Enable and start the TuneD service: Procedure View the scheduling priorities of running threads: Create a supplementary drop-in configuration file for the mcelog service and insert the policy name and priority in this file: Reload the systemd manager configuration: Restart the mcelog service: Verification Display the mcelog priority set by systemd: Additional resources systemd(1) and tuna(8) man pages on your system Description of the priority range 30.8. Priority map Priorities are defined in groups, with some groups dedicated to certain kernel functions. For real-time scheduling policies, an integer between 1 (lowest priority) and 99 (highest priority) can be used. The following table describes the priority range, which can be used while setting the scheduling policy of a process. Table 30.2. Description of the priority range Priority Threads Description 1 Low priority kernel threads This priority is usually reserved for the tasks that need to be just above SCHED_OTHER . 2 - 49 Available for use The range used for typical application priorities. 50 Default hard-IRQ value 51 - 98 High priority threads Use this range for threads that execute periodically and must have quick response times. Do not use this range for CPU-bound threads as you will starve interrupts. 99 Watchdogs and migration System threads that must run at the highest priority. 30.9. TuneD cpu-partitioning profile For tuning Red Hat Enterprise Linux 9 for latency-sensitive workloads, Red Hat recommends using the cpu-partitioning TuneD profile. Prior to Red Hat Enterprise Linux 9, the low-latency Red Hat documentation described the numerous low-level steps needed to achieve low-latency tuning. In Red Hat Enterprise Linux 9, you can perform low-latency tuning more efficiently by using the cpu-partitioning TuneD profile. This profile is easily customizable according to the requirements for individual low-latency applications. The following figure is an example that demonstrates how to use the cpu-partitioning profile. This example uses the CPU and node layout. Figure 30.1. cpu-partitioning You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the following configuration options: Isolated CPUs with load balancing In the cpu-partitioning figure, the blocks numbered from 4 to 23 are the default isolated CPUs. The kernel scheduler's process load balancing is enabled on these CPUs. This configuration is designed for low-latency processes with multiple threads that need the kernel scheduler load balancing.
You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the isolated_cores=cpu-list option, which lists CPUs to isolate that will use the kernel scheduler load balancing. The list of isolated CPUs is comma-separated, or you can specify a range using a dash, such as 3-5 . This option is mandatory. Any CPU missing from this list is automatically considered a housekeeping CPU. Isolated CPUs without load balancing In the cpu-partitioning figure, the blocks numbered 2 and 3 are the isolated CPUs that do not provide any additional kernel scheduler process load balancing. You can configure the cpu-partitioning profile in the /etc/tuned/cpu-partitioning-variables.conf file using the no_balance_cores=cpu-list option, which lists CPUs to isolate that will not use the kernel scheduler load balancing. Specifying the no_balance_cores option is optional; however, any CPUs in this list must be a subset of the CPUs listed in the isolated_cores list. Application threads using these CPUs need to be pinned individually to each CPU. Housekeeping CPUs Any CPU not isolated in the cpu-partitioning-variables.conf file is automatically considered a housekeeping CPU. On the housekeeping CPUs, all services, daemons, user processes, movable kernel threads, interrupt handlers, and kernel timers are permitted to execute. Additional resources tuned-profiles-cpu-partitioning(7) man page on your system 30.10. Using the TuneD cpu-partitioning profile for low-latency tuning This procedure describes how to tune a system for low latency using TuneD's cpu-partitioning profile. It uses the example of a low-latency application that can use cpu-partitioning and the CPU layout as mentioned in the cpu-partitioning figure. The application in this case uses: One dedicated reader thread that reads data from the network, which will be pinned to CPU 2. A large number of threads that process this network data, which will be pinned to CPUs 4-23. A dedicated writer thread that writes the processed data to the network, which will be pinned to CPU 3. Prerequisites You have installed the cpu-partitioning TuneD profile by using the dnf install tuned-profiles-cpu-partitioning command as root. Procedure Edit the /etc/tuned/cpu-partitioning-variables.conf file with the following changes: Comment out the isolated_cores=USD{f:calc_isolated_cores:1} line: Add the following information for isolated CPUs: Set the cpu-partitioning TuneD profile: Reboot the system. After rebooting, the system is tuned for low latency, according to the isolation in the cpu-partitioning figure. The application can use taskset to pin the reader and writer threads to CPUs 2 and 3, and the remaining application threads to CPUs 4-23. Verification Verify that the isolated CPUs are not reflected in the Cpus_allowed_list field: To see the affinity of all processes, enter: Note TuneD cannot change the affinity of some processes, mostly kernel processes. In this example, processes with PID 4 and 9 remain unchanged. Additional resources tuned-profiles-cpu-partitioning(7) man page 30.11. Customizing the cpu-partitioning TuneD profile You can extend the TuneD profile to make additional tuning changes. For example, the cpu-partitioning profile sets the CPUs to use cstate=1 . To use the cpu-partitioning profile but additionally change the CPU cstate from cstate1 to cstate0, the following procedure describes a new TuneD profile named my_profile , which inherits the cpu-partitioning profile and then sets C state 0.
Procedure Create the /etc/tuned/my_profile directory: Create a tuned.conf file in this directory, and add the following content: Use the new profile: Note In this example, a reboot is not required. However, if the changes in the my_profile profile require a reboot to take effect, then reboot your machine. Additional resources tuned-profiles-cpu-partitioning(7) man page on your system | [
"ps",
"chrt -p 468 pid 468 's current scheduling policy: SCHED_FIFO pid 468 's current scheduling priority: 85 chrt -p 476 pid 476 's current scheduling policy: SCHED_OTHER pid 476 's current scheduling priority: 0",
"chrt -f -p 50 1000",
"chrt -o -p 0 1000",
"chrt -r -p 10 1000",
"chrt -f 36 /bin/my-app",
"dnf install tuned",
"systemctl enable --now tuned",
"tuna --show_threads thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 1 OTHER 0 0xff 3181 292 systemd 2 OTHER 0 0xff 254 0 kthreadd 3 OTHER 0 0xff 2 0 rcu_gp 4 OTHER 0 0xff 2 0 rcu_par_gp 6 OTHER 0 0 9 0 kworker/0:0H-kblockd 7 OTHER 0 0xff 1301 1 kworker/u16:0-events_unbound 8 OTHER 0 0xff 2 0 mm_percpu_wq 9 OTHER 0 0 266 0 ksoftirqd/0 [...]",
"cat << EOF > /etc/systemd/system/mcelog.service.d/priority.conf [Service] CPUSchedulingPolicy= fifo CPUSchedulingPriority= 20 EOF",
"systemctl daemon-reload",
"systemctl restart mcelog",
"tuna -t mcelog -P thread ctxt_switches pid SCHED_ rtpri affinity voluntary nonvoluntary cmd 826 FIFO 20 0,1,2,3 13 0 mcelog",
"isolated_cores=USD{f:calc_isolated_cores:1}",
"All isolated CPUs: isolated_cores=2-23 Isolated CPUs without the kernel's scheduler load balancing: no_balance_cores=2,3",
"tuned-adm profile cpu-partitioning",
"cat /proc/self/status | grep Cpu Cpus_allowed: 003 Cpus_allowed_list: 0-1",
"ps -ae -o pid= | xargs -n 1 taskset -cp pid 1's current affinity list: 0,1 pid 2's current affinity list: 0,1 pid 3's current affinity list: 0,1 pid 4's current affinity list: 0-5 pid 5's current affinity list: 0,1 pid 6's current affinity list: 0,1 pid 7's current affinity list: 0,1 pid 9's current affinity list: 0",
"mkdir /etc/tuned/ my_profile",
"vi /etc/tuned/ my_profile /tuned.conf [main] summary=Customized tuning on top of cpu-partitioning include=cpu-partitioning [cpu] force_latency=cstate.id:0|1",
"tuned-adm profile my_profile"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/monitoring_and_managing_system_status_and_performance/tuning-scheduling-policy_monitoring-and-managing-system-status-and-performance |
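As a supplementary, hedged verification for the boot-time priority procedure above (not taken from the original chapter), the scheduling directives applied through the drop-in file can also be read back directly from systemd:
systemctl show mcelog -p CPUSchedulingPolicy -p CPUSchedulingPriority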
Chapter 65. KafkaAutoRebalanceConfiguration schema reference | Chapter 65. KafkaAutoRebalanceConfiguration schema reference Used in: CruiseControlSpec Property Property type Description mode string (one of [remove-brokers, add-brokers]) Specifies the mode for automatically rebalancing when brokers are added or removed. Supported modes are add-brokers and remove-brokers . template LocalObjectReference Reference to the KafkaRebalance custom resource to be used as the configuration template for the auto-rebalancing on scaling when running for the corresponding mode. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-KafkaAutoRebalanceConfiguration-reference |
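A hedged sketch of how this schema might be applied (the autoRebalance field name under cruiseControl, the Kafka resource name my-cluster, and the KafkaRebalance template name are assumptions for illustration, not taken from this reference):
oc patch kafka my-cluster --type merge -p '{"spec":{"cruiseControl":{"autoRebalance":[{"mode":"add-brokers","template":{"name":"my-rebalance-template"}},{"mode":"remove-brokers","template":{"name":"my-rebalance-template"}}]}}}'
Here the same KafkaRebalance custom resource is assumed to serve as the configuration template for both scaling directions.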
Chapter 4. Server and rack solutions | Chapter 4. Server and rack solutions Hardware vendors have responded to the enthusiasm around Ceph by providing both optimized server-level and rack-level solution SKUs. Validated through joint testing with Red Hat, these solutions offer predictable price-to-performance ratios for Ceph deployments, with a convenient modular approach to expand Ceph storage for specific workloads. Typical rack-level solutions include: Network switching: Redundant network switching interconnects the cluster and provides access to clients. Ceph MON nodes: The Ceph monitor is a datastore for the health of the entire cluster, and contains the cluster log. A minimum of three monitor nodes are strongly recommended for a cluster quorum in production. Ceph OSD hosts: Ceph OSD hosts house the storage capacity for the cluster, with one or more OSDs running per individual storage device. OSD hosts are selected and configured differently depending on both workload optimization and the data devices installed: HDDs, SSDs, or NVMe SSDs. Red Hat Ceph Storage: Many vendors provide a capacity-based subscription for Red Hat Ceph Storage bundled with both server and rack-level solution SKUs. Note Red Hat recommends to review the Red Hat Ceph Storage:Supported Configurations article prior to committing to any server and rack solution. Contact Red Hat support for any additional assistance. IOPS-optimized solutions With the growing use of flash storage, organizations increasingly host IOPS-intensive workloads on Ceph storage clusters to let them emulate high-performance public cloud solutions with private cloud storage. These workloads commonly involve structured data from MySQL-, MariaDB-, or PostgreSQL-based applications. Typical servers include the following elements: CPU: 10 cores per NVMe SSD, assuming a 2 GHz CPU. RAM: 16 GB baseline, plus 5 GB per OSD. Networking: 10 Gigabit Ethernet (GbE) per 2 OSDs. OSD media: High-performance, high-endurance enterprise NVMe SSDs. OSDs: Two per NVMe SSD. Bluestore WAL/DB: High-performance, high-endurance enterprise NVMe SSD, co-located with OSDs. Controller: Native PCIe bus. Note For Non-NVMe SSDs, for CPU , use two cores per SSD OSD. Table 4.1. Solutions SKUs for IOPS-optimized Ceph Workloads, by cluster size. Vendor Small (250TB) Medium (1PB) Large (2PB+) SuperMicro [a] SYS-5038MR-OSD006P N/A N/A [a] See Supermicro(R) Total Solution for Ceph for details. Throughput-optimized Solutions Throughput-optimized Ceph solutions are usually centered around semi-structured or unstructured data. Large-block sequential I/O is typical. Typical server elements include: CPU: 0.5 cores per HDD, assuming a 2 GHz CPU. RAM: 16 GB baseline, plus 5 GB per OSD. Networking: 10 GbE per 12 OSDs each for client- and cluster-facing networks. OSD media: 7,200 RPM enterprise HDDs. OSDs: One per HDD. Bluestore WAL/DB: High-performance, high-endurance enterprise NVMe SSD, co-located with OSDs. Host bus adapter (HBA): Just a bunch of disks (JBOD). Several vendors provide pre-configured server and rack-level solutions for throughput-optimized Ceph workloads. Red Hat has conducted extensive testing and evaluation of servers from Supermicro and Quanta Cloud Technologies (QCT). Table 4.2. Rack-level SKUs for Ceph OSDs, MONs, and top-of-rack (TOR) switches. Vendor Small (250TB) Medium (1PB) Large (2PB+) SuperMicro [a] SRS-42E112-Ceph-03 SRS-42E136-Ceph-03 SRS-42E136-Ceph-03 Table 4.3. 
Individual OSD Servers Vendor Small (250TB) Medium (1PB) Large (2PB+) SuperMicro [a] SSG-6028R-OSD072P SSG-6048-OSD216P SSG-6048-OSD216P QCT [a] QxStor RCT-200 QxStor RCT-400 QxStor RCT-400 [a] See QCT: QxStor Red Hat Ceph Storage Edition for details. Table 4.4. Additional Servers Configurable for Throughput-optimized Ceph OSD Workloads. Vendor Small (250TB) Medium (1PB) Large (2PB+) Dell PowerEdge R730XD [a] DSS 7000 [b] , twin node DSS 7000, twin node Cisco UCS C240 M4 UCS C3260 [c] UCS C3260 [d] Lenovo System x3650 M5 System x3650 M5 N/A [a] See Dell PowerEdge R730xd Performance and Sizing Guide for Red Hat Ceph Storage - A Dell Red Hat Technical White Paper for details. [b] See Dell EMC DSS 7000 Performance & Sizing Guide for Red Hat Ceph Storage for details. [c] See Red Hat Ceph Storage hardware reference architecture for details. [d] See UCS C3260 for details Cost and capacity-optimized solutions Cost- and capacity-optimized solutions typically focus on higher capacity, or longer archival scenarios. Data can be either semi-structured or unstructured. Workloads include media archives, big data analytics archives, and machine image backups. Large-block sequential I/O is typical. Solutions typically include the following elements: CPU. 0.5 cores per HDD, assuming a 2 GHz CPU. RAM. 16 GB baseline, plus 5 GB per OSD. Networking. 10 GbE per 12 OSDs (each for client- and cluster-facing networks). OSD media. 7,200 RPM enterprise HDDs. OSDs. One per HDD. Bluestore WAL/DB Co-located on the HDD. HBA. JBOD. Supermicro and QCT provide pre-configured server and rack-level solution SKUs for cost- and capacity-focused Ceph workloads. Table 4.5. Pre-configured Rack-level SKUs for Cost- and Capacity-optimized Workloads Vendor Small (250TB) Medium (1PB) Large (2PB+) SuperMicro [a] N/A SRS-42E136-Ceph-03 SRS-42E172-Ceph-03 Table 4.6. Pre-configured Server-level SKUs for Cost- and Capacity-optimized Workloads Vendor Small (250TB) Medium (1PB) Large (2PB+) SuperMicro [a] N/A SSG-6048R-OSD216P [a] SSD-6048R-OSD360P QCT N/A QxStor RCC-400 [a] QxStor RCC-400 [a] [a] See Supermicro's Total Solution for Ceph for details. Table 4.7. Additional Servers Configurable for Cost- and Capacity-optimized Workloads Vendor Small (250TB) Medium (1PB) Large (2PB+) Dell N/A DSS 7000, twin node DSS 7000, twin node Cisco N/A UCS C3260 UCS C3260 Lenovo N/A System x3650 M5 N/A Additional Resources Red Hat Ceph Storage on Samsung NVMe SSDs Red Hat Ceph Storage on the InfiniFlash All-Flash Storage System from SanDisk Deploying MySQL Databases on Red Hat Ceph Storage Intel(R) Data Center Blocks for Cloud - Red Hat OpenStack Platform with Red Hat Ceph Storage Red Hat Ceph Storage on QCT Servers Red Hat Ceph Storage on Servers with Intel Processors and SSDs | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/hardware_guide/server-and-rack-solutions_hw |
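As a hedged, worked illustration of the sizing rules above (the 24-HDD host is an assumed example, not a vendor SKU): a throughput-optimized OSD host with 24 x 7,200 RPM HDDs and one OSD per HDD would need roughly 24 x 0.5 = 12 CPU cores at 2 GHz, 16 GB + 24 x 5 GB = 136 GB of RAM, and 24 / 12 = 2 x 10 GbE links each for the client-facing and cluster-facing networks.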
3.4. Using the Random Number Generator | 3.4. Using the Random Number Generator In order to be able to generate secure cryptographic keys that cannot be easily broken, a source of random numbers is required. Generally, the more random the numbers are, the better the chance of obtaining unique keys. Entropy for generating random numbers is usually obtained from computing environmental "noise" or using a hardware random number generator . The rngd daemon, which is a part of the rng-tools package, is capable of using both environmental noise and hardware random number generators for extracting entropy. The daemon checks whether the data supplied by the source of randomness is sufficiently random and then stores it in the kernel's random-number entropy pool. The random numbers it generates are made available through the /dev/random and /dev/urandom character devices. The difference between /dev/random and /dev/urandom is that the former is a blocking device, which means it stops supplying numbers when it determines that the amount of entropy is insufficient for generating a properly random output. Conversely, /dev/urandom is a non-blocking source, which reuses the kernel's entropy pool and is thus able to provide an unlimited supply of pseudo-random numbers, albeit with less entropy. As such, /dev/urandom should not be used for creating long-term cryptographic keys. To install the rng-tools package, issue the following command as the root user: To start the rngd daemon, execute the following command as root : To query the status of the daemon, use the following command: To start the rngd daemon with optional parameters, execute it directly. For example, to specify an alternative source of random-number input (other than /dev/hwrandom ), use the following command: The above command starts the rngd daemon with /dev/hwrng as the device from which random numbers are read. Similarly, you can use the -o (or --random-device ) option to choose the kernel device for random-number output (other than the default /dev/random ). See the rngd (8) manual page for a list of all available options. The rng-tools package also contains the rngtest utility, which can be used to check the randomness of data. To test the level of randomness of the output of /dev/random , use the rngtest tool as follows: A high number of failures shown in the output of the rngtest tool indicates that the randomness of the tested data is sub-optimal and should not be relied upon. See the rngtest (1) manual page for a list of options available for the rngtest utility. | [
"~]# yum install rng-tools",
"~]# service rngd start",
"~]# service rngd status",
"~]# rngd --rng-device= /dev/hwrng",
"~]USD cat /dev/random | rngtest -c 1000 rngtest 2 Copyright (c) 2004 by Henrique de Moraes Holschuh This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. rngtest: starting FIPS tests rngtest: bits received from input: 20000032 rngtest: FIPS 140-2 successes: 1000 rngtest: FIPS 140-2 failures: 0 rngtest: FIPS 140-2(2001-10-10) Monobit: 0 rngtest: FIPS 140-2(2001-10-10) Poker: 0 rngtest: FIPS 140-2(2001-10-10) Runs: 0 rngtest: FIPS 140-2(2001-10-10) Long run: 1 rngtest: FIPS 140-2(2001-10-10) Continuous run: 0 rngtest: input channel speed: (min=308.697; avg=623.670; max=730.823)Kibits/s rngtest: FIPS tests speed: (min=51.971; avg=137.737; max=167.311)Mibits/s rngtest: Program run time: 31461595 microseconds"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-encryption-using_the_random_number_generator |
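A hedged addition that is not part of the original section: on Red Hat Enterprise Linux 6, the rngd service installed by the rng-tools package can typically be made persistent across reboots with chkconfig, for example:
~]# chkconfig rngd on
~]# chkconfig --list rngd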
Chapter 10. Uninstalling a cluster on OpenStack | Chapter 10. Uninstalling a cluster on OpenStack You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP). 10.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note If you deployed your cluster to the AWS C2S Secret Region, the installation program does not support destroying the cluster; you must manually remove the cluster resources. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. For example, some Google Cloud resources require IAM permissions in shared VPC host projects, or there might be unused health checks that must be deleted . Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: USD ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_openstack/uninstalling-cluster-openstack |
Chapter 4. Example applications with Red Hat build of Kogito microservices | Chapter 4. Example applications with Red Hat build of Kogito microservices Red Hat build of Kogito microservices include example applications in the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file. These example applications contain various types of services on Red Hat build of Quarkus or Spring Boot to help you develop your own applications. The services use one or more Decision Model and Notation (DMN) decision models, Drools Rule Language (DRL) rule units, Predictive Model Markup Language (PMML) models, or Java classes to define the service logic. For information about each example application and instructions for using them, see the README file in the relevant application folder. Note When you run examples in a local environment, ensure that the environment matches the requirements that are listed in the README file of the relevant application folder. Also, this might require making the necessary network ports available, as configured for Red Hat build of Quarkus, Spring Boot, and docker-compose where applicable. The following list describes some of the examples provided with Red Hat build of Kogito microservices: Note These quick start examples showcase a supported setup. Other quickstarts not listed might use technology that is provided by the upstream community only and therefore not fully supported by Red Hat. Decision services dmn-quarkus-example and dmn-springboot-example : A decision service (on Red Hat build of Quarkus or Spring Boot) that uses DMN to determine driver penalty and suspension based on traffic violations. rules-quarkus-helloworld : A Hello World decision service on Red Hat build of Quarkus with a single DRL rule unit. ruleunit-quarkus-example and ruleunit-springboot-example : A decision service (on Red Hat build of Quarkus or Spring Boot) that uses DRL with rule units to validate a loan application and that exposes REST operations to view application status. dmn-pmml-quarkus-example and dmn-pmml-springboot-example : A decision service (on Red Hat build of Quarkus or Spring Boot) that uses DMN and PMML to determine driver penalty and suspension based on traffic violations. dmn-drools-quarkus-metrics and dmn-drools-springboot-metrics : A decision service (on Red Hat build of Quarkus or Spring Boot) that enables and consumes the runtime metrics monitoring feature in Red Hat build of Kogito. pmml-quarkus-example and pmml-springboot-example : A decision service (on Red Hat build of Quarkus or Spring Boot) that uses PMML. For more information, see Designing a decision service using DMN models , Designing a decision service using DRL rules , and Designing a decision service using PMML models . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/getting_started_with_red_hat_build_of_kogito_in_red_hat_decision_manager/ref-kogito-microservices-app-examples_getting-started-kogito-microservices |
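As a hedged illustration of running one of these examples locally (directory names and Maven goals may differ; always follow the example's README): after extracting rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip, a Red Hat build of Quarkus example such as rules-quarkus-helloworld can usually be started in development mode with Maven:
cd rules-quarkus-helloworld
mvn clean compile quarkus:dev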
probe::sunrpc.svc.drop | probe::sunrpc.svc.drop Name probe::sunrpc.svc.drop - Drop RPC request Synopsis sunrpc.svc.drop Values rq_xid the transmission id in the request sv_name the service name rq_prot the IP protocol of the request peer_ip the peer address where the request is from rq_proc the procedure number in the request rq_vers the program version in the request rq_prog the program number in the request | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sunrpc-svc-drop
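A minimal, hedged SystemTap one-liner (an illustration based on the values listed above, with integer formats assumed for the numeric fields) that prints a message each time this probe fires:
stap -e 'probe sunrpc.svc.drop { printf("%s dropped request xid %d (program %d, procedure %d)\n", sv_name, rq_xid, rq_prog, rq_proc) }'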
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_certified_cloud_and_service_provider_certification/2025/html/red_hat_cloud_instance_type_policy_guide/con-conscious-language-message |
Chapter 6. Installing a cluster on AWS in a restricted network | Chapter 6. Installing a cluster on AWS in a restricted network In OpenShift Container Platform version 4.15, you can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing Amazon Virtual Private Cloud (VPC). 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements: Contains the mirror registry Has firewall rules or a peering connection to access the mirror registry hosted elsewhere You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix) in the AWS documentation. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note If you are configuring a proxy, be sure to also review this site list. 6.2. About installations in restricted networks In OpenShift Container Platform 4.15, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 6.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. 
By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 6.3. About using a custom VPC In OpenShift Container Platform 4.15, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company's guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option. Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs and so forth on your behalf. You must configure networking for the subnets that you install your cluster to yourself. 6.3.1. Requirements for using your VPC The installation program no longer creates the following components: Internet gateways NAT gateways Subnets Route tables VPCs VPC DHCP options VPC endpoints Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC. The installation program cannot: Subdivide network ranges for the cluster to use. Set route tables for the subnets. Set VPC options like DHCP. You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC. Your VPC must meet the following characteristics: The VPC must not use the kubernetes.io/cluster/.*: owned , Name , and openshift.io/cluster tags. The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails. If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration. You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. If you prefer to use your own Route 53 hosted private zone, you must associate the existing hosted zone with your VPC prior to installing a cluster. You can define your hosted zone using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. 
If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode. If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available: Option 1: Create VPC endpoints Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com With this option, network traffic remains private between your VPC and the required AWS services. Option 2: Create a proxy without VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services. Option 3: Create a proxy with VPC endpoints As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create a VPC endpoint and attach it to the subnets that the clusters are using. Name the endpoints as follows: ec2.<aws_region>.amazonaws.com elasticloadbalancing.<aws_region>.amazonaws.com s3.<aws_region>.amazonaws.com When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services. Required VPC components You must provide a suitable VPC and subnets that allow communication to your machines. Component AWS type Description VPC AWS::EC2::VPC AWS::EC2::VPCEndpoint You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. Public subnets AWS::EC2::Subnet AWS::EC2::SubnetNetworkAclAssociation Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. Internet gateway AWS::EC2::InternetGateway AWS::EC2::VPCGatewayAttachment AWS::EC2::RouteTable AWS::EC2::Route AWS::EC2::SubnetRouteTableAssociation AWS::EC2::NatGateway AWS::EC2::EIP You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. Network access control AWS::EC2::NetworkAcl AWS::EC2::NetworkAclEntry You must allow the VPC to access the following ports: Port Reason 80 Inbound HTTP traffic 443 Inbound HTTPS traffic 22 Inbound SSH traffic 1024 - 65535 Inbound ephemeral traffic 0 - 65535 Outbound ephemeral traffic Private subnets AWS::EC2::Subnet AWS::EC2::RouteTable AWS::EC2::SubnetRouteTableAssociation Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. 6.3.2. VPC validation To ensure that the subnets that you provide are suitable, the installation program confirms the following data: All the subnets that you specify exist. You provide private subnets. The subnet CIDRs belong to the machine CIDR that you specified. 
You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone. You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for. If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used. 6.3.3. Division of permissions Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules. The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes. 6.3.4. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: You can install multiple OpenShift Container Platform clusters in the same VPC. ICMP ingress is allowed from the entire network. TCP 22 ingress (SSH) is allowed to the entire network. Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 ingress (MCS) is allowed to the entire network. 6.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 6.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
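Once the installation program has distributed the key, interactive access to a node uses the core user and the private half of the key pair. The following is only an illustration; the node address 10.0.1.23 is a placeholder, and the key path must match the key that you provide during installation:

# Log in to an RHCOS node as the core user with the matching private key
$ ssh -i ~/.ssh/id_ed25519 core@10.0.1.23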
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure that your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Define the subnets for the VPC to install the cluster in: subnets: - subnet-1 - subnet-2 - subnet-3 Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for AWS 6.6.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.6.2. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 
21 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 1 12 14 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 8 15 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 13 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 16 If you provide your own VPC, specify subnets for each availability zone that your cluster uses. 17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 The ID of your existing Route 53 private hosted zone. Providing an existing hosted zone requires that you supply your own VPC and the hosted zone is already associated with the VPC prior to installing your cluster. If undefined, the installation program creates a new hosted zone. 20 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 21 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 22 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 23 Provide the contents of the certificate file that you used for your mirror registry. 24 Provide the imageContentSources section from the output of the command to mirror the repository. 6.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 
3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials . 6.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 6.8.2. Configuring an AWS cluster to use short-term credentials To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. 6.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). You have created an AWS account for the ccoctl utility to use with the following permissions: Example 6.1. Required AWS permissions Required iam permissions iam:CreateOpenIDConnectProvider iam:CreateRole iam:DeleteOpenIDConnectProvider iam:DeleteRole iam:DeleteRolePolicy iam:GetOpenIDConnectProvider iam:GetRole iam:GetUser iam:ListOpenIDConnectProviders iam:ListRolePolicies iam:ListRoles iam:PutRolePolicy iam:TagOpenIDConnectProvider iam:TagRole Required s3 permissions s3:CreateBucket s3:DeleteBucket s3:DeleteObject s3:GetBucketAcl s3:GetBucketTagging s3:GetObject s3:GetObjectAcl s3:GetObjectTagging s3:ListBucket s3:PutBucketAcl s3:PutBucketPolicy s3:PutBucketPublicAccessBlock s3:PutBucketTagging s3:PutObject s3:PutObjectAcl s3:PutObjectTagging Required cloudfront permissions cloudfront:ListCloudFrontOriginAccessIdentities cloudfront:ListDistributions cloudfront:ListTagsForResource If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions: Example 6.2. 
Additional permissions for a private S3 bucket with CloudFront cloudfront:CreateCloudFrontOriginAccessIdentity cloudfront:CreateDistribution cloudfront:DeleteCloudFrontOriginAccessIdentity cloudfront:DeleteDistribution cloudfront:GetCloudFrontOriginAccessIdentity cloudfront:GetCloudFrontOriginAccessIdentityConfig cloudfront:GetDistribution cloudfront:TagResource cloudfront:UpdateDistribution Note These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 6.8.2.2. Creating AWS resources with the Cloud Credential Operator utility You have the following options when creating AWS resources: You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command . If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually . 6.8.2.2.1. Creating AWS resources with a single command If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources. Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually". Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. 
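Before you start the procedure, it can also help to confirm which AWS identity the ccoctl utility will act as, because that identity must hold the permissions listed in the prerequisites earlier in this section. A minimal check, assuming that the AWS CLI is configured with the same credentials that ccoctl will use:

# Show the account and ARN of the currently configured AWS identity
$ aws sts get-caller-identity

The returned Arn value must correspond to the user or role that you prepared for the ccoctl utility.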
Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-all \ --name=<name> \ 1 --region=<aws_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> \ 4 --create-private-s3-bucket 5 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the AWS region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 5 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 6.8.2.2.2. Creating AWS resources individually You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments. Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command". 
Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Extract and prepare the ccoctl binary. Procedure Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command: USD ccoctl aws create-key-pair Example output 2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files. This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key . Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command: USD ccoctl aws create-identity-provider \ --name=<name> \ 1 --region=<aws_region> \ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3 1 <name> is the name used to tag any cloud resources that are created for tracking. 2 <aws-region> is the AWS region in which cloud resources will be created. 3 <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated. Example output 2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com where openid-configuration is a discovery document and keys.json is a JSON web key set file. This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml . This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens. Create IAM roles for each component in the cluster: Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 
3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl aws create-iam-roles \ --name=<name> \ --region=<aws_region> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com Note For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter. If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles. 6.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 6.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.10. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. 
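A quick way to confirm that the oc CLI is available on your PATH before you export any credentials is to check the client binary. This is a minimal check; the reported version depends on the client that you downloaded:

$ oc version --client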
Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify that you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.11. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 6.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.13. Next steps Validate an installation . Customize your cluster . Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting .
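As a closing check related to section 6.11, you can confirm that the default OperatorHub catalog sources are disabled and review the catalog sources that remain, such as the source created for your mirror registry. This is a minimal sketch, assuming that you are logged in to the cluster with cluster administrator privileges:

# Confirm that the default catalog sources are disabled
$ oc get operatorhub cluster -o jsonpath='{.spec.disableAllDefaultSources}{"\n"}'
# List the catalog sources that are still configured
$ oc get catalogsource -n openshift-marketplace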
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"./openshift-install create install-config --dir <installation_directory> 1",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"subnets: - subnet-1 - subnet-2 - subnet-3",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"publish: Internal",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 13 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 14 propagateUserTags: true 15 userTags: adminContact: jdoe costCenter: 7536 subnets: 16 - subnet-1 - subnet-2 - subnet-3 amiID: ami-0c5d3e03c0ab9b19a 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com hostedZone: Z3URY6TWQ91KVV 19 fips: false 20 sshKey: ssh-ed25519 AAAA... 21 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 22 additionalTrustBundle: | 23 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- imageContentSources: 24 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-all --name=<name> \\ 1 --region=<aws_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> \\ 4 --create-private-s3-bucket 5",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"ccoctl aws create-key-pair",
"2021/04/13 11:01:02 Generating RSA keypair 2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private 2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public 2021/04/13 11:01:03 Copying signing key for use by installer",
"ccoctl aws create-identity-provider --name=<name> \\ 1 --region=<aws_region> \\ 2 --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public 3",
"2021/04/13 11:16:09 Bucket <name>-oidc created 2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated 2021/04/13 11:16:10 Reading public key 2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated 2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_credentials_requests_directory> --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-aws-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_aws/installing-restricted-networks-aws-installer-provisioned |
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/security_and_hardening_guide/making-open-source-more-inclusive |
Chapter 5. June 2024 | Chapter 5. June 2024 5.1. Node networking costs for OpenShift on cloud Costs associated with ingress and egress network traffic for individual nodes are now separated. A new project called network unattributed shows costs related to network traffic. You can use a cost model to distribute these network costs in a similar way to platform and worker unallocated costs. 5.2. Default to AWS savings plan If you have an AWS savings plan for the EC2 instances running on OpenShift nodes, cost management uses the savings plan cost by default. | null | https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/whats_new_in_cost_management/june_2024
Installing on Azure Stack Hub | Installing on Azure Stack Hub OpenShift Container Platform 4.14 Installing OpenShift Container Platform on Azure Stack Hub Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure_stack_hub/index |
Chapter 1. Overview | Chapter 1. Overview Camel K is set to be deprecated in favor of a unified Camel approach to OpenShift. Targeting the Red Hat build of Apache Camel for Quarkus, we aim to provide existing customers with a migration path to transition their Camel K integrations. This approach ensures a seamless migration, requiring minimal effort while considering the supported features of both Camel K and the Red Hat build of Apache Camel for Quarkus. You must understand the Quarkus way to build, configure, deploy and run applications. Section 1.1, "Assumptions" Section 1.2, "Traits" Section 1.3, "Kamel run configuration" Section 1.4, "Kamelets, KameletBindings and Pipes" Section 1.5, "Migration Process" Section 1.6, "Troubleshooting" Section 1.7, "Known Issues" Section 1.8, "Reference documentation" 1.1. Assumptions The required source files to migrate are in Java, XML or YAML. The target system to deploy to is an OpenShift cluster 4.15+. The Camel K version is 1.10.7. The migration target is the Red Hat build of Apache Camel for Quarkus. Camel K operates using the Kamel CLI to run integrations, while the Camel K Operator manages and deploys them as running pods along with various Kubernetes objects, including Deployment, Service, Route, ConfigMap, Secret, and Knative resources. Note The running Java program is a Camel on Quarkus application. When using the Red Hat build of Apache Camel for Quarkus, the starting point is a Maven project that contains all the artifacts needed to build and run the integration. This project will include a Deployment, Service, ConfigMap, and other resources, although their configurations may differ from those in Camel K. For instance, properties might be stored in an application.properties file, and Knative configurations may require separate files. The main goal is to ensure the integration route is deployed and running in an OpenShift cluster. 1.1.1. Requirements To perform the migration, the following set of tools and configurations is required. Camel JBang 4.7.0 . JDK 17 or 21. Maven (mvn cli) 3.9.5. oc cli . OpenShift cluster 4.12+. Explore the Supported Configurations and Component Details about the Red Hat build of Apache Camel. 1.1.2. Out of scope Use of Camel Spring Boot (CSB) as a target. The migration path is similar but should be tailored for CSB and JKube. Refer to the documentation for numerous examples . OpenShift management. Customization of the maven project. 1.1.3. Use cases Camel K integrations can vary, typically consisting of several files that correspond to integration routes and configurations. The integration routes may be defined in Java, XML, or YAML, while configurations can be specified in properties files or as parameters in the kamel run command. This migration document addresses use cases involving KameletBinding, Kamelet, Knative, and properties in ConfigMap. 1.1.4. Versions Note Camel K 1.10.7 uses different versions of Camel and Quarkus than the Red Hat build of Apache Camel for Quarkus. Table 1.1. Camel K Artifact Camel K Red Hat build of Apache Camel for Quarkus JDK 11 21 (preferred), 17 (supported) Camel 3.18.6.redhat-00009 4.4.0.redhat-00025 Camel for Quarkus 2.13.3.redhat-00011 3.8.0.redhat-00006 Quarkus Platform 2.13.9.SP2-redhat-00003 3.8.5.redhat-00003 Kamelet Catalog 1.10.7 2.3.x Migrating from Camel K to the Red Hat build of Apache Camel for Quarkus updates several libraries simultaneously; a minimal sketch of the corresponding pom.xml version properties follows below.
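As a quick orientation, the following is a minimal sketch of the version properties you can expect in the pom.xml generated later by camel export for this target. The property names (quarkus.platform.group-id, quarkus.platform.artifact-id, quarkus.platform.version) are the standard Quarkus project properties and are assumptions here, so verify them against your own exported project.
<!-- Sketch only: BOM coordinates matching the target versions in Table 1.1 -->
<properties>
    <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id>
    <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id>
    <quarkus.platform.version>3.8.5.redhat-00003</quarkus.platform.version>
    <!-- Assumed default of 17; the configuration steps below optionally raise this to 21 -->
    <maven.compiler.release>17</maven.compiler.release>
</properties>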
Therefore, you may encounter some errors when building or running the integration in Red Hat build of Apache Camel for Quarkus, due to differences in the underlying libraries. 1.1.5. Project and Organization Camel K integration routes originate from a single file in java, yaml or xml. There is no concept of a project to organize the dependencies and builds. At the end, each kamel run <my app> results in a running pod. Red Hat build of Apache Camel for Quarkus requires a maven project. Use the camel export <many files> to generate the maven project. On building the project, the container image contains all the integration routes defined in the project. If you want one pod for each integration route, you must create a maven project for each integration route. While there are many complex ways to use a single maven project with multiple integration routes and custom builds to generate container images with different run entrypoints to start the pod, this is beyond the scope of this migration guide. 1.2. Traits Traits in Camel K provide an easy way for the operator, to materialize parameters from kamel cli to kubernetes objects and configurations. Only a few traits are supported in Camel K 1.10, that are covered in this migration path. There is no need to cover the configuration in the migration path for the following traits: camel, platform, deployment, dependencies, deployer, openapi. The following list contains the traits with their parameters and equivalents in Red Hat build of Apache Camel for Quarkus. Note The properties for Red Hat build of Apache Camel for Quarkus must be set in application.properties . On building the project, kubernetes appearing in target/kubernetes/openshift.yml must contain the properties. For more information about properties, see Quarkus OpenShift Extension . Table 1.2. Builder Trait Trait Parameter Quarkus Parameter builder.properties Add the properties to application.properties Table 1.3. Container Trait Trait Parameter Quarkus Parameter container.expose The Service kubernetes object is created automatically. container.image No replacement in Quarkus, since this property was meant for sourceless Camel K integrations, which are not supported in Red Hat build of Apache Camel for Quarkus. container.limit-cpu quarkus.openshift.resources.limits.cpu container.limit-memory quarkus.openshift.resources.limits.memory container.liveness-failure-threshold quarkus.openshift.liveness-probe.failure-threshold container.liveness-initial-delay quarkus.openshift.liveness-probe.initial-delay container.liveness-period quarkus.openshift.liveness-probe.period container.liveness-success-threshold quarkus.openshift.liveness-probe.success-threshold container.liveness-timeout quarkus.openshift.liveness-probe.timeout container.name quarkus.openshift.container-name container.port quarkus.openshift.ports."<port name>".container-port container.port-name Set the port name in the property name. The syntax is: quarkus.openshift.ports."<port name>".container-port . Example for https port is quarkus.openshift.ports.https.container-port . container.probes-enabled Add the quarkus maven dependency to the pom.xml <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency> It will also add the startup probe to the container. Note that, delay, timeout and period values may be different. 
container.readiness-failure-threshold quarkus.openshift.readiness-probe.failure-threshold container.readiness-initial-delay quarkus.openshift.readiness-probe.initial-delay container.readiness-period quarkus.openshift.readiness-probe.period container.readiness-success-threshold quarkus.openshift.readiness-probe.success-threshold container.readiness-timeout quarkus.openshift.readiness-probe.timeout container.request-cpu quarkus.openshift.resources.requests.cpu container.request-memory quarkus.openshift.resources.requests.memory container.service-port quarkus.openshift.ports.<port-name>.host-port container.service-port-name Set the port name in the property name. The syntax is: quarkus.openshift.ports."<port name>".host-port . Example for https port is quarkus.openshift.ports.https.host-port . Also, ensure to set the route port name to quarkus.openshift.route.target-port . Table 1.4. Environment Trait Trait Parameter Quarkus Parameter environment.vars quarkus.openshift.env.vars.<key>=<value> environment.http-proxy You must set the proxy host with the values of: quarkus.kubernetes-client.http-proxy quarkus.kubernetes-client.https-proxy quarkus.kubernetes-client.no-proxy Table 1.5. Error Handler Trait Trait Parameter Quarkus Parameter error-handler.ref You must manually add the Error Handler in the integration route. Table 1.6. JVM Trait Trait Parameter Quarkus Parameter jvm.debug quarkus.openshift.remote-debug.enabled jvm.debug-suspend quarkus.openshift.remote-debug.suspend jvm.print-command No replacement. jvm.debug-address quarkus.openshift.remote-debug.address-port jvm.options Edit src/main/docker/Dockerfile.jvm and change the JAVA_OPTS value to set the desired values. Example to increase the camel log level to debug: Note: The Docker configuration is dependent on the base image, configuration for OpenJDK 21 . jvm.classpath You must set the classpath at the maven project, so the complete list of dependencies are collected in the target/quarkus-app/ and later packaged in the containter image. Table 1.7. Node Affinity Trait Trait Parameter Quarkus Parameter There is no affinity configuration in Quarkus. Table 1.8. Owner Trait Trait Parameter Quarkus Parameter owner.enabled There is no owner configuration in Quarkus. Table 1.9. Quarkus Trait Trait Parameter Quarkus Parameter quarkus.package-type For native builds, use -Dnative . Table 1.10. Knative Trait Trait Parameter Quarkus Parameter knative.enabled Add the maven dependency org.apache.camel.quarkus:camel-quarkus-knative to the pom.xml, and set the following properties: The quarkus.container-image.* properties are required by the quarkus maven plugin to set the image url in the generated knative.yml. knative.configuration camel.component.knative.environmentPath knative.channel-sources Configurable in the knative.json. knative.channel-sinks Configurable in the knative.json. knative.endpoint-sources Configurable in the knative.json. knative.endpoint-sinks Configurable in the knative.json. knative.event-sources Configurable in the knative.json. knative.event-sinks Configurable in the knative.json. knative.filter-source-channels Configurable in the knative.json. knative.sink-binding No replacement, you must create the SinkBinding object. knative.auto No replacement. knative.namespace-label You must set the label bindings.knative.dev/include=true manually to the desired namespace. Table 1.11. 
Knative Service Trait Trait Parameter Quarkus Parameter knative-service.enabled quarkus.kubernetes.deployment-target=knative knative-service.annotations quarkus.knative.annotations.<annotation-name>=<value> knative-service.autoscaling-class quarkus.knative.revision-auto-scaling.auto-scaler-class knative-service.autoscaling-metric quarkus.knative.revision-auto-scaling.metric knative-service.autoscaling-target quarkus.knative.revision-auto-scaling.target knative-service.min-scale quarkus.knative.min-scale knative-service.max-scale quarkus.knative.max-scale knative-service.rollout-duration quarkus.knative.annotations."serving.knative.dev/rollout-duration" knative-service.visibility quarkus.knative.labels."networking.knative.dev/visibility" It must be in quotation marks. knative-service.auto This behavior is unnecessary in Red Hat build of Apache Camel for Quarkus. Table 1.12. Prometheus Trait Trait Parameter Quarkus Parameter prometheus.enabled Add the following maven dependencies to pom.xml <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-micrometer</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency> Note: Camel K creates a PodMonitor object, while Quarkus creates a ServiceMonitor object, both are correct to configure the monitoring feature. prometheus.pod-monitor quarkus.openshift.prometheus.generate-service-monitor prometheus.pod-monitor-labels No quarkus property is available to set custom labels, but you can configure the labels in ServiceMonitor object in target/kubernetes/openshift.yml before deploying. Table 1.13. PodDisruptionBudget (PDB) Trait Trait Parameter Quarkus Parameter There is no Quarkus configuration for PodDisruptionBudget objects. Table 1.14. Pull Secret Trait Trait Parameter Quarkus Parameter pull-secret.secret-name quarkus.openshift.image-pull-secrets Table 1.15. Route Trait Trait Parameter Quarkus Parameter route.enabled quarkus.openshift.route.expose route.annotations quarkus.openshift.route.annotations.<key>=<value route.host quarkus.openshift.route.host route.tls-termination quarkus.openshift.route.tls.termination route.tls-certificate quarkus.openshift.route.tls.certificate route.tls-certificate-secret There is no quarkus property to read the certificate from a secret. route.tls-key quarkus.openshift.route.tls.key route.tls-key-secret There is no quarkus property to read the key from a secret. route.tls-ca-certificate quarkus.openshift.route.tls.ca-certificate route.tls-ca-certificate-secret There is no quarkus property to read the CA certificate from a secret. route.tls-destination-ca-certificate quarkus.openshift.route.tls.destination-ca-certificate route.tls-destination-ca-certificate-secret There is no quarkus property to read the destination certificate from a secret. route.tls-insecure-edge-termination-policy quarkus.openshift.route.tls.insecure-edge-termination-policy Table 1.16. Service Trait Trait Parameter Quarkus Parameter service.enabled The Service kubernetes object is created automatically. To disable it, you must remove the kind: Service from target/kubernetes/openshift.yml before deployment. 1.3. Kamel run configuration There are additional configuration parameters in the kamel run command listed below, along with their equivalents in the Red Hat build of Apache Camel for Quarkus, which must be added in src/main/resources/application.properties or pom.xml . 
kamel run parameter Quarkus Parameter --annotation quarkus.openshift.annotations.<annotation-name>=<value> --build-property Add the property in the <properties> tag of the pom.xml . --dependency Add the dependency in pom.xml . --env quarkus.openshift.env.vars.<env-name>=<value> --label quarkus.openshift.labels.<label-name>=<value> --maven-repository Add the repository in pom.xml or use the camel export --repos=<my repo> . --logs oc logs -f `oc get pod -l app.kubernetes.io/name=<artifact name> -oname` --volume quarkus.openshift.mounts.<my-volume>.path=</where/to/mount> 1.4. Kamelets, KameletBindings and Pipes The Camel K operator bundles the Kamelets and installs them as Kubernetes objects. For a Red Hat build of Apache Camel for Quarkus project, you must manage the Kamelet YAML files in the Maven project. There are two ways to manage the Kamelet YAML files. 1. Kamelets are packaged and released as the Maven artifact org.apache.camel.kamelets:camel-kamelets . You can add this dependency to pom.xml , and when the Camel route starts, it loads the Kamelet YAML files from that JAR file on the classpath. There are open source Kamelets and the ones produced by Red Hat, whose artifact suffix is redhat-000nnn , for example 1.10.7.redhat-00015 . These are available from the Red Hat Maven repository . 2. Add the Kamelet YAML files to the src/main/resources/kamelets directory, so that they are later packaged in the final deployable artifact. Do not declare the org.apache.camel.kamelets:camel-kamelets dependency in pom.xml . This way, the Camel route loads the Kamelet YAML files from the packaged project. KameletBinding was renamed to Pipe , so keep this in mind when reading use case 3. While the Kubernetes resource name KameletBinding is still supported, it is deprecated. We recommend renaming it to Pipe as soon as possible. We recommend updating the Kamelets, as there have been many updates since Camel K 1.10.7. For example, you can compare the jms-amqp-10-sink.kamelet.yaml of 1.10 and 2.3. If you have custom Kamelets, you must update them accordingly: rename flow to template in Kamelet files, and rename property to properties for the bean properties. 1.4.1. Knative When running integration routes with Knative endpoints in Camel K, the Camel K Operator creates some Knative objects such as SinkBinding , Trigger , and Subscription . The Camel K Operator also creates the knative.json environment file, which is required for the camel-knative component to interact with the Knative objects deployed in the cluster. Example of a knative.json { "services": [ { "type": "channel", "name": "messages", "url": "{{k.sink}}", "metadata": { "camel.endpoint.kind": "sink", "knative.apiVersion": "messaging.knative.dev/v1", "knative.kind": "Channel", "knative.reply": "false" } } ] } Red Hat build of Apache Camel for Quarkus is a Maven project. You must create those Knative files manually and provide additional configuration. See use case 2 for the migration of an integration route with Knative endpoints. 1.4.2. Monitoring We recommend adding custom labels to identify the Kubernetes objects installed in the cluster, to make these objects easier to locate. By default, the Quarkus OpenShift extension adds the label app.kubernetes.io/name=<app name> , so you can search for the objects created using this label. For monitoring purposes, you can use the HawtIO Diagnostic Console to monitor the Camel applications. 1.5. Migration Process The migration process is composed of the following steps.
Task Description Create the maven project Use the camel cli from Camel JBang to export the files, it will create a maven project. Adjust the configuration Configure the project by adding and changing files. Build Building the project will generate the JAR files. Build the container image and push to a container registry. Deploy Deploy the kubernetes objects to the Openshift cluster and run the pod. 1.5.1. Migration Steps 1.5.1.1. Use Case 1 - Simple Integration Route with Configuration Given the following integration route, featuring rest and kamelet endpoints. import org.apache.camel.builder.RouteBuilder; public class Http2Jms extends RouteBuilder { @Override public void configure() throws Exception { rest() .post("/message") .id("rest") .to("direct:jms"); from("direct:jms") .log("Sending message to JMS {{broker}}: USD{body}") .to("kamelet:jms-amqp-10-sink?remoteURI=amqp://myhost:61616&destinationName=queue"); } } The http2jms.properties file The kamel run command It builds and runs the pod with the annotations. Environment variable and the properties file are added as a ConfigMap and mounted in the pod. 1.5.1.1.1. Step 1 - Create the maven project Use camel jbang to export the file into a maven project. camel export \ --runtime=quarkus \ --quarkus-group-id=com.redhat.quarkus.platform \ --quarkus-version=3.8.5.redhat-00003 \ --repos=https://maven.repository.redhat.com/ga \ --dep=io.quarkus:quarkus-openshift \ --gav=com.mycompany:ceq-app:1.0 \ --dir=ceq-app1 \ Http2Jms.java Description of the parameters: Parameter Description --runtime=quarkus Use the Quarkus runtime. The generated project contains the quarkus BOM. --quarkus-group-id=com.redhat.quarkus.platform The Red Hat supported quarkus platform maven artifact group is com.redhat.quarkus.platform . --quarkus-version=3.8.5.redhat-00003 This is the latest supported version at the time. Check the Quarkus documentation for a recent release version. --repos=https://maven.repository.redhat.com/ga Use the Red Hat Maven repository with the GA releases. --dep=io.quarkus:quarkus-openshift Adds the quarkus-openshift dependency to pom.xml ,to build in Openshift. --gav=com.mycompany:ceq-app:1.0 Set a GAV to the generated pom.xml. You must set a GAV accordingly to your project. --dir=ceq-app1 The maven project directory. You can see more parameters with camel export --help If you are using kamelets, it must be part of the maven project. You can download the Kamelet repository and unzip it. If you have any custom kamelets, add them to this kamelet directory. While using camel export , you can use the parameter --local-kamelet-dir=<kamelet directory> that copies all kamelets to src/main/resources/kamelets , which are later packed into the final archive. If you choose not to use the --local-kamelet-dir=<kamelet directory> parameter, then you must manually copy the desired kamelet yaml files to the above mentioned directory. Track the artifact name in the generated pom, as the artifact name is used in the generated Openshift files (Deployment, Service, Route, etc.). 1.5.1.1.2. Step 2 - Configure the project This is the step to configure the maven project and artifacts to suit your environment. Get into the maven project cd ceq-app1 Set the docker build strategy. 
echo quarkus.openshift.build-strategy=docker >> src/main/resources/application.properties Change the base image to OpenJDK 21 in src/main/docker (optional) FROM registry.access.redhat.com/ubi9/openjdk-21:1.20 Change the compiler version to 21 in pom.xml (optional) <maven.compiler.release>21</maven.compiler.release> Set the environment variables, labels and annotations in src/main/resources/application.properties , if you need them. If you want to customize the image and container registry settings with these parameters: quarkus.container-image.registry quarkus.container-image.group quarkus.container-image.name quarkus.container-image.tag As there is a http2jms.properties with configuration used at runtime, kamel cli creates a ConfigMap and mount it in the pod. We must achieve the same with Red Hat build of Apache Camel for Quarkus. Create a local ConfigMap file named ceq-app in `src/main/kubernetes/common.yml which will be a part of the image build process. The following command sets the ConfigMap key as application.properties oc create configmap ceq-app --from-file application.properties=http2jms.properties --dry-run=client -oyaml > src/main/kubernetes/common.yml Add the following property to application.properties , for Quarkus to mount the ConfigMap . 1.5.1.1.3. Step 3 - Build Build the package for local inspection. ./mvnw -ntp package This step builds the maven artifacts (JAR files) locally and generates the Openshift files in target/kubernetes directory. Track the target/kubernetes/openshift.yml to understand the deployment that is deployed to the Openshift cluster. 1.5.1.1.4. Step 4 - Build and Deploy Build the package and deploy to Openshift ./mvnw -ntp package -Dquarkus.openshift.deploy=true You can follow the image build in the maven output. After the build, you can see the pod running. 1.5.1.1.5. Step 5 - Test Verify if the integration route is working. If the project can run locally, you can try the following. mvn -ntp quarkus:run Follow the pod container log oc logs -f `oc get pod -l app.kubernetes.io/name=app -oname` It must show something like the following output: INFO exec -a "java" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp "." 
-jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] broker=amqp://172.30.177.216:61616 [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] queue=qtest [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] destinationName=qtest [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] connectionFactoryBean=connectionFactoryBean-1 [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] remoteURI=amqp://172.30.177.216:61616 [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:3) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (direct://jms) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started rest (rest://post:/message) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started jms-amqp-10-sink-1 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 17ms (build:0ms init:0ms start:17ms) [io.quarkus] (main) app 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.115s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-amqp, camel-attachments, camel-core, camel-direct, camel-jms, camel-kamelet, camel-microprofile-health, camel-platform-http, camel-rest, camel-rest-openapi, camel-yaml-dsl, cdi, kubernetes, qpid-jms, smallrye-context-propagation, smallrye-health, vertx] See the MicroProfilePropertiesSource line, it shows the content of the properties file added as a ConfigMap and mounted into the pod. 1.5.1.2. Use Case 2 - Knative Integration Route This use case features two Knative integration routes. The Feed route periodically sends a text message to a Knative channel, The second route Printer receives the message from the Knative channel and prints it. For Camel K, there are two pods, each one running a single integration route. So, this migration must create two projects, each one having one integration route. Later if you want, you can customize it to a single maven project with both integration routes in a single pod. The Feed integration route. import org.apache.camel.builder.RouteBuilder; public class Feed extends RouteBuilder { @Override public void configure() throws Exception { from("timer:clock?period=15s") .setBody().simple("Hello World from Camel - USD{date:now}") .log("sent message to messages channel: USD{body}") .to("knative:channel/messages"); } } The Printer integration route. import org.apache.camel.builder.RouteBuilder; public class Printer extends RouteBuilder { @Override public void configure() throws Exception { from("knative:channel/messages") .convertBodyTo(String.class) .to("log:info"); } } The kamel run command shows you how this runs with Camel K. kamel run Feed.java kamel run Printer.java There are going to be two pods running. 1.5.1.2.1. 
Step 1 - Create the maven project Use camel jbang to export the file into a full maven project Export the feed integration. camel export \ --runtime=quarkus \ --quarkus-group-id=com.redhat.quarkus.platform \ --quarkus-version=3.8.5.redhat-00003 \ --repos=https://maven.repository.redhat.com/ga \ --dep=io.quarkus:quarkus-openshift \ --gav=com.mycompany:ceq-feed:1.0 \ --dir=ceq-feed \ Feed.java Export the printer integration. camel export \ --runtime=quarkus \ --quarkus-group-id=com.redhat.quarkus.platform \ --quarkus-version=3.8.5.redhat-00003 \ --repos=https://maven.repository.redhat.com/ga \ --dep=io.quarkus:quarkus-openshift \ --gav=com.mycompany:ceq-printer:1.0 \ --dir=ceq-printer \ Printer.java A maven project will be created for each integration. 1.5.1.2.2. Step 2 - Configure the project This step is to configure the maven project and the artifacts to suit your environment. Use case 1 contains information about labels, annotation and configuration in ConfigMaps. Get into the maven project Set the docker build strategy. Change the base image to OpenJDK 21 in src/main/docker (optional) Change the compiler version to 21 in pom.xml (optional) Add openshift as a deployment target. You must set these container image properties, to set the image address in the generated openshift.yml and knative.yml file. Add the following property in application.properties to allow the Knative controller to inject the K_SINK environment variable to the deployment. Add the knative.json in src/main/resources . This is a required configuration for Camel to connect to the Knative channel. Note There is k.sink property placeholder. When the pod is running it will look at the environment variable named K_SINK and replace in the url value. { "services": [ { "type": "channel", "name": "messages", "url": "{{k.sink}}", "metadata": { "camel.endpoint.kind": "sink", "knative.apiVersion": "messaging.knative.dev/v1", "knative.kind": "Channel", "knative.reply": "false" } } ] } Add the following property to allow Camel to load the Knative environment configuration. To make the inject work, you must create a Knative SinkBinding object. Add the SinkBinding file to src/main/kubernetes/openshift.yml cat <<EOF >> src/main/kubernetes/openshift.yml apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: finalizers: - sinkbindings.sources.knative.dev name: ceq-feed spec: sink: ref: apiVersion: messaging.knative.dev/v1 kind: Channel name: messages subject: apiVersion: apps/v1 kind: Deployment name: ceq-feed EOF Now, configure the ceq-printer project. Set the docker build strategy. Change the base image to OpenJDK 21 in src/main/docker (optional) Change the compiler version to 21 in pom.xml (optional) Set knative as a deployment target. You must set these container image properties, to correctly set the image address in the generated openshift.yml and knative.yml file. Add the knative.json in src/main/resources . This is a required configuration for Camel to connect to the Knative channel. { "services": [ { "type": "channel", "name": "messages", "path": "/channels/messages", "metadata": { "camel.endpoint.kind": "source", "knative.apiVersion": "messaging.knative.dev/v1", "knative.kind": "Channel", "knative.reply": "false" } } ] } Add the following property to allow Camel to load the Knative environment configuration. A Knative Subscription is required for the message delivery from the channel to a sink. 
Add the Subscription file to src/main/kubernetes/knative.yml apiVersion: messaging.knative.dev/v1 kind: Subscription metadata: finalizers: - subscriptions.messaging.knative.dev name: ceq-printer spec: channel: apiVersion: messaging.knative.dev/v1 kind: Channel name: messages subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: ceq-printer uri: /channels/messages 1.5.1.2.3. Step 3 - Build Build the package for local inspection. ./mvnw -ntp package This step builds the maven artifacts (JAR files) locally and generates the Openshift files in target/kubernetes directory. Track the target/kubernetes/openshift.yml and `target/kubernetes/knative.yml`to understand the deployment that is deployed to the Openshift cluster. 1.5.1.2.4. Step 4 - Build and Deploy Build the package and deploy to Openshift. You can follow the image build in the maven output. After build, you can see the pod running. 1.5.1.2.5. Step 5 - Test Verify if the integration route is working. Follow the pod container log It must show like the following output: ceq-feed pod INFO exec -a "java" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp "." -jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.mai.BaseMainSupport] (main) Auto-configuration summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] camel.component.knative.environmentPath=classpath:knative.json [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [OS Environment Variable] k.sink=http://hello-kn-channel.cmiranda-camel.svc.cluster.local [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:1) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (timer://clock) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 43ms (build:0ms init:0ms start:43ms) [io.quarkus] (main) ceq-feed 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.386s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cloudevents, camel-core, camel-knative, camel-platform-http, camel-rest, camel-rest-openapi, camel-timer, cdi, kubernetes, smallrye-context-propagation, vertx] [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:54:41 UTC 2024 [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:54:56 UTC 2024 [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:55:11 UTC 2024 See the Property-placeholders . It shows the k.sink property value. ceq-printer pod INFO exec -a "java" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp "." 
-jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.mai.BaseMainSupport] (main) Auto-configuration summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] camel.component.knative.environmentPath=classpath:knative.json [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:1) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (knative://channel/hello) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 10ms (build:0ms init:0ms start:10ms) [io.quarkus] (main) ceq-printer 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.211s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cloudevents, camel-core, camel-knative, camel-log, camel-platform-http, camel-rest, camel-rest-openapi, cdi, kubernetes, smallrye-context-propagation, vertx] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:54:41 UTC 2024] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:54:56 UTC 2024] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:55:11 UTC 2024] 1.5.1.3. Use Case 3 - Pipe Given the following integration route as a KameletBinding. apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sample spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: period: 5000 contentType: application/json message: '{"id":"1","field":"hello","message":"Camel Rocks"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: extract-field-action properties: field: "message" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: log-sink properties: showStreams: true 1.5.1.3.1. Step 1 - Create the maven project Use camel jbang to export the file into a maven project. camel export \ --runtime=quarkus \ --quarkus-group-id=com.redhat.quarkus.platform \ --quarkus-version=3.8.5.redhat-00003 \ --repos=https://maven.repository.redhat.com/ga \ --dep=io.quarkus:quarkus-openshift \ --gav=com.mycompany:ceq-timer2log-kbind:1.0 \ --dir=ceq-timer2log-kbind \ timer-2-log-kbind.yaml You can see more parameters with camel export --help 1.5.1.3.2. Step 2 - Configure the project This is the step to configure the maven project and the artifacts to suit your environment. Note You can follow use cases 1 and 2 for the common configuration and we will provide the steps required for the KameletBinding configuration. You can try to run the integration route locally with camel jbang to see how it works, before building and deploying to Openshift. Get into the maven project cd ceq-timer2log-kbind See the note at the beginning about how to manage Kamelets. For this migration use case, I use the org.apache.camel.kamelets:camel-kamelets dependency in pom.xml . 
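For reference, a minimal sketch of that dependency declaration in pom.xml is shown below; the camel-kamelets.version property name is an assumption used for illustration, so pin it to the Red Hat-built camel-kamelets release that matches your Kamelet catalog (a 2.3.x redhat build) from the Red Hat Maven repository.
<!-- Sketch: load the Kamelet YAML files from the camel-kamelets JAR on the classpath -->
<dependency>
    <groupId>org.apache.camel.kamelets</groupId>
    <artifactId>camel-kamelets</artifactId>
    <!-- camel-kamelets.version is a placeholder property; set it to the catalog release you target -->
    <version>${camel-kamelets.version}</version>
</dependency>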
When exporting, it adds the following properties in application.properties , but you can remove it. Set the docker build strategy. If your Kamelet or KameletBinding has trait annotations like the following: trait.camel.apache.org/environment.vars: "my_key=my_val" , then you must follow the trait configuration section about how to set it using Quarkus properties. 1.5.1.3.3. Step 3 - Build Build the package for local inspection. ./mvnw -ntp package This step builds the maven artifacts (JAR files) locally and generates the Openshift manifest files in target/kubernetes directory. Track the target/kubernetes/openshift.yml to understand the deployment that is deployed to the Openshift cluster. 1.5.1.3.4. Step 4 - Build and Deploy Build the package and deploy to Openshift. ./mvnw -ntp package -Dquarkus.openshift.deploy=true You can follow the image build in the maven output. After build, you can see the pod running. 1.5.1.3.5. Step 5 - Test Verify if the integration route is working. Follow the pod container log oc logs -f `oc get pod -l app.kubernetes.io/name=ceq-timer2log-kbind -oname` It must show like the following output: [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.cli.con.LocalCliConnector] (main) Management from Camel JBang enabled [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] period=5000 [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] message={"id":"1","field":"hello","message":"Camel Rocks"} [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] contentType=application/json [org.apa.cam.mai.BaseMainSupport] (main) [log-sink.kamelet.yaml] showStreams=true [org.apa.cam.mai.BaseMainSupport] (main) [ct-field-action.kamelet.yaml] extractField=extractField-1 [org.apa.cam.mai.BaseMainSupport] (main) [ct-field-action.kamelet.yaml] field=message [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:4) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started sample (kamelet://timer-source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started timer-source-1 (timer://tick) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started log-sink-2 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started extract-field-action-3 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 276ms (build:0ms init:0ms start:276ms) [io.quarkus] (main) ceq-timer2log-kbind 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.867s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cli-connector, camel-console, camel-core, camel-direct, camel-jackson, camel-kamelet, camel-log, camel-management, camel-microprofile-health, camel-platform-http, camel-rest, camel-rest-openapi, camel-timer, camel-xml-jaxb, camel-yaml-dsl, cdi, kubernetes, smallrye-context-propagation, smallrye-health, vertx] [log-sink] (Camel (camel-1) thread #2 - timer://tick) Exchange[ExchangePattern: InOnly, BodyType: org.apache.camel.converter.stream.InputStreamCache, Body: "Camel Rocks"] 1.5.2. 
Undeploy kubernetes resources To delete all resources installed by the quarkus-maven-plugin, you must run the following command. 1.5.3. Kubernetes CronJob Camel K has a feature for consumers of type cron, quartz or timer. In some circumstances, it creates a Kubernetes CronJob object instead of a regular Deployment . This saves computing resources by not running the Deployment Pod all the time. To obtain the same outcome in Red Hat build of Apache Camel for Quarkus, you must set the following properties in src/main/resources/application.properties , and you must set the timer consumer to execute only once, as follows: from("timer:java?delay=0&period=1&repeatCount=1") The following are the timer parameters. delay=0 : Start the consumer with no delay. period=1 : The timer period is 1 ms (irrelevant here, because the timer fires only once). repeatCount=1 : Do not fire again after the first run. 1.6. Troubleshooting 1.6.1. Product Support If you encounter any problems during the migration process, you can open a support case and we will help you resolve the issue. 1.6.2. Ignore loading errors when exporting with camel jbang When using camel jbang export, it may fail to load the routes. In this case, you can use the --ignore-loading-error parameter, as follows: 1.6.3. Increase logging You can set category-level logging by using the following property in application.properties , for example to set the org.apache.camel.component.knative category to debug level. 1.6.4. Disable health checks Your application pod may fail with CrashLoopBackOff and the following error appears in the pod log. If you do not want the container health checks, you can disable them by removing this maven dependency from the pom.xml <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-microprofile-health</artifactId> </dependency> 1.7. Known Issues There are a few known issues related to migrating integration routes, along with their workarounds. These workarounds are not limitations of the Red Hat build of Apache Camel for Quarkus, but rather part of the migration process. Once the migration is complete, the resulting Maven project is customizable to meet customer needs. 1.7.1. Camel K features not available in Camel for Quarkus Some Camel K features are not available in Quarkus or Camel as a quarkus property. These features may require additional configuration steps to achieve the same functionality when building and deploying in Red Hat build of Apache Camel for Quarkus. 1.7.1.1. Owner Trait The owner trait sets the kubernetes owner fields for all created resources, simplifying the process of tracking who created a kubernetes resource. There is an open Quarkus issue #13952 requesting this feature. There is no workaround to set the owner fields. 1.7.1.2. Affinity Trait The node affinity trait enables you to constrain the nodes on which the integration pods are scheduled to run. There is an open Quarkus issue #13596 requesting this feature. The workaround would be to implement a post-processing task after the maven package step, to add the affinity configuration to target/kubernetes/openshift.yml . 1.7.1.3. PodDisruptionBudget Trait The PodDisruptionBudget trait allows you to configure the PodDisruptionBudget resource for the Integration pods. There is no configuration in Quarkus to generate the PodDisruptionBudget resource. The workaround would be to implement a post-processing task after the maven package step, to add the PodDisruptionBudget configuration to target/kubernetes/openshift.yml . 1.7.2.
Camel Jbang fails to add camel-quarkus-direct dependency If the integration route contains a rest and a direct endpoint, as shown in the example below, verify that pom.xml contains the camel-quarkus-direct dependency. If it is missing, you must add it. rest() .post("/message") .id("rest") .to("direct:foo"); from("direct:foo") .log("hello"); The camel-quarkus-direct dependency to add to the pom.xml <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-direct</artifactId> </dependency> 1.7.3. Quarkus build fails with "The server certificate is not trusted by the client" In this case, you must either add the server public key to the client or trust the server certificate. If you are testing, you can add the following property to src/main/resources/application.properties and rebuild. 1.7.4. Camel Jbang fails to export a route Camel Jbang fails to export a route when the route contains a Kamelet endpoint that is backed by a bean. If the endpoint contains a Kamelet with property placeholders such as {{broker}} , and the Kamelet uses a type: "#class:org.apache.qpid.jms.JmsConnectionFactory" bean to initialize the Camel component, the export may fail. The failure produces the following errors. How to fix: Replace the property placeholders in the Kamelet endpoint {{broker}} and {{queue}} with any value, for example: remoteURI=broker&destinationName=queue . Now export the file, and you can add the property placeholders back in the exported route in the src/main/ directory. 1.8. Reference documentation For more details about Camel products, refer to the following links. Red Hat build of Apache Camel for Quarkus releases Red Hat build of Apache Camel for Quarkus Documentation, including migration to Camel Spring Boot Camel K documentation Deploying a Camel Spring Boot application to OpenShift Deploying Red Hat build of Apache Camel for Quarkus applications Deploying your Red Hat build of Quarkus applications to OpenShift Container Platform Developer Resources for Red Hat Build of Quarkus Quarkus Configuration for Kubernetes Quarkus Configuration for Openshift | [
"<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-smallrye-health</artifactId> </dependency>",
"ENV JAVA_OPTS=\"USDJAVA_OPTS -Dquarkus.log.category.\\\"org.apache.camel\\\".level=debug\"",
"affinity.pod-affinity affinity.pod-affinity-labels affinity.pod-anti-affinity affinity.pod-anti-affinity-labels affinity.node-affinity-labels",
"quarkus.kubernetes.deployment-target=knative quarkus.container-image.group=<group-name> quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000",
"<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-micrometer</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency>",
"pdb.enabled pdb.min-available pdb.max-unavailable",
"{ \"services\": [ { \"type\": \"channel\", \"name\": \"messages\", \"url\": \"{{k.sink}}\", \"metadata\": { \"camel.endpoint.kind\": \"sink\", \"knative.apiVersion\": \"messaging.knative.dev/v1\", \"knative.kind\": \"Channel\", \"knative.reply\": \"false\" } } ] }",
"import org.apache.camel.builder.RouteBuilder; public class Http2Jms extends RouteBuilder { @Override public void configure() throws Exception { rest() .post(\"/message\") .id(\"rest\") .to(\"direct:jms\"); from(\"direct:jms\") .log(\"Sending message to JMS {{broker}}: USD{body}\") .to(\"kamelet:jms-amqp-10-sink?remoteURI=amqp://myhost:61616&destinationName=queue\"); } }",
"broker=amqp://172.30.177.216:61616 queue=qtest",
"kamel run Http2Jms.java -p file://USDPWD/http2jms.properties --annotation some_annotation=foo --env MY_ENV1=VAL1",
"camel export --runtime=quarkus --quarkus-group-id=com.redhat.quarkus.platform --quarkus-version=3.8.5.redhat-00003 --repos=https://maven.repository.redhat.com/ga --dep=io.quarkus:quarkus-openshift --gav=com.mycompany:ceq-app:1.0 --dir=ceq-app1 Http2Jms.java",
"cd ceq-app1",
"echo quarkus.openshift.build-strategy=docker >> src/main/resources/application.properties",
"FROM registry.access.redhat.com/ubi9/openjdk-21:1.20",
"<maven.compiler.release>21</maven.compiler.release>",
"quarkus.openshift.annotations.sample_annotation=sample_value1 quarkus.openshift.env.vars.SAMPLE_KEY=sample_value2 quarkus.openshift.labels.sample_label=sample_value3",
"quarkus.container-image.registry quarkus.container-image.group quarkus.container-image.name quarkus.container-image.tag",
"create configmap ceq-app --from-file application.properties=http2jms.properties --dry-run=client -oyaml > src/main/kubernetes/common.yml",
"quarkus.openshift.app-config-map=ceq-app",
"./mvnw -ntp package",
"./mvnw -ntp package -Dquarkus.openshift.deploy=true",
"mvn -ntp quarkus:run",
"logs -f `oc get pod -l app.kubernetes.io/name=app -oname`",
"INFO exec -a \"java\" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp \".\" -jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] broker=amqp://172.30.177.216:61616 [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] queue=qtest [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] destinationName=qtest [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] connectionFactoryBean=connectionFactoryBean-1 [org.apa.cam.mai.BaseMainSupport] (main) [ms-amqp-10-sink.kamelet.yaml] remoteURI=amqp://172.30.177.216:61616 [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:3) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (direct://jms) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started rest (rest://post:/message) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started jms-amqp-10-sink-1 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 17ms (build:0ms init:0ms start:17ms) [io.quarkus] (main) app 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.115s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-amqp, camel-attachments, camel-core, camel-direct, camel-jms, camel-kamelet, camel-microprofile-health, camel-platform-http, camel-rest, camel-rest-openapi, camel-yaml-dsl, cdi, kubernetes, qpid-jms, smallrye-context-propagation, smallrye-health, vertx]",
"import org.apache.camel.builder.RouteBuilder; public class Feed extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:clock?period=15s\") .setBody().simple(\"Hello World from Camel - USD{date:now}\") .log(\"sent message to messages channel: USD{body}\") .to(\"knative:channel/messages\"); } }",
"import org.apache.camel.builder.RouteBuilder; public class Printer extends RouteBuilder { @Override public void configure() throws Exception { from(\"knative:channel/messages\") .convertBodyTo(String.class) .to(\"log:info\"); } }",
"kamel run Feed.java kamel run Printer.java",
"camel export --runtime=quarkus --quarkus-group-id=com.redhat.quarkus.platform --quarkus-version=3.8.5.redhat-00003 --repos=https://maven.repository.redhat.com/ga --dep=io.quarkus:quarkus-openshift --gav=com.mycompany:ceq-feed:1.0 --dir=ceq-feed Feed.java",
"camel export --runtime=quarkus --quarkus-group-id=com.redhat.quarkus.platform --quarkus-version=3.8.5.redhat-00003 --repos=https://maven.repository.redhat.com/ga --dep=io.quarkus:quarkus-openshift --gav=com.mycompany:ceq-printer:1.0 --dir=ceq-printer Printer.java",
"cd ceq-feed",
"echo quarkus.openshift.build-strategy=docker >> src/main/resources/application.properties",
"FROM registry.access.redhat.com/ubi9/openjdk-21:1.20",
"<maven.compiler.release>21</maven.compiler.release>",
"quarkus.kubernetes.deployment-target=openshift",
"quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000 quarkus.container-image.group=<namespace>",
"quarkus.openshift.labels.\"bindings.knative.dev/include\"=true",
"{ \"services\": [ { \"type\": \"channel\", \"name\": \"messages\", \"url\": \"{{k.sink}}\", \"metadata\": { \"camel.endpoint.kind\": \"sink\", \"knative.apiVersion\": \"messaging.knative.dev/v1\", \"knative.kind\": \"Channel\", \"knative.reply\": \"false\" } } ] }",
"camel.component.knative.environmentPath=classpath:knative.json",
"cat <<EOF >> src/main/kubernetes/openshift.yml apiVersion: sources.knative.dev/v1 kind: SinkBinding metadata: finalizers: - sinkbindings.sources.knative.dev name: ceq-feed spec: sink: ref: apiVersion: messaging.knative.dev/v1 kind: Channel name: messages subject: apiVersion: apps/v1 kind: Deployment name: ceq-feed EOF",
"cd ceq-printer",
"echo quarkus.openshift.build-strategy=docker >> src/main/resources/application.properties",
"FROM registry.access.redhat.com/ubi9/openjdk-21:1.20",
"<maven.compiler.release>21</maven.compiler.release>",
"quarkus.kubernetes.deployment-target=knative",
"quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000 quarkus.container-image.group=<namespace>",
"{ \"services\": [ { \"type\": \"channel\", \"name\": \"messages\", \"path\": \"/channels/messages\", \"metadata\": { \"camel.endpoint.kind\": \"source\", \"knative.apiVersion\": \"messaging.knative.dev/v1\", \"knative.kind\": \"Channel\", \"knative.reply\": \"false\" } } ] }",
"camel.component.knative.environmentPath=classpath:knative.json",
"apiVersion: messaging.knative.dev/v1 kind: Subscription metadata: finalizers: - subscriptions.messaging.knative.dev name: ceq-printer spec: channel: apiVersion: messaging.knative.dev/v1 kind: Channel name: messages subscriber: ref: apiVersion: serving.knative.dev/v1 kind: Service name: ceq-printer uri: /channels/messages",
"./mvnw -ntp package",
"./mvnw -ntp package -Dquarkus.openshift.deploy=true",
"logs -f `oc get pod -l app.kubernetes.io/name=ceq-feed -oname`",
"INFO exec -a \"java\" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp \".\" -jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.mai.BaseMainSupport] (main) Auto-configuration summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] camel.component.knative.environmentPath=classpath:knative.json [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [OS Environment Variable] k.sink=http://hello-kn-channel.cmiranda-camel.svc.cluster.local [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:1) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (timer://clock) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 43ms (build:0ms init:0ms start:43ms) [io.quarkus] (main) ceq-feed 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.386s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cloudevents, camel-core, camel-knative, camel-platform-http, camel-rest, camel-rest-openapi, camel-timer, cdi, kubernetes, smallrye-context-propagation, vertx] [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:54:41 UTC 2024 [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:54:56 UTC 2024 [route1] (Camel (camel-1) thread #1 - timer://clock) sent message to hello channel: Hello World from Camel - Thu Aug 01 13:55:11 UTC 2024",
"INFO exec -a \"java\" java -Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager -cp \".\" -jar /deployments/quarkus-run.jar INFO running in /deployments __ ____ __ _____ ___ __ ____ ______ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / __/ -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/___/ [org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.mai.BaseMainSupport] (main) Auto-configuration summary [org.apa.cam.mai.BaseMainSupport] (main) [MicroProfilePropertiesSource] camel.component.knative.environmentPath=classpath:knative.json [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:1) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started route1 (knative://channel/hello) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 10ms (build:0ms init:0ms start:10ms) [io.quarkus] (main) ceq-printer 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.211s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cloudevents, camel-core, camel-knative, camel-log, camel-platform-http, camel-rest, camel-rest-openapi, cdi, kubernetes, smallrye-context-propagation, vertx] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:54:41 UTC 2024] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:54:56 UTC 2024] [info] (executor-thread-1) Exchange[ExchangePattern: InOnly, BodyType: String, Body: Hello World from Camel - Thu Aug 01 13:55:11 UTC 2024]",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: sample spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: period: 5000 contentType: application/json message: '{\"id\":\"1\",\"field\":\"hello\",\"message\":\"Camel Rocks\"}' steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: extract-field-action properties: field: \"message\" sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1 name: log-sink properties: showStreams: true",
"camel export --runtime=quarkus --quarkus-group-id=com.redhat.quarkus.platform --quarkus-version=3.8.5.redhat-00003 --repos=https://maven.repository.redhat.com/ga --dep=io.quarkus:quarkus-openshift --gav=com.mycompany:ceq-timer2log-kbind:1.0 --dir=ceq-timer2log-kbind timer-2-log-kbind.yaml",
"cd ceq-timer2log-kbind",
"quarkus.native.resources.includes camel.main.routes-include-pattern",
"echo quarkus.openshift.build-strategy=docker >> src/main/resources/application.properties",
"./mvnw -ntp package",
"./mvnw -ntp package -Dquarkus.openshift.deploy=true",
"logs -f `oc get pod -l app.kubernetes.io/name=ceq-timer2log-kbind -oname`",
"[org.apa.cam.qua.cor.CamelBootstrapRecorder] (main) Bootstrap runtime: org.apache.camel.quarkus.main.CamelMainRuntime [org.apa.cam.mai.MainSupport] (main) Apache Camel (Main) 4.4.0.redhat-00025 is starting [org.apa.cam.cli.con.LocalCliConnector] (main) Management from Camel JBang enabled [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) is starting [org.apa.cam.mai.BaseMainSupport] (main) Property-placeholders summary [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] period=5000 [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] message={\"id\":\"1\",\"field\":\"hello\",\"message\":\"Camel Rocks\"} [org.apa.cam.mai.BaseMainSupport] (main) [timer-source.kamelet.yaml] contentType=application/json [org.apa.cam.mai.BaseMainSupport] (main) [log-sink.kamelet.yaml] showStreams=true [org.apa.cam.mai.BaseMainSupport] (main) [ct-field-action.kamelet.yaml] extractField=extractField-1 [org.apa.cam.mai.BaseMainSupport] (main) [ct-field-action.kamelet.yaml] field=message [org.apa.cam.imp.eng.AbstractCamelContext] (main) Routes startup (started:4) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started sample (kamelet://timer-source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started timer-source-1 (timer://tick) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started log-sink-2 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Started extract-field-action-3 (kamelet://source) [org.apa.cam.imp.eng.AbstractCamelContext] (main) Apache Camel 4.4.0.redhat-00025 (camel-1) started in 276ms (build:0ms init:0ms start:276ms) [io.quarkus] (main) ceq-timer2log-kbind 1.0 on JVM (powered by Quarkus 3.8.5.redhat-00004) started in 1.867s. Listening on: http://0.0.0.0:8080 [io.quarkus] (main) Profile prod activated. [io.quarkus] (main) Installed features: [camel-attachments, camel-cli-connector, camel-console, camel-core, camel-direct, camel-jackson, camel-kamelet, camel-log, camel-management, camel-microprofile-health, camel-platform-http, camel-rest, camel-rest-openapi, camel-timer, camel-xml-jaxb, camel-yaml-dsl, cdi, kubernetes, smallrye-context-propagation, smallrye-health, vertx] [log-sink] (Camel (camel-1) thread #2 - timer://tick) Exchange[ExchangePattern: InOnly, BodyType: org.apache.camel.converter.stream.InputStreamCache, Body: \"Camel Rocks\"]",
"delete -f target/kubernetes/openshift.yml",
"quarkus.openshift.deployment-kind=CronJob quarkus.openshift.cron-job.schedule=<your cron schedule> camel.main.duration-max-idle-seconds=1",
"from(\"timer:java?delay=0&period=1&repeatCount=1\")",
"camel export --ignore-loading-error <parameters>",
"quarkus.log.category.\"org.apache.camel.component.knative\".level=debug",
"Get \"http://127.0.0.1:8080/q/health/ready\": dial tcp 127.0.0.1:8080: connect: connection refused",
"<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-microprofile-health</artifactId> </dependency>",
"rest() .post(\"/message\") .id(\"rest\") .to(\"direct:foo\"); from(\"direct:foo\") .log(\"hello\");",
"<dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-direct</artifactId> </dependency>",
"PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target",
"quarkus.kubernetes-client.trust-certs=true",
"from(\"direct:jms\") .to(\"kamelet:jms-amqp-10-sink?remoteURI={{broker}}&destinationName={{queue}}\");",
"org.apache.camel.RuntimeCamelException: org.apache.camel.VetoCamelContextStartException: Failure creating route from template: jms-amqp-10-sink Caused by: org.apache.camel.VetoCamelContextStartException: Failure creating route from template: jms-amqp-10-sink Caused by: org.apache.camel.component.kamelet.FailedToCreateKameletException: Error creating or loading Kamelet with id jms-amqp-10-sink (locations: classpath:kamelets,github:apache:camel-kamelets/kamelets) Caused by: org.apache.camel.FailedToCreateRouteException: Failed to create route jms-amqp-10-sink-1 at: >>> To[jms:{{destinationType}}:{{destinationName}}?connectionFactory=#bean:{{connectionFactoryBean}}] Caused by: org.apache.camel.ResolveEndpointFailedException: Failed to resolve endpoint: jms://Queue:USD%7Bqueue%7D?connectionFactory=%23bean%3AconnectionFactoryBean-1 due to: Error binding property (connectionFactory=#bean:connectionFactoryBean-1) Caused by: org.apache.camel.PropertyBindingException: Error binding property (connectionFactory=#bean:connectionFactoryBean-1) with name: connectionFactory on bean: Caused by: java.lang.IllegalStateException: Cannot create bean: #class:org.apache.qpid.jms.JmsConnectionFactory Caused by: org.apache.camel.PropertyBindingException: Error binding property (remoteURI=@@[broker]@@) with name: remoteURI on bean: org.apache.qpid.jms.JmsConnectionFactory@a2b54e3 with value: @@[broker]@@ Caused by: java.lang.IllegalArgumentException: Invalid remote URI: @@[broker]@@ Caused by: java.net.URISyntaxException: Illegal character in path at index 2: @@[broker]@@"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/migration_guide_camel_k_to_camel_extensions_for_quarkus/overview |
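A minimal local sketch for the placeholder failure shown in the last stack trace above: the route resolves the {{broker}} and {{queue}} placeholders from configuration, so one way to supply them when you are not mounting the ceq-app ConfigMap is to append them to application.properties. The broker address and queue name below are assumptions only; point them at your own AMQP 1.0 endpoint:
# assumed values only; replace <broker-host> and the queue name with your own broker details
cat >> src/main/resources/application.properties <<'EOF'
broker=amqp://<broker-host>:61616
queue=qtest
EOF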
Chapter 2. Preparing the hub cluster for GitOps ZTP | Chapter 2. Preparing the hub cluster for GitOps ZTP To use RHACM in a disconnected environment, create a mirror registry that mirrors the OpenShift Container Platform release images and Operator Lifecycle Manager (OLM) catalog that contains the required Operator images. OLM manages, installs, and upgrades Operators and their dependencies in the cluster. You can also use a disconnected mirror host to serve the RHCOS ISO and RootFS disk images that are used to provision the bare-metal hosts. 2.1. Telco RAN DU 4.16 validated software components The Red Hat telco RAN DU 4.16 solution has been validated using the following Red Hat software products for OpenShift Container Platform managed clusters and hub clusters. Table 2.1. Telco RAN DU managed cluster validated software components Component Software version Managed cluster version 4.16 Cluster Logging Operator 5.9 Local Storage Operator 4.16 PTP Operator 4.16 SRIOV Operator 4.16 Node Tuning Operator 4.16 Logging Operator 4.16 SRIOV-FEC Operator 2.9 Table 2.2. Hub cluster validated software components Component Software version Hub cluster version 4.16 GitOps ZTP plugin 4.16 Red Hat Advanced Cluster Management (RHACM) 2.10, 2.11 Red Hat OpenShift GitOps 1.12 Topology Aware Lifecycle Manager (TALM) 4.16 2.2. Recommended hub cluster specifications and managed cluster limits for GitOps ZTP With GitOps Zero Touch Provisioning (ZTP), you can manage thousands of clusters in geographically dispersed regions and networks. The Red Hat Performance and Scale lab successfully created and managed 3500 virtual single-node OpenShift clusters with a reduced DU profile from a single Red Hat Advanced Cluster Management (RHACM) hub cluster in a lab environment. In real-world situations, the scaling limits for the number of clusters that you can manage will vary depending on various factors affecting the hub cluster. For example: Hub cluster resources Available hub cluster host resources (CPU, memory, storage) are an important factor in determining how many clusters the hub cluster can manage. The more resources allocated to the hub cluster, the more managed clusters it can accommodate. Hub cluster storage The hub cluster host storage IOPS rating and whether the hub cluster hosts use NVMe storage can affect hub cluster performance and the number of clusters it can manage. Network bandwidth and latency Slow or high-latency network connections between the hub cluster and managed clusters can impact how the hub cluster manages multiple clusters. Managed cluster size and complexity The size and complexity of the managed clusters also affects the capacity of the hub cluster. Larger managed clusters with more nodes, namespaces, and resources require additional processing and management resources. Similarly, clusters with complex configurations such as the RAN DU profile or diverse workloads can require more resources from the hub cluster. Number of managed policies The number of policies managed by the hub cluster scaled over the number of managed clusters bound to those policies is an important factor that determines how many clusters can be managed. Monitoring and management workloads RHACM continuously monitors and manages the managed clusters. The number and complexity of monitoring and management workloads running on the hub cluster can affect its capacity. Intensive monitoring or frequent reconciliation operations can require additional resources, potentially limiting the number of manageable clusters. 
RHACM version and configuration Different versions of RHACM can have varying performance characteristics and resource requirements. Additionally, the configuration settings of RHACM, such as the number of concurrent reconciliations or the frequency of health checks, can affect the managed cluster capacity of the hub cluster. Use the following representative configuration and network specifications to develop your own Hub cluster and network specifications. Important The following guidelines are based on internal lab benchmark testing only and do not represent complete bare-metal host specifications. Table 2.3. Representative three-node hub cluster machine specifications Requirement Description Server hardware 3 x Dell PowerEdge R650 rack servers NVMe hard disks 50 GB disk for /var/lib/etcd 2.9 TB disk for /var/lib/containers SSD hard disks 1 SSD split into 15 200GB thin-provisioned logical volumes provisioned as PV CRs 1 SSD serving as an extra large PV resource Number of applied DU profile policies 5 Important The following network specifications are representative of a typical real-world RAN network and were applied to the scale lab environment during testing. Table 2.4. Simulated lab environment network specifications Specification Description Round-trip time (RTT) latency 50 ms Packet loss 0.02% packet loss Network bandwidth limit 20 Mbps Additional resources Creating and managing single-node OpenShift clusters with RHACM 2.3. Installing GitOps ZTP in a disconnected environment Use Red Hat Advanced Cluster Management (RHACM), Red Hat OpenShift GitOps, and Topology Aware Lifecycle Manager (TALM) on the hub cluster in the disconnected environment to manage the deployment of multiple managed clusters. Prerequisites You have installed the OpenShift Container Platform CLI ( oc ). You have logged in as a user with cluster-admin privileges. You have configured a disconnected mirror registry for use in the cluster. Note The disconnected mirror registry that you create must contain a version of TALM backup and pre-cache images that matches the version of TALM running in the hub cluster. The spoke cluster must be able to resolve these images in the disconnected mirror registry. Procedure Install RHACM in the hub cluster. See Installing RHACM in a disconnected environment . Install GitOps and TALM in the hub cluster. Additional resources Installing OpenShift GitOps Installing TALM Mirroring an Operator catalog 2.4. Adding RHCOS ISO and RootFS images to the disconnected mirror host Before you begin installing clusters in the disconnected environment with Red Hat Advanced Cluster Management (RHACM), you must first host Red Hat Enterprise Linux CoreOS (RHCOS) images for it to use. Use a disconnected mirror to host the RHCOS images. Prerequisites Deploy and configure an HTTP server to host the RHCOS image resources on the network. You must be able to access the HTTP server from your computer, and from the machines that you create. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. You require ISO and RootFS images to install RHCOS on the hosts. RHCOS QCOW2 images are not supported for this installation type. Procedure Log in to the mirror host. 
Obtain the RHCOS ISO and RootFS images from mirror.openshift.com , for example: Export the required image names and OpenShift Container Platform version as environment variables: USD export ISO_IMAGE_NAME=<iso_image_name> 1 USD export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1 USD export OCP_VERSION=<ocp_version> 1 1 ISO image name, for example, rhcos-4.16.1-x86_64-live.x86_64.iso 1 RootFS image name, for example, rhcos-4.16.1-x86_64-live-rootfs.x86_64.img 1 OpenShift Container Platform version, for example, 4.16.1 Download the required images: USD sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.16/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME} USD sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.16/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME} Verification steps Verify that the images downloaded successfully and are being served on the disconnected mirror host, for example: USD wget http://USD(hostname)/USD{ISO_IMAGE_NAME} Example output Saving to: rhcos-4.16.1-x86_64-live.x86_64.iso rhcos-4.16.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s Additional resources Creating a mirror registry Mirroring images for a disconnected installation 2.5. Enabling the assisted service Red Hat Advanced Cluster Management (RHACM) uses the assisted service to deploy OpenShift Container Platform clusters. The assisted service is deployed automatically when you enable the MultiClusterHub Operator on Red Hat Advanced Cluster Management (RHACM). After that, you need to configure the Provisioning resource to watch all namespaces and to update the AgentServiceConfig custom resource (CR) with references to the ISO and RootFS images that are hosted on the mirror registry HTTP server. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have RHACM with MultiClusterHub enabled. Procedure Enable the Provisioning resource to watch all namespaces and configure mirrors for disconnected environments. For more information, see Enabling the central infrastructure management service . Update the AgentServiceConfig CR by running the following command: USD oc edit AgentServiceConfig Add the following entry to the items.spec.osImages field in the CR: - cpuArchitecture: x86_64 openshiftVersion: "4.16" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url: https://<host>/<path>/rhcos-live.x86_64.iso where: <host> Is the fully qualified domain name (FQDN) for the target mirror registry HTTP server. <path> Is the path to the image on the target mirror registry. Save and quit the editor to apply the changes. 2.6. Configuring the hub cluster to use a disconnected mirror registry You can configure the hub cluster to use a disconnected mirror registry for a disconnected environment. Prerequisites You have a disconnected hub cluster installation with Red Hat Advanced Cluster Management (RHACM) 2.11 installed. You have hosted the rootfs and iso images on an HTTP server. See the Additional resources section for guidance about Mirroring the OpenShift Container Platform image repository . Warning If you enable TLS for the HTTP server, you must confirm the root certificate is signed by an authority trusted by the client and verify the trusted certificate chain between your OpenShift Container Platform hub and managed clusters and the HTTP server. 
Using a server configured with an untrusted certificate prevents the images from being downloaded to the image creation service. Using untrusted HTTPS servers is not supported. Procedure Create a ConfigMap containing the mirror registry config: apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: multicluster-engine 1 labels: app: assisted-service data: ca-bundle.crt: | 2 -----BEGIN CERTIFICATE----- <certificate_contents> -----END CERTIFICATE----- registries.conf: | 3 unqualified-search-registries = ["registry.access.redhat.com", "docker.io"] [[registry]] prefix = "" location = "quay.io/example-repository" 4 mirror-by-digest-only = true [[registry.mirror]] location = "mirror1.registry.corp.com:5000/example-repository" 5 1 The ConfigMap namespace must be set to multicluster-engine . 2 The mirror registry's certificate that is used when creating the mirror registry. 3 The configuration file for the mirror registry. The mirror registry configuration adds mirror information to the /etc/containers/registries.conf file in the discovery image. The mirror information is stored in the imageContentSources section of the install-config.yaml file when the information is passed to the installation program. The Assisted Service pod that runs on the hub cluster fetches the container images from the configured mirror registry. 4 The URL of the mirror registry. You must use the URL from the imageContentSources section by running the oc adm release mirror command when you configure the mirror registry. For more information, see the Mirroring the OpenShift Container Platform image repository section. 5 The registries defined in the registries.conf file must be scoped by repository, not by registry. In this example, both the quay.io/example-repository and the mirror1.registry.corp.com:5000/example-repository repositories are scoped by the example-repository repository. This updates mirrorRegistryRef in the AgentServiceConfig custom resource, as shown below: Example output apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: multicluster-engine 1 spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: assisted-installer-mirror-config 2 osImages: - openshiftVersion: <ocp_version> 3 url: <iso_url> 4 1 Set the AgentServiceConfig namespace to multicluster-engine to match the ConfigMap namespace. 2 Set mirrorRegistryRef.name to match the definition specified in the related ConfigMap CR. 3 Set the OpenShift Container Platform version to either the x.y or x.y.z format. 4 Set the URL for the ISO hosted on the httpd server. Important A valid NTP server is required during cluster installation. Ensure that a suitable NTP server is available and can be reached from the installed clusters through the disconnected network. Additional resources Mirroring the OpenShift Container Platform image repository 2.7. Configuring the hub cluster to use unauthenticated registries You can configure the hub cluster to use unauthenticated registries. Unauthenticated registries does not require authentication to access and download images. Prerequisites You have installed and configured a hub cluster and installed Red Hat Advanced Cluster Management (RHACM) on the hub cluster. You have installed the OpenShift Container Platform CLI (oc). 
You have logged in as a user with cluster-admin privileges. You have configured an unauthenticated registry for use with the hub cluster. Procedure Update the AgentServiceConfig custom resource (CR) by running the following command: USD oc edit AgentServiceConfig agent Add the unauthenticatedRegistries field in the CR: apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: unauthenticatedRegistries: - example.registry.com - example.registry2.com ... Unauthenticated registries are listed under spec.unauthenticatedRegistries in the AgentServiceConfig resource. Any registry on this list is not required to have an entry in the pull secret used for the spoke cluster installation. assisted-service validates the pull secret by making sure it contains the authentication information for every image registry used for installation. Note Mirror registries are automatically added to the ignore list and do not need to be added under spec.unauthenticatedRegistries . Specifying the PUBLIC_CONTAINER_REGISTRIES environment variable in the ConfigMap overrides the default values with the specified value. The PUBLIC_CONTAINER_REGISTRIES defaults are quay.io and registry.svc.ci.openshift.org . Verification Verify that you can access the newly added registry from the hub cluster by running the following commands: Open a debug shell prompt to the hub cluster: USD oc debug node/<node_name> Test access to the unauthenticated registry by running the following command: sh-4.4# podman login -u kubeadmin -p USD(oc whoami -t) <unauthenticated_registry> where: <unauthenticated_registry> Is the new registry, for example, unauthenticated-image-registry.openshift-image-registry.svc:5000 . Example output Login Succeeded! 2.8. Configuring the hub cluster with ArgoCD You can configure the hub cluster with a set of ArgoCD applications that generate the required installation and policy custom resources (CRs) for each site with GitOps Zero Touch Provisioning (ZTP). Note Red Hat Advanced Cluster Management (RHACM) uses SiteConfig CRs to generate the Day 1 managed cluster installation CRs for ArgoCD. Each ArgoCD application can manage a maximum of 300 SiteConfig CRs. Prerequisites You have a OpenShift Container Platform hub cluster with Red Hat Advanced Cluster Management (RHACM) and Red Hat OpenShift GitOps installed. You have extracted the reference deployment from the GitOps ZTP plugin container as described in the "Preparing the GitOps ZTP site configuration repository" section. Extracting the reference deployment creates the out/argocd/deployment directory referenced in the following procedure. Procedure Prepare the ArgoCD pipeline configuration: Create a Git repository with the directory structure similar to the example directory. For more information, see "Preparing the GitOps ZTP site configuration repository". Configure access to the repository using the ArgoCD UI. Under Settings configure the following: Repositories - Add the connection information. The URL must end in .git , for example, https://repo.example.com/repo.git and credentials. Certificates - Add the public certificate for the repository, if needed. Modify the two ArgoCD applications, out/argocd/deployment/clusters-app.yaml and out/argocd/deployment/policies-app.yaml , based on your Git repository: Update the URL to point to the Git repository. The URL ends with .git , for example, https://repo.example.com/repo.git . The targetRevision indicates which Git repository branch to monitor. 
path specifies the path to the SiteConfig and PolicyGenerator or PolicyGentemplate CRs, respectively. To install the GitOps ZTP plugin, patch the ArgoCD instance in the hub cluster with the relevant multicluster engine (MCE) subscription image. Customize the patch file that you previously extracted into the out/argocd/deployment/ directory for your environment. Select the multicluster-operators-subscription image that matches your RHACM version. For RHACM 2.8 and 2.9, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel8:v<rhacm_version> image. For RHACM 2.10 and later, use the registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v<rhacm_version> image. Important The version of the multicluster-operators-subscription image must match the RHACM version. Beginning with the MCE 2.10 release, RHEL 9 is the base image for multicluster-operators-subscription images. Click [Expand for Operator list] in the "Platform Aligned Operators" table in OpenShift Operator Life Cycles to view the complete supported Operators matrix for OpenShift Container Platform. Modify the out/argocd/deployment/argocd-openshift-gitops-patch.json file with the multicluster-operators-subscription image that matches your RHACM version: { "args": [ "-c", "mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator" 1 ], "command": [ "/bin/bash" ], "image": "registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10", 2 3 "name": "policy-generator-install", "imagePullPolicy": "Always", "volumeMounts": [ { "mountPath": "/.config", "name": "kustomize" } ] } 1 Optional: For RHEL 9 images, copy the required universal executable in the /policy-generator/PolicyGenerator-not-fips-compliant folder for the ArgoCD version. 2 Match the multicluster-operators-subscription image to the RHACM version. 3 In disconnected environments, replace the URL for the multicluster-operators-subscription image with the disconnected registry equivalent for your environment. Patch the ArgoCD instance. Run the following command: USD oc patch argocd openshift-gitops \ -n openshift-gitops --type=merge \ --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json In RHACM 2.7 and later, the multicluster engine enables the cluster-proxy-addon feature by default. Apply the following patch to disable the cluster-proxy-addon feature and remove the relevant hub cluster and managed pods that are responsible for this add-on. 
Run the following command: USD oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json Apply the pipeline configuration to your hub cluster by running the following command: USD oc apply -k out/argocd/deployment Optional: If you have existing ArgoCD applications, verify that the PrunePropagationPolicy=background policy is set in the Application resource by running the following command: USD oc -n openshift-gitops get applications.argoproj.io \ clusters -o jsonpath='{.spec.syncPolicy.syncOptions}' |jq Example output for an existing policy [ "CreateNamespace=true", "PrunePropagationPolicy=background", "RespectIgnoreDifferences=true" ] If the spec.syncPolicy.syncOption field does not contain a PrunePropagationPolicy parameter or PrunePropagationPolicy is set to the foreground value, set the policy to background in the Application resource. See the following example: kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background Setting the background deletion policy ensures that the ManagedCluster CR and all its associated resources are deleted. 2.9. Preparing the GitOps ZTP site configuration repository Before you can use the GitOps Zero Touch Provisioning (ZTP) pipeline, you need to prepare the Git repository to host the site configuration data. Prerequisites You have configured the hub cluster GitOps applications for generating the required installation and policy custom resources (CRs). You have deployed the managed clusters using GitOps ZTP. Procedure Create a directory structure with separate paths for the SiteConfig and PolicyGenerator or PolicyGentemplate CRs. Note Keep SiteConfig and PolicyGenerator or PolicyGentemplate CRs in separate directories. Both the SiteConfig and PolicyGenerator or PolicyGentemplate directories must contain a kustomization.yaml file that explicitly includes the files in that directory. Export the argocd directory from the ztp-site-generate container image using the following commands: USD podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 USD mkdir -p ./out USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./out Check that the out directory contains the following subdirectories: out/extra-manifest contains the source CR files that SiteConfig uses to generate extra manifest configMap . out/source-crs contains the source CR files that PolicyGenerator uses to generate the Red Hat Advanced Cluster Management (RHACM) policies. out/argocd/deployment contains patches and YAML files to apply on the hub cluster for use in the step of this procedure. out/argocd/example contains the examples for SiteConfig and PolicyGenerator or PolicyGentemplate files that represent the recommended configuration. Copy the out/source-crs folder and contents to the PolicyGenerator or PolicyGentemplate directory. The out/extra-manifests directory contains the reference manifests for a RAN DU cluster. Copy the out/extra-manifests directory into the SiteConfig folder. This directory should contain CRs from the ztp-site-generate container only. Do not add user-provided CRs here. If you want to work with user-provided CRs you must create another directory for that content. 
For example: example/ βββ acmpolicygenerator β βββ kustomization.yaml β βββ source-crs/ βββ policygentemplates 1 β βββ kustomization.yaml β βββ source-crs/ βββ siteconfig βββ extra-manifests βββ kustomization.yaml 1 Using PolicyGenTemplate CRs to manage and deploy polices to manage clusters will be deprecated in a future OpenShift Container Platform release. Equivalent and improved functionality is available by using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs. Commit the directory structure and the kustomization.yaml files and push to your Git repository. The initial push to Git should include the kustomization.yaml files. You can use the directory structure under out/argocd/example as a reference for the structure and content of your Git repository. That structure includes SiteConfig and PolicyGenerator or PolicyGentemplate reference CRs for single-node, three-node, and standard clusters. Remove references to cluster types that you are not using. For all cluster types, you must: Add the source-crs subdirectory to the acmpolicygenerator or policygentemplates directory. Add the extra-manifests directory to the siteconfig directory. The following example describes a set of CRs for a network of single-node clusters: example/ βββ acmpolicygenerator β βββ acm-common-ranGen.yaml β βββ acm-example-sno-site.yaml β βββ acm-group-du-sno-ranGen.yaml β βββ group-du-sno-validator-ranGen.yaml β βββ kustomization.yaml β βββ source-crs/ β βββ ns.yaml βββ siteconfig βββ example-sno.yaml βββ extra-manifests/ 1 βββ custom-manifests/ 2 βββ KlusterletAddonConfigOverride.yaml βββ kustomization.yaml 1 Contains reference manifests from the ztp-container . 2 Contains custom manifests. Important Using PolicyGenTemplate CRs to manage and deploy polices to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs. For more information about PolicyGenerator resources, see the RHACM Policy Generator documentation. Additional resources Configuring managed cluster policies by using PolicyGenerator resources Comparing RHACM PolicyGenerator and PolicyGenTemplate resource patching 2.10. Preparing the GitOps ZTP site configuration repository for version independence You can use GitOps ZTP to manage source custom resources (CRs) for managed clusters that are running different versions of OpenShift Container Platform. This means that the version of OpenShift Container Platform running on the hub cluster can be independent of the version running on the managed clusters. Note The following procedure assumes you are using PolicyGenerator resources instead of PolicyGentemplate resources for cluster policies management. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Create a directory structure with separate paths for the SiteConfig and PolicyGenerator CRs. Within the PolicyGenerator directory, create a directory for each OpenShift Container Platform version you want to make available. For each version, create the following resources: kustomization.yaml file that explicitly includes the files in that directory source-crs directory to contain reference CR configuration files from the ztp-site-generate container If you want to work with user-provided CRs, you must create a separate directory for them. 
In the /siteconfig directory, create a subdirectory for each OpenShift Container Platform version you want to make available. For each version, create at least one directory for reference CRs to be copied from the container. There is no restriction on the naming of directories or on the number of reference directories. If you want to work with custom manifests, you must create a separate directory for them. The following example describes a structure using user-provided manifests and CRs for different versions of OpenShift Container Platform: βββ acmpolicygenerator β βββ kustomization.yaml 1 β βββ version_4.13 2 β β βββ common-ranGen.yaml β β βββ group-du-sno-ranGen.yaml β β βββ group-du-sno-validator-ranGen.yaml β β βββ helix56-v413.yaml β β βββ kustomization.yaml 3 β β βββ ns.yaml β β βββ source-crs/ 4 β β βββ reference-crs/ 5 β β βββ custom-crs/ 6 β βββ version_4.14 7 β βββ common-ranGen.yaml β βββ group-du-sno-ranGen.yaml β βββ group-du-sno-validator-ranGen.yaml β βββ helix56-v414.yaml β βββ kustomization.yaml 8 β βββ ns.yaml β βββ source-crs/ 9 β βββ reference-crs/ 10 β βββ custom-crs/ 11 βββ siteconfig βββ kustomization.yaml βββ version_4.13 β βββ helix56-v413.yaml β βββ kustomization.yaml β βββ extra-manifest/ 12 β βββ custom-manifest/ 13 βββ version_4.14 βββ helix57-v414.yaml βββ kustomization.yaml βββ extra-manifest/ 14 βββ custom-manifest/ 15 1 Create a top-level kustomization YAML file. 2 7 Create the version-specific directories within the custom /acmpolicygenerator directory. 3 8 Create a kustomization.yaml file for each version. 4 9 Create a source-crs directory for each version to contain reference CRs from the ztp-site-generate container. 5 10 Create the reference-crs directory for policy CRs that are extracted from the ZTP container. 6 11 Optional: Create a custom-crs directory for user-provided CRs. 12 14 Create a directory within the custom /siteconfig directory to contain extra manifests from the ztp-site-generate container. 13 15 Create a folder to hold user-provided manifests. Note In the example, each version subdirectory in the custom /siteconfig directory contains two further subdirectories, one containing the reference manifests copied from the container, the other for custom manifests that you provide. The names assigned to those directories are examples. If you use user-provided CRs, the last directory listed under extraManifests.searchPaths in the SiteConfig CR must be the directory containing user-provided CRs. Edit the SiteConfig CR to include the search paths of any directories you have created. The first directory that is listed under extraManifests.searchPaths must be the directory containing the reference manifests. Consider the order in which the directories are listed. In cases where directories contain files with the same name, the file in the final directory takes precedence. Example SiteConfig CR extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2 1 The directory containing the reference manifests must be listed first under extraManifests.searchPaths . 2 If you are using user-provided CRs, the last directory listed under extraManifests.searchPaths in the SiteConfig CR must be the directory containing those user-provided CRs. Edit the top-level kustomization.yaml file to control which OpenShift Container Platform versions are active. The following is an example of a kustomization.yaml file at the top level: resources: - version_4.13 1 #- version_4.14 2 1 Activate version 4.13. 2 Use comments to deactivate a version. | [
"export ISO_IMAGE_NAME=<iso_image_name> 1",
"export ROOTFS_IMAGE_NAME=<rootfs_image_name> 1",
"export OCP_VERSION=<ocp_version> 1",
"sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.16/USD{OCP_VERSION}/USD{ISO_IMAGE_NAME} -O /var/www/html/USD{ISO_IMAGE_NAME}",
"sudo wget https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.16/USD{OCP_VERSION}/USD{ROOTFS_IMAGE_NAME} -O /var/www/html/USD{ROOTFS_IMAGE_NAME}",
"wget http://USD(hostname)/USD{ISO_IMAGE_NAME}",
"Saving to: rhcos-4.16.1-x86_64-live.x86_64.iso rhcos-4.16.1-x86_64-live.x86_64.iso- 11%[====> ] 10.01M 4.71MB/s",
"oc edit AgentServiceConfig",
"- cpuArchitecture: x86_64 openshiftVersion: \"4.16\" rootFSUrl: https://<host>/<path>/rhcos-live-rootfs.x86_64.img url: https://<host>/<path>/rhcos-live.x86_64.iso",
"apiVersion: v1 kind: ConfigMap metadata: name: assisted-installer-mirror-config namespace: multicluster-engine 1 labels: app: assisted-service data: ca-bundle.crt: | 2 -----BEGIN CERTIFICATE----- <certificate_contents> -----END CERTIFICATE----- registries.conf: | 3 unqualified-search-registries = [\"registry.access.redhat.com\", \"docker.io\"] [[registry]] prefix = \"\" location = \"quay.io/example-repository\" 4 mirror-by-digest-only = true [[registry.mirror]] location = \"mirror1.registry.corp.com:5000/example-repository\" 5",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent namespace: multicluster-engine 1 spec: databaseStorage: volumeName: <db_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <db_storage_size> filesystemStorage: volumeName: <fs_pv_name> accessModes: - ReadWriteOnce resources: requests: storage: <fs_storage_size> mirrorRegistryRef: name: assisted-installer-mirror-config 2 osImages: - openshiftVersion: <ocp_version> 3 url: <iso_url> 4",
"oc edit AgentServiceConfig agent",
"apiVersion: agent-install.openshift.io/v1beta1 kind: AgentServiceConfig metadata: name: agent spec: unauthenticatedRegistries: - example.registry.com - example.registry2.com",
"oc debug node/<node_name>",
"sh-4.4# podman login -u kubeadmin -p USD(oc whoami -t) <unauthenticated_registry>",
"Login Succeeded!",
"{ \"args\": [ \"-c\", \"mkdir -p /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator && cp /policy-generator/PolicyGenerator-not-fips-compliant /.config/kustomize/plugin/policy.open-cluster-management.io/v1/policygenerator/PolicyGenerator\" 1 ], \"command\": [ \"/bin/bash\" ], \"image\": \"registry.redhat.io/rhacm2/multicluster-operators-subscription-rhel9:v2.10\", 2 3 \"name\": \"policy-generator-install\", \"imagePullPolicy\": \"Always\", \"volumeMounts\": [ { \"mountPath\": \"/.config\", \"name\": \"kustomize\" } ] }",
"oc patch argocd openshift-gitops -n openshift-gitops --type=merge --patch-file out/argocd/deployment/argocd-openshift-gitops-patch.json",
"oc patch multiclusterengines.multicluster.openshift.io multiclusterengine --type=merge --patch-file out/argocd/deployment/disable-cluster-proxy-addon.json",
"oc apply -k out/argocd/deployment",
"oc -n openshift-gitops get applications.argoproj.io clusters -o jsonpath='{.spec.syncPolicy.syncOptions}' |jq",
"[ \"CreateNamespace=true\", \"PrunePropagationPolicy=background\", \"RespectIgnoreDifferences=true\" ]",
"kind: Application spec: syncPolicy: syncOptions: - PrunePropagationPolicy=background",
"podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16",
"mkdir -p ./out",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.16 extract /home/ztp --tar | tar x -C ./out",
"example/ βββ acmpolicygenerator β βββ kustomization.yaml β βββ source-crs/ βββ policygentemplates 1 β βββ kustomization.yaml β βββ source-crs/ βββ siteconfig βββ extra-manifests βββ kustomization.yaml",
"example/ βββ acmpolicygenerator β βββ acm-common-ranGen.yaml β βββ acm-example-sno-site.yaml β βββ acm-group-du-sno-ranGen.yaml β βββ group-du-sno-validator-ranGen.yaml β βββ kustomization.yaml β βββ source-crs/ β βββ ns.yaml βββ siteconfig βββ example-sno.yaml βββ extra-manifests/ 1 βββ custom-manifests/ 2 βββ KlusterletAddonConfigOverride.yaml βββ kustomization.yaml",
"βββ acmpolicygenerator β βββ kustomization.yaml 1 β βββ version_4.13 2 β β βββ common-ranGen.yaml β β βββ group-du-sno-ranGen.yaml β β βββ group-du-sno-validator-ranGen.yaml β β βββ helix56-v413.yaml β β βββ kustomization.yaml 3 β β βββ ns.yaml β β βββ source-crs/ 4 β β βββ reference-crs/ 5 β β βββ custom-crs/ 6 β βββ version_4.14 7 β βββ common-ranGen.yaml β βββ group-du-sno-ranGen.yaml β βββ group-du-sno-validator-ranGen.yaml β βββ helix56-v414.yaml β βββ kustomization.yaml 8 β βββ ns.yaml β βββ source-crs/ 9 β βββ reference-crs/ 10 β βββ custom-crs/ 11 βββ siteconfig βββ kustomization.yaml βββ version_4.13 β βββ helix56-v413.yaml β βββ kustomization.yaml β βββ extra-manifest/ 12 β βββ custom-manifest/ 13 βββ version_4.14 βββ helix57-v414.yaml βββ kustomization.yaml βββ extra-manifest/ 14 βββ custom-manifest/ 15",
"extraManifests: searchPaths: - extra-manifest/ 1 - custom-manifest/ 2",
"resources: - version_4.13 1 #- version_4.14 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/edge_computing/ztp-preparing-the-hub-cluster |
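A short verification sketch for the hub-cluster preparation above. It uses only resource names that appear in the preceding steps (the agent AgentServiceConfig instance and the openshift-gitops namespace) and assumes jq is available, as in the earlier sync-policy example; it is a convenience check, not part of the documented procedure:
# confirm the mirrored RHCOS ISO and RootFS entries were applied to the assisted service
oc get agentserviceconfig agent -o jsonpath='{.spec.osImages}' | jq
# confirm the ArgoCD applications created by "oc apply -k out/argocd/deployment" exist and report their sync status
oc -n openshift-gitops get applications.argoproj.io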
Chapter 34. KIE Server ZIP file installation and configuration | Chapter 34. KIE Server ZIP file installation and configuration You can install KIE Server using the rhpam-7.13.5-kie-server-jws.zip file available from the Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) file on the Customer Portal . 34.1. Installing KIE Server from ZIP files KIE Server provides the runtime environment for business assets and accesses the data stored in the assets repository (knowledge store). You can use ZIP files to install KIE Server on an existing Red Hat JBoss Web Server 5.5.1 server instance. Note To use the installer JAR file to install KIE Server, see Chapter 33, Using the Red Hat Process Automation Manager installer . The Red Hat Process Automation Manager 7.13.5 Add Ons ( rhpam-7.13.5-add-ons.zip ) file has been downloaded, as described in Chapter 32, Downloading the Red Hat Process Automation Manager installation files . A backed-up Red Hat JBoss Web Server 5.5.1 server installation is available. The base directory of the Red Hat JBoss Web Server installation is referred to as JWS_HOME . Sufficient user permissions to complete the installation are granted. Procedure Extract the rhpam-7.13.5-add-ons.zip file. From the extracted rhpam-7.13.5-add-ons.zip file, extract the following files: rhpam-7.13.5-kie-server-jws.zip rhpam-7.13.5-process-engine.zip In the following instructions, the directory that contains the extracted rhpam-7.13.5-kie-server-jws.zip file is called JWS_TEMP_DIR and the directory that contains the extracted rhpam-7.13.5-process-engine.zip file is called ENGINE_TEMP_DIR . Copy the JWS_TEMP_DIR/rhpam-7.13.5-kie-server-jws/kie-server.war directory to the JWS_HOME /tomcat/webapps directory. Note Ensure the names of the Red Hat Decision Manager deployments you copy do not conflict with your existing deployments in the Red Hat JBoss Web Server instance. Remove the .war extensions from the kie-server.war folder. Move the kie-tomcat-integration-7.67.0.Final-redhat-00024.jar file from the ENGINE_TEMP_DIR directory to the JWS_HOME /tomcat/lib directory. Move the jboss-jacc-api-<VERSION>.jar , slf4j-api-<VERSION>.jar , and slf4j-jdk14-<VERSION>.jar files from the ENGINE_TEMP_DIR/lib directory to the JWS_HOME /tomcat/lib directory, where <VERSION> is the version artifact file name, in the lib directory. Add the following line to the <host> element in the JWS_HOME /tomcat/conf/server.xml file after the last Valve definition: Open the JWS_HOME /tomcat/conf/tomcat-users.xml file in a text editor. Add users and roles to the JWS_HOME /tomcat/conf/tomcat-users.xml file. In the following example, <ROLE_NAME> is a role supported by Red Hat Decision Manager. <USER_NAME> and <USER_PWD> are the user name and password of your choice: If a user has more than one role, as shown in the following example, separate the roles with a comma: Complete one of the following steps in the JWS_HOME /tomcat/bin directory: On Linux or UNIX, create the setenv.sh file with the following content: On Windows, add the following content to the setenv.bat file: | [
"<Valve className=\"org.kie.integration.tomcat.JACCValve\" />",
"<role rolename=\"<ROLE_NAME>\"/> <user username=\"<USER_NAME>\" password=\"<USER_PWD>\" roles=\"<ROLE_NAME>\"/>",
"<role rolename=\"admin\"/> <role rolename=\"kie-server\"/> <user username=\"rhpamUser\" password=\"user1234\" roles=\"admin,kie-server\"/>",
"CATALINA_OPTS=\"-Xmx1024m -Dorg.jboss.logging.provider=jdk -Dorg.jbpm.server.ext.disabled=true -Dorg.jbpm.ui.server.ext.disabled=true -Dorg.jbpm.case.server.ext.disabled=true\"",
"set CATALINA_OPTS=\"-Xmx1024m -Dorg.jboss.logging.provider=jdk -Dorg.jbpm.server.ext.disabled=true -Dorg.jbpm.ui.server.ext.disabled=true -Dorg.jbpm.case.server.ext.disabled=true"
] | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/kie_server_zip_file_installation_and_configuration |
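The copy and rename steps in the chapter above can be scripted. The following is only a sketch that assumes JWS_HOME, JWS_TEMP_DIR, and ENGINE_TEMP_DIR are already set as described and that both ZIP files have been extracted; file names other than those quoted in the text are not implied:
# copy the exploded kie-server.war directory and drop the .war extension
cp -r "$JWS_TEMP_DIR/rhpam-7.13.5-kie-server-jws/kie-server.war" "$JWS_HOME/tomcat/webapps/"
mv "$JWS_HOME/tomcat/webapps/kie-server.war" "$JWS_HOME/tomcat/webapps/kie-server"
# move the Tomcat integration JAR and the JACC/SLF4J JARs into tomcat/lib
mv "$ENGINE_TEMP_DIR/kie-tomcat-integration-7.67.0.Final-redhat-00024.jar" "$JWS_HOME/tomcat/lib/"
mv "$ENGINE_TEMP_DIR"/lib/jboss-jacc-api-*.jar "$ENGINE_TEMP_DIR"/lib/slf4j-api-*.jar "$ENGINE_TEMP_DIR"/lib/slf4j-jdk14-*.jar "$JWS_HOME/tomcat/lib/"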
Chapter 5. Preparing to perform an EUS-to-EUS update | Chapter 5. Preparing to perform an EUS-to-EUS update Due to fundamental Kubernetes design, all OpenShift Container Platform updates between minor versions must be serialized. You must update from OpenShift Container Platform 4.8 to 4.9 and then to 4.10. You cannot update from OpenShift Container Platform 4.8 to 4.10 directly. However, beginning with the update from OpenShift Container Platform 4.8 to 4.9 to 4.10, administrators who wish to update between two Extended Update Support (EUS) versions can do so incurring only a single reboot of non-control plane hosts. There are a number of caveats to consider when attempting an EUS-to-EUS update. EUS to EUS updates are only offered after updates between all versions involved have been made available in stable channels. If you encounter issues during or after upgrading to the odd-numbered minor version but before upgrading to the even-numbered version, then remediation of those issues may require that non-control plane hosts complete the update to the odd-numbered version before moving forward. You can complete the update process during multiple maintenance windows by pausing at intermediate steps. However, plan to complete the entire update within 60 days. This is critical to ensure that normal cluster automation processes are completed including those associated with certificate rotation. You must be running at least OpenShift Container Platform 4.8.14 before starting the EUS-to-EUS update procedure. If you do not meet this minimum requirement, update to a later 4.8.z before attempting the EUS-to-EUS update. Support for RHEL7 workers was removed in OpenShift Container Platform 4.10 and replaced with RHEL8 workers, therefore EUS to EUS updates are not available for clusters with RHEL7 workers. Node components are not updated to OpenShift Container Platform 4.9. Do not expect all features and bugs fixed in OpenShift Container Platform 4.9 to be made available until you complete the update to OpenShift Container Platform 4.10 and enable all MachineConfigPools to update. All the clusters might update using EUS channels for a conventional update without pools paused, but only clusters with non control-plane MachineConfigPools objects can do EUS-to-EUS update with pools paused. 5.1. EUS-to-EUS update The following procedure pauses all non-master MachineConfigPools and performs updates from OpenShift Container Platform 4.8 to 4.9 to 4.10, then unpauses the previously paused MachineConfigPools. Following this procedure reduces the total update duration and the number of times worker nodes are restarted. Prerequisites Review the release notes for OpenShift Container Platform 4.9 and 4.10 Review the release notes and product lifecycles for any layered products and OLM Operators. Some may require updates either before or during an EUS-to-EUS update. Ensure that you are familiar with version-specific prerequisites, such as administrator acknowledgement that is required prior to upgrading from OpenShift Container Platform 4.8 to 4.9. Verify that your cluster is running OpenShift Container Platform version 4.8.14 or later. If your cluster is running a version earlier than OpenShift Container Platform 4.8.14, you must update to a later 4.8.z version before updating to 4.9. The update to 4.8.14 or later is necessary to fulfill the minimum version requirements that must be performed without pausing MachineConfigPools. Verify that MachineConfigPools is unpaused. 
Procedure Upgrade any OLM Operators to versions that are compatible with both versions you are updating to. Verify that all MachineConfigPools display a status of UPDATED and no MachineConfigPools display a status of UPDATING . View the status of all MachineConfigPools, run the following command: USD oc get mcp Example output Output is trimmed for clarity: NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False Pause the MachineConfigPools you wish to skip reboots on, run the following commands: Note You cannot pause the master pool. USD oc patch mcp/worker --type merge --patch '{"spec":{"paused":true}}' Change to the eus-4.10 channel, run the following command: USD oc adm upgrade channel eus-4.10 Update to 4.9, run the following command: USD oc adm upgrade --to-latest Example output Updating to latest version 4.9.18 Review the cluster version to ensure that the updates are complete by running the following command: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.9.18 True False 6m29s Cluster version is 4.9.18 If necessary, upgrade OLM operators using the Administrator perspective on the web console. Update to 4.10, run the following command: USD oc adm upgrade --to-latest Review the cluster version to ensure that the updates are complete by running the following command: USD oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.10.1 True False 6m29s Cluster version is 4.10.1 Unpause all previously paused MachineConfigPools, run the following command: USD oc patch mcp/worker --type merge --patch '{"spec":{"paused":false}}' Note If pools are not unpaused, the cluster is not permitted to update to any future minors and maintenance tasks such as certificate rotation are inhibited. This puts the cluster at risk for future degradation. Verify that your previously paused pools have updated and your cluster completed the update to 4.10, run the following command: USD oc get mcp Example output Output is trimmed for clarity: NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False | [
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-ecbb9582781c1091e1c9f19d50cf836c True False worker rendered-worker-00a3f0c68ae94e747193156b491553d5 True False",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":true}}'",
"oc adm upgrade channel eus-4.10",
"oc adm upgrade --to-latest",
"Updating to latest version 4.9.18",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.9.18 True False 6m29s Cluster version is 4.9.18",
"oc adm upgrade --to-latest",
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.10.1 True False 6m29s Cluster version is 4.10.1",
"oc patch mcp/worker --type merge --patch '{\"spec\":{\"paused\":false}}'",
"oc get mcp",
"NAME CONFIG UPDATED UPDATING master rendered-master-52da4d2760807cb2b96a3402179a9a4c True False worker rendered-worker-4756f60eccae96fb9dcb4c392c69d497 True False"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/updating_clusters/preparing-eus-eus-upgrade |
Chapter 2. Deploy OpenShift Data Foundation using local storage devices | Chapter 2. Deploy OpenShift Data Foundation using local storage devices Use this section to deploy OpenShift Data Foundation on IBM Power infrastructure where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Find available storage devices . Create an OpenShift Data Foundation cluster on IBM Power . 2.1. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. 
Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. 2.3. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.4. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.5. Finding available storage devices Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating PVs for IBM Power. Procedure List and verify the name of the worker nodes with the OpenShift Data Foundation label. Example output: Log in to each worker node that is used for OpenShift Data Foundation resources and find the name of the additional disk that you have attached while deploying Openshift Container Platform. 
Example output: In this example, for worker-0, the available local devices of 500G are sda , sdc , sde , sdg , sdi , sdk , sdm , sdo . Repeat the above step for all the other worker nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details. 2.6. Creating OpenShift Data Foundation cluster on IBM Power Use this procedure to create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. You must have a minimum of three worker nodes with the same storage type and size attached to each node (for example, 200 GB SSD) to use local storage devices on IBM Power. Verify your OpenShift Container Platform worker nodes are labeled for OpenShift Data Foundation: To identify storage devices on each node, refer to Finding available storage devices . Procedure Log into the OpenShift Web Console. In openshift-local-storage namespace Click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for block PVs using the following YAML. The above definition selects sda local device from the worker-0 , worker-1 and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be same on all the worker nodes. You can also specify more than one devicePaths. Click Create . Confirm whether diskmaker-manager pods and Persistent Volumes are created. For Pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-local-storage from the Project drop-down list. Check if there are diskmaker-manager pods for each of the worker node that you used while creating LocalVolume CR. For Persistent Volumes Click Storage PersistentVolumes from the left pane of the OpenShift Web Console. Check the Persistent Volumes with the name local-pv-* . Number of Persistent Volumes will be equivalent to the product of number of worker nodes and number of storage devices provisioned while creating localVolume CR. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Select the required Storage Class that you used while installing LocalVolume. By default, it is set to none . Optional: Select Use Ceph RBD as the default StorageClass . This avoids having to manually annotate a StorageClass. 
Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. 
Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Select Default (OVN) network as Multus is not yet supported on OpenShift Data Foundation on IBM Power. Click . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Data Protection page, if you are configuring Regional-DR solution for Openshift Data Foundation then select the Prepare cluster for disaster recovery(Regional-DR only) checkbox, else click . In the Review and create page:: Review the configurations details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in spec section and failureDomain in status section. If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled. To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide. | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault token create -policy=odf -format json",
"oc -n openshift-storage create serviceaccount <serviceaccount_name>",
"oc -n openshift-storage create serviceaccount odf-vault-auth",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_",
"oc -n openshift-storage create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth",
"cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: openshift-storage annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF",
"SA_JWT_TOKEN=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n openshift-storage get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)",
"OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")",
"oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid",
"vault auth enable kubernetes",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"",
"vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"",
"vault secrets enable -path=odf kv",
"vault secrets enable -path=odf kv-v2",
"echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -",
"vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=openshift-storage policies=odf ttl=1440h",
"oc get nodes -l cluster.ocs.openshift.io/openshift-storage=",
"NAME STATUS ROLES AGE VERSION worker-0 Ready worker 2d11h v1.23.3+e419edf worker-1 Ready worker 2d11h v1.23.3+e419edf worker-2 Ready worker 2d11h v1.23.3+e419edf",
"oc debug node/<node name>",
"oc debug node/worker-0 Starting pod/worker-0-debug To use host binaries, run `chroot /host` Pod IP: 192.168.0.63 If you don't see a command prompt, try pressing enter. sh-4.4# sh-4.4# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop1 7:1 0 500G 0 loop sda 8:0 0 500G 0 disk sdb 8:16 0 120G 0 disk |-sdb1 8:17 0 4M 0 part |-sdb3 8:19 0 384M 0 part `-sdb4 8:20 0 119.6G 0 part sdc 8:32 0 500G 0 disk sdd 8:48 0 120G 0 disk |-sdd1 8:49 0 4M 0 part |-sdd3 8:51 0 384M 0 part `-sdd4 8:52 0 119.6G 0 part sde 8:64 0 500G 0 disk sdf 8:80 0 120G 0 disk |-sdf1 8:81 0 4M 0 part |-sdf3 8:83 0 384M 0 part `-sdf4 8:84 0 119.6G 0 part sdg 8:96 0 500G 0 disk sdh 8:112 0 120G 0 disk |-sdh1 8:113 0 4M 0 part |-sdh3 8:115 0 384M 0 part `-sdh4 8:116 0 119.6G 0 part sdi 8:128 0 500G 0 disk sdj 8:144 0 120G 0 disk |-sdj1 8:145 0 4M 0 part |-sdj3 8:147 0 384M 0 part `-sdj4 8:148 0 119.6G 0 part sdk 8:160 0 500G 0 disk sdl 8:176 0 120G 0 disk |-sdl1 8:177 0 4M 0 part |-sdl3 8:179 0 384M 0 part `-sdl4 8:180 0 119.6G 0 part /sysroot sdm 8:192 0 500G 0 disk sdn 8:208 0 120G 0 disk |-sdn1 8:209 0 4M 0 part |-sdn3 8:211 0 384M 0 part /boot `-sdn4 8:212 0 119.6G 0 part sdo 8:224 0 500G 0 disk sdp 8:240 0 120G 0 disk |-sdp1 8:241 0 4M 0 part |-sdp3 8:243 0 384M 0 part `-sdp4 8:244 0 119.6G 0 part",
"get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{\"\\n\"}'",
"apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Block",
"spec: flexibleScaling: true [...] status: failureDomain: host"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_ibm_power/deploy-using-local-storage-devices-ibm-power |
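Sections 2.5 and 2.6 assume that the worker nodes already carry the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage=''. If the nodes are not yet labeled, the label can be applied up front; this is only a sketch, and the node names worker-0, worker-1, and worker-2 are placeholders for your environment:
oc label nodes worker-0 worker-1 worker-2 cluster.ocs.openshift.io/openshift-storage=''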
Chapter 2. Recommended host practices for IBM Z & LinuxONE environments | Chapter 2. Recommended host practices for IBM Z & LinuxONE environments This topic provides recommended host practices for OpenShift Container Platform on IBM Z and LinuxONE. Note The s390x architecture is unique in many aspects. Therefore, some recommendations made here might not apply to other platforms. Note Unless stated otherwise, these practices apply to both z/VM and Red Hat Enterprise Linux (RHEL) KVM installations on IBM Z and LinuxONE. 2.1. Managing CPU overcommitment In a highly virtualized IBM Z environment, you must carefully plan the infrastructure setup and sizing. One of the most important features of virtualization is the capability to do resource overcommitment, allocating more resources to the virtual machines than actually available at the hypervisor level. This is very workload dependent and there is no golden rule that can be applied to all setups. Depending on your setup, consider these best practices regarding CPU overcommitment: At LPAR level (PR/SM hypervisor), avoid assigning all available physical cores (IFLs) to each LPAR. For example, with four physical IFLs available, you should not define three LPARs with four logical IFLs each. Check and understand LPAR shares and weights. An excessive number of virtual CPUs can adversely affect performance. Do not define more virtual processors to a guest than logical processors are defined to the LPAR. Configure the number of virtual processors per guest for peak workload, not more. Start small and monitor the workload. Increase the vCPU number incrementally if necessary. Not all workloads are suitable for high overcommitment ratios. If the workload is CPU intensive, you will probably not be able to achieve high ratios without performance problems. Workloads that are more I/O intensive can keep consistent performance even with high overcommitment ratios. Additional resources z/VM Common Performance Problems and Solutions z/VM overcommitment considerations LPAR CPU management 2.2. Disable Transparent Huge Pages Transparent Huge Pages (THP) attempt to automate most aspects of creating, managing, and using huge pages. Since THP automatically manages the huge pages, this is not always handled optimally for all types of workloads. THP can lead to performance regressions, since many applications handle huge pages on their own. Therefore, consider disabling THP. 2.3. Boost networking performance with Receive Flow Steering Receive Flow Steering (RFS) extends Receive Packet Steering (RPS) by further reducing network latency. RFS is technically based on RPS, and improves the efficiency of packet processing by increasing the CPU cache hit rate. RFS achieves this, and in addition considers queue length, by determining the most convenient CPU for computation so that cache hits are more likely to occur within the CPU. Thus, the CPU cache is invalidated less and requires fewer cycles to rebuild the cache. This can help reduce packet processing run time. 2.3.1. Use the Machine Config Operator (MCO) to activate RFS Procedure Copy the following MCO sample profile into a YAML file. 
For example, enable-rfs.yaml : apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-enable-rfs spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:text/plain;charset=US-ASCII,%23%20turn%20on%20Receive%20Flow%20Steering%20%28RFS%29%20for%20all%20network%20interfaces%0ASUBSYSTEM%3D%3D%22net%22%2C%20ACTION%3D%3D%22add%22%2C%20RUN%7Bprogram%7D%2B%3D%22/bin/bash%20-c%20%27for%20x%20in%20/sys/%24DEVPATH/queues/rx-%2A%3B%20do%20echo%208192%20%3E%20%24x/rps_flow_cnt%3B%20%20done%27%22%0A filesystem: root mode: 0644 path: /etc/udev/rules.d/70-persistent-net.rules - contents: source: data:text/plain;charset=US-ASCII,%23%20define%20sock%20flow%20enbtried%20for%20%20Receive%20Flow%20Steering%20%28RFS%29%0Anet.core.rps_sock_flow_entries%3D8192%0A filesystem: root mode: 0644 path: /etc/sysctl.d/95-enable-rps.conf Create the MCO profile: USD oc create -f enable-rfs.yaml Verify that an entry named 50-enable-rfs is listed: USD oc get mc To deactivate, enter: USD oc delete mc 50-enable-rfs Additional resources OpenShift Container Platform on IBM Z: Tune your network performance with RFS Configuring Receive Flow Steering (RFS) Scaling in the Linux Networking Stack 2.4. Choose your networking setup The networking stack is one of the most important components for a Kubernetes-based product like OpenShift Container Platform. For IBM Z setups, the networking setup depends on the hypervisor of your choice. Depending on the workload and the application, the best fit usually changes with the use case and the traffic pattern. Depending on your setup, consider these best practices: Consider all options regarding networking devices to optimize your traffic pattern. Explore the advantages of OSA-Express, RoCE Express, HiperSockets, z/VM VSwitch, Linux Bridge (KVM), and others to decide which option leads to the greatest benefit for your setup. Always use the latest available NIC version. For example, OSA Express 7S 10 GbE shows great improvement compared to OSA Express 6S 10 GbE with transactional workload types, although both are 10 GbE adapters. Each virtual switch adds an additional layer of latency. The load balancer plays an important role for network communication outside the cluster. Consider using a production-grade hardware load balancer if this is critical for your application. OpenShift Container Platform SDN introduces flows and rules, which impact the networking performance. Make sure to consider pod affinities and placements, to benefit from the locality of services where communication is critical. Balance the trade-off between performance and functionality. Additional resources OpenShift Container Platform on IBM Z - Performance Experiences, Hints and Tips OpenShift Container Platform on IBM Z Networking Performance Controlling pod placement on nodes using node affinity rules 2.5. Ensure high disk performance with HyperPAV on z/VM DASD and ECKD devices are commonly used disk types in IBM Z environments. In a typical OpenShift Container Platform setup in z/VM environments, DASD disks are commonly used to support the local storage for the nodes. You can set up HyperPAV alias devices to provide more throughput and overall better I/O performance for the DASD disks that support the z/VM guests. Using HyperPAV for the local storage devices leads to a significant performance benefit. However, you must be aware that there is a trade-off between throughput and CPU costs. 2.5.1. 
Use the Machine Config Operator (MCO) to activate HyperPAV aliases in nodes using z/VM full-pack minidisks For z/VM-based OpenShift Container Platform setups that use full-pack minidisks, you can leverage the advantage of MCO profiles by activating HyperPAV aliases in all of the nodes. You must add YAML configurations for both control plane and compute nodes. Procedure Copy the following MCO sample profile into a YAML file for the control plane node. For example, 05-master-kernelarg-hpav.yaml : USD cat 05-master-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-master-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805 Copy the following MCO sample profile into a YAML file for the compute node. For example, 05-worker-kernelarg-hpav.yaml : USD cat 05-worker-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805 Note You must modify the rd.dasd arguments to fit the device IDs. Create the MCO profiles: USD oc create -f 05-master-kernelarg-hpav.yaml USD oc create -f 05-worker-kernelarg-hpav.yaml To deactivate, enter: USD oc delete -f 05-master-kernelarg-hpav.yaml USD oc delete -f 05-worker-kernelarg-hpav.yaml Additional resources Using HyperPAV for ECKD DASD Scaling HyperPAV alias devices on Linux guests on z/VM 2.6. RHEL KVM on IBM Z host recommendations Optimizing a KVM virtual server environment strongly depends on the workloads of the virtual servers and on the available resources. The same action that enhances performance in one environment can have adverse effects in another. Finding the best balance for a particular setting can be a challenge and often involves experimentation. The following section introduces some best practices when using OpenShift Container Platform with RHEL KVM on IBM Z and LinuxONE environments. 2.6.1. Use I/O threads for your virtual block devices To make virtual block devices use I/O threads, you must configure one or more I/O threads for the virtual server and each virtual block device to use one of these I/O threads. The following example specifies <iothreads>3</iothreads> to configure three I/O threads, with consecutive decimal thread IDs 1, 2, and 3. The iothread="2" parameter specifies the driver element of the disk device to use the I/O thread with ID 2. Sample I/O thread specification ... <domain> <iothreads>3</iothreads> 1 ... <devices> ... <disk type="block" device="disk"> 2 <driver ... iothread="2"/> </disk> ... </devices> ... </domain> 1 The number of I/O threads. 2 The driver element of the disk device. Threads can increase the performance of I/O operations for disk devices, but they also use memory and CPU resources. You can configure multiple devices to use the same thread. The best mapping of threads to devices depends on the available resources and the workload. Start with a small number of I/O threads. Often, a single I/O thread for all disk devices is sufficient. Do not configure more threads than the number of virtual CPUs, and do not configure idle threads. You can use the virsh iothreadadd command to add I/O threads with specific thread IDs to a running virtual server. 2.6.2. 
Avoid virtual SCSI devices Configure virtual SCSI devices only if you need to address the device through SCSI-specific interfaces. Configure disk space as virtual block devices rather than virtual SCSI devices, regardless of the backing on the host. However, you might need SCSI-specific interfaces for: A LUN for a SCSI-attached tape drive on the host. A DVD ISO file on the host file system that is mounted on a virtual DVD drive. 2.6.3. Configure guest caching for disk Configure your disk devices to do caching by the guest and not by the host. Ensure that the driver element of the disk device includes the cache="none" and io="native" parameters. <disk type="block" device="disk"> <driver name="qemu" type="raw" cache="none" io="native" iothread="1"/> ... </disk> 2.6.4. Exclude the memory balloon device Unless you need a dynamic memory size, do not define a memory balloon device and ensure that libvirt does not create one for you. Include the memballoon parameter as a child of the devices element in your domain configuration XML file. For example: <memballoon model="none"/> 2.6.5. Tune the CPU migration algorithm of the host scheduler Important Do not change the scheduler settings unless you are an expert who understands the implications. Do not apply changes to production systems without testing them and confirming that they have the intended effect. The kernel.sched_migration_cost_ns parameter specifies a time interval in nanoseconds. After the last execution of a task, the CPU cache is considered to have useful content until this interval expires. Increasing this interval results in fewer task migrations. The default value is 500000 ns. If the CPU idle time is higher than expected when there are runnable processes, try reducing this interval. If tasks bounce between CPUs or nodes too often, try increasing it. To dynamically set the interval to 60000 ns, enter the following command: # sysctl kernel.sched_migration_cost_ns=60000 To persistently change the value to 60000 ns, add the following entry to /etc/sysctl.conf : kernel.sched_migration_cost_ns=60000 2.6.6. Disable the cpuset cgroup controller Note This setting applies only to KVM hosts with cgroups version 1. To enable CPU hotplug on the host, disable the cgroup controller. Procedure Open /etc/libvirt/qemu.conf with an editor of your choice. Go to the cgroup_controllers line. Duplicate the entire line and remove the leading number sign (#) from the copy. Remove the cpuset entry, as follows: cgroup_controllers = [ "cpu", "devices", "memory", "blkio", "cpuacct" ] For the new setting to take effect, you must restart the libvirtd daemon: Stop all virtual machines. Run the following command: # systemctl restart libvirtd Restart the virtual machines. This setting persists across host reboots. 2.6.7. Tune the polling period for idle virtual CPUs When a virtual CPU becomes idle, KVM polls for wakeup conditions for the virtual CPU before allocating the host resource. You can specify the time interval during which polling takes place in sysfs at /sys/module/kvm/parameters/halt_poll_ns . During the specified time, polling reduces the wakeup latency for the virtual CPU at the expense of resource usage. Depending on the workload, a longer or shorter time for polling can be beneficial. The time interval is specified in nanoseconds. The default is 50000 ns.
To optimize for low CPU consumption, enter a small value or write 0 to disable polling: # echo 0 > /sys/module/kvm/parameters/halt_poll_ns To optimize for low latency, for example for transactional workloads, enter a large value: # echo 80000 > /sys/module/kvm/parameters/halt_poll_ns Additional resources Linux on IBM Z Performance Tuning for KVM Getting started with virtualization on IBM Z | [
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 50-enable-rfs spec: config: ignition: version: 2.2.0 storage: files: - contents: source: data:text/plain;charset=US-ASCII,%23%20turn%20on%20Receive%20Flow%20Steering%20%28RFS%29%20for%20all%20network%20interfaces%0ASUBSYSTEM%3D%3D%22net%22%2C%20ACTION%3D%3D%22add%22%2C%20RUN%7Bprogram%7D%2B%3D%22/bin/bash%20-c%20%27for%20x%20in%20/sys/%24DEVPATH/queues/rx-%2A%3B%20do%20echo%208192%20%3E%20%24x/rps_flow_cnt%3B%20%20done%27%22%0A filesystem: root mode: 0644 path: /etc/udev/rules.d/70-persistent-net.rules - contents: source: data:text/plain;charset=US-ASCII,%23%20define%20sock%20flow%20enbtried%20for%20%20Receive%20Flow%20Steering%20%28RFS%29%0Anet.core.rps_sock_flow_entries%3D8192%0A filesystem: root mode: 0644 path: /etc/sysctl.d/95-enable-rps.conf",
"oc create -f enable-rfs.yaml",
"oc get mc",
"oc delete mc 50-enable-rfs",
"cat 05-master-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 05-master-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805",
"cat 05-worker-kernelarg-hpav.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-hpav spec: config: ignition: version: 3.1.0 kernelArguments: - rd.dasd=800-805",
"oc create -f 05-master-kernelarg-hpav.yaml",
"oc create -f 05-worker-kernelarg-hpav.yaml",
"oc delete -f 05-master-kernelarg-hpav.yaml",
"oc delete -f 05-worker-kernelarg-hpav.yaml",
"<domain> <iothreads>3</iothreads> 1 <devices> <disk type=\"block\" device=\"disk\"> 2 <driver ... iothread=\"2\"/> </disk> </devices> </domain>",
"<disk type=\"block\" device=\"disk\"> <driver name=\"qemu\" type=\"raw\" cache=\"none\" io=\"native\" iothread=\"1\"/> </disk>",
"<memballoon model=\"none\"/>",
"sysctl kernel.sched_migration_cost_ns=60000",
"kernel.sched_migration_cost_ns=60000",
"cgroup_controllers = [ \"cpu\", \"devices\", \"memory\", \"blkio\", \"cpuacct\" ]",
"systemctl restart libvirtd",
"echo 0 > /sys/module/kvm/parameters/halt_poll_ns",
"echo 80000 > /sys/module/kvm/parameters/halt_poll_ns"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/scalability_and_performance/ibm-z-recommended-host-practices |
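Section 2.2 recommends disabling Transparent Huge Pages but does not show the commands. A minimal sketch, assuming a RHEL host where THP is controlled through sysfs (check the current setting first; the active value is shown in brackets):
cat /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/enabled
The echo command disables THP only until the next reboot. To make the change persistent on cluster nodes, the kernel argument transparent_hugepage=never can be applied through a MachineConfig that sets kernelArguments, following the same MCO pattern shown in section 2.5.1.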
14.8.3. make_unicodemap | 14.8.3. make_unicodemap make_unicodemap <codepage_number> <inputfile> <outputfile> The make_unicodemap program compiles binary Unicode files from text files so Samba can display non-ASCII character sets. This obsolete program was part of the internationalization features of earlier versions of Samba; those features are now included in the current release of Samba. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-programs-make_unicodemap
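A hypothetical invocation with placeholder file names, compiling a binary map for codepage 850, would be:
make_unicodemap 850 CP850.TXT unicode_map.850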
Chapter 47. Bean Validator Component | Chapter 47. Bean Validator Component Available as of Camel version 2.3 The Validator component performs bean validation of the message body using the Java Bean Validation API ( JSR 303 ). Camel uses the reference implementation, which is Hibernate Validator . Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-bean-validator</artifactId> <version>x.y.z</version> <!-- use the same version as your Camel core version --> </dependency> 47.1. URI format bean-validator:label[?options] or bean-validator://label[?options] Where label is an arbitrary text value describing the endpoint. You can append query options to the URI in the following format, ?option=value&option=value&... 47.2. URI Options The Bean Validator component has no options. The Bean Validator endpoint is configured using URI syntax: with the following path and query parameters: 47.2.1. Path Parameters (1 parameters): Name Description Default Type label Required Where label is an arbitrary text value describing the endpoint String 47.2.2. Query Parameters (6 parameters): Name Description Default Type constraintValidatorFactory (producer) To use a custom ConstraintValidatorFactory ConstraintValidator Factory group (producer) To use a custom validation group javax.validation.groups.Default String messageInterpolator (producer) To use a custom MessageInterpolator MessageInterpolator traversableResolver (producer) To use a custom TraversableResolver TraversableResolver validationProviderResolver (producer) To use a a custom ValidationProviderResolver ValidationProvider Resolver synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 47.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.bean-validator.enabled Enable bean-validator component true Boolean camel.component.bean-validator.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 47.4. OSGi deployment To use Hibernate Validator in the OSGi environment use dedicated ValidationProviderResolver implementation, just as org.apache.camel.component.bean.validator.HibernateValidationProviderResolver . The snippet below demonstrates this approach. Keep in mind that you can use HibernateValidationProviderResolver starting from the Camel 2.13.0. Using HibernateValidationProviderResolver from("direct:test"). to("bean-validator://ValidationProviderResolverTest?validationProviderResolver=#myValidationProviderResolver"); ... <bean id="myValidationProviderResolver" class="org.apache.camel.component.bean.validator.HibernateValidationProviderResolver"/> If no custom ValidationProviderResolver is defined and the validator component has been deployed into the OSGi environment, the HibernateValidationProviderResolver will be automatically used. 47.5. 
Example Assume we have a Java bean with the following annotations Car.java public class Car { @NotNull private String manufacturer; @NotNull @Size(min = 5, max = 14, groups = OptionalChecks.class) private String licensePlate; // getter and setter } and an interface definition for our custom validation group OptionalChecks.java public interface OptionalChecks { } With the following Camel route, only the @NotNull constraints on the attributes manufacturer and licensePlate will be validated (Camel uses the default group javax.validation.groups.Default ). from("direct:start") .to("bean-validator://x") .to("mock:end") If you want to check the constraints from the group OptionalChecks , you have to define the route like this from("direct:start") .to("bean-validator://x?group=OptionalChecks") .to("mock:end") If you want to check the constraints from both groups, you have to define a new interface first AllChecks.java @GroupSequence({Default.class, OptionalChecks.class}) public interface AllChecks { } and then your route definition should look like this from("direct:start") .to("bean-validator://x?group=AllChecks") .to("mock:end") If you have to provide your own message interpolator, traversable resolver, and constraint validator factory, you have to write a route like this <bean id="myMessageInterpolator" class="my.ConstraintValidatorFactory" /> <bean id="myTraversableResolver" class="my.TraversableResolver" /> <bean id="myConstraintValidatorFactory" class="my.ConstraintValidatorFactory" /> from("direct:start") .to("bean-validator://x?group=AllChecks&messageInterpolator=#myMessageInterpolator &traversableResolver=#myTraversableResolver&constraintValidatorFactory=#myConstraintValidatorFactory") .to("mock:end") It's also possible to describe your constraints as XML and not as Java annotations.
In this case, you have to provide the file META-INF/validation.xml which could look like this validation.xml <?xml version="1.0" encoding="UTF-8"?> <validation-config xmlns="http://jboss.org/xml/ns/javax/validation/configuration" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/configuration"> <default-provider>org.hibernate.validator.HibernateValidator</default-provider> <message-interpolator>org.hibernate.validator.engine.ResourceBundleMessageInterpolator</message-interpolator> <traversable-resolver>org.hibernate.validator.engine.resolver.DefaultTraversableResolver</traversable-resolver> <constraint-validator-factory>org.hibernate.validator.engine.ConstraintValidatorFactoryImpl</constraint-validator-factory> <constraint-mapping>/constraints-car.xml</constraint-mapping> </validation-config> and the constraints-car.xml file constraints-car.xml <?xml version="1.0" encoding="UTF-8"?> <constraint-mappings xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://jboss.org/xml/ns/javax/validation/mapping validation-mapping-1.0.xsd" xmlns="http://jboss.org/xml/ns/javax/validation/mapping"> <default-package>org.apache.camel.component.bean.validator</default-package> <bean class="CarWithoutAnnotations" ignore-annotations="true"> <field name="manufacturer"> <constraint annotation="javax.validation.constraints.NotNull" /> </field> <field name="licensePlate"> <constraint annotation="javax.validation.constraints.NotNull" /> <constraint annotation="javax.validation.constraints.Size"> <groups> <value>org.apache.camel.component.bean.validator.OptionalChecks</value> </groups> <element name="min">5</element> <element name="max">14</element> </constraint> </field> </bean> </constraint-mappings> Here is the XML syntax for the example route definition where OrderedChecks can be found at https://github.com/apache/camel/blob/master/components/camel-bean-validator/src/test/java/org/apache/camel/component/bean/validator/OrderedChecks.java Note that the body should include an instance of a class to validate. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <camelContext id="camel" xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <to uri="bean-validator://x?group=org.apache.camel.component.bean.validator.OrderedChecks"/> </route> </camelContext> </beans> 47.6. See Also Configuring Camel Component Endpoint Getting Started
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-bean-validator</artifactId> <version>x.y.z</version> <!-- use the same version as your Camel core version --> </dependency>",
"bean-validator:label[?options]",
"bean-validator://label[?options]",
"bean-validator:label",
"from(\"direct:test\"). to(\"bean-validator://ValidationProviderResolverTest?validationProviderResolver=#myValidationProviderResolver\"); <bean id=\"myValidationProviderResolver\" class=\"org.apache.camel.component.bean.validator.HibernateValidationProviderResolver\"/>",
"public class Car { @NotNull private String manufacturer; @NotNull @Size(min = 5, max = 14, groups = OptionalChecks.class) private String licensePlate; // getter and setter }",
"public interface OptionalChecks { }",
"from(\"direct:start\") .to(\"bean-validator://x\") .to(\"mock:end\")",
"from(\"direct:start\") .to(\"bean-validator://x?group=OptionalChecks\") .to(\"mock:end\")",
"@GroupSequence({Default.class, OptionalChecks.class}) public interface AllChecks { }",
"from(\"direct:start\") .to(\"bean-validator://x?group=AllChecks\") .to(\"mock:end\")",
"<bean id=\"myMessageInterpolator\" class=\"my.ConstraintValidatorFactory\" /> <bean id=\"myTraversableResolver\" class=\"my.TraversableResolver\" /> <bean id=\"myConstraintValidatorFactory\" class=\"my.ConstraintValidatorFactory\" /> from(\"direct:start\") .to(\"bean-validator://x?group=AllChecks&messageInterpolator=#myMessageInterpolator &traversableResolver=#myTraversableResolver&constraintValidatorFactory=#myConstraintValidatorFactory\") .to(\"mock:end\")",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <validation-config xmlns=\"http://jboss.org/xml/ns/javax/validation/configuration\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://jboss.org/xml/ns/javax/validation/configuration\"> <default-provider>org.hibernate.validator.HibernateValidator</default-provider> <message-interpolator>org.hibernate.validator.engine.ResourceBundleMessageInterpolator</message-interpolator> <traversable-resolver>org.hibernate.validator.engine.resolver.DefaultTraversableResolver</traversable-resolver> <constraint-validator-factory>org.hibernate.validator.engine.ConstraintValidatorFactoryImpl</constraint-validator-factory> <constraint-mapping>/constraints-car.xml</constraint-mapping> </validation-config>",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <constraint-mappings xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://jboss.org/xml/ns/javax/validation/mapping validation-mapping-1.0.xsd\" xmlns=\"http://jboss.org/xml/ns/javax/validation/mapping\"> <default-package>org.apache.camel.component.bean.validator</default-package> <bean class=\"CarWithoutAnnotations\" ignore-annotations=\"true\"> <field name=\"manufacturer\"> <constraint annotation=\"javax.validation.constraints.NotNull\" /> </field> <field name=\"licensePlate\"> <constraint annotation=\"javax.validation.constraints.NotNull\" /> <constraint annotation=\"javax.validation.constraints.Size\"> <groups> <value>org.apache.camel.component.bean.validator.OptionalChecks</value> </groups> <element name=\"min\">5</element> <element name=\"max\">14</element> </constraint> </field> </bean> </constraint-mappings>",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext id=\"camel\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <to uri=\"bean-validator://x?group=org.apache.camel.component.bean.validator.OrderedChecks\"/> </route> </camelContext> </beans>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/bean-validator-component |
18.12.11.2. Limiting Number of Connections | 18.12.11.2. Limiting Number of Connections To limit the number of connections a guest virtual machine may establish, a rule must be provided that sets a limit on the number of connections for a given type of traffic. For example, a VM may be allowed to ping only one other IP address at a time and to have only one active incoming ssh connection at a time. Example 18.10. XML sample file that sets limits to connections The following XML fragment can be used to limit connections Note Limitation rules must be listed in the XML prior to the rules for accepting traffic. According to the XML file in Example 18.10, "XML sample file that sets limits to connections" , an additional rule that allows DNS traffic (destination port 53) to leave the guest virtual machine has been added to avoid ssh sessions not getting established for reasons related to DNS lookup failures by the ssh daemon. Leaving this rule out may result in the ssh client hanging unexpectedly as it tries to connect. Additional caution should be used with regard to handling timeouts related to connection tracking. An ICMP ping that the user may have terminated inside the guest virtual machine may have a long timeout in the host physical machine's connection tracking system and will therefore not allow another ICMP ping to go through. The best solution is to tune the timeout on the host physical machine with the following command: # echo 3 > /proc/sys/net/netfilter/nf_conntrack_icmp_timeout . This command sets the ICMP connection tracking timeout to 3 seconds. The effect of this is that once one ping is terminated, another one can start after 3 seconds. If for any reason the guest virtual machine has not properly closed its TCP connection, the connection will be held open for a longer period of time, especially if the TCP timeout value is set to a large value on the host physical machine. In addition, any idle connection may result in a timeout in the connection tracking system, which can be re-activated once packets are exchanged. However, if the limit is set too low, newly initiated connections may force an idle connection into TCP backoff. Therefore, the limit of connections should be set rather high so that fluctuations in new TCP connections do not cause odd traffic behavior in relation to idle connections. | [
"[...] <rule action='drop' direction='in' priority='400'> <tcp connlimit-above='1'/> </rule> <rule action='accept' direction='in' priority='500'> <tcp dstportstart='22'/> </rule> <rule action='drop' direction='out' priority='400'> <icmp connlimit-above='1'/> </rule> <rule action='accept' direction='out' priority='500'> <icmp/> </rule> <rule action='accept' direction='out' priority='500'> <udp dstportstart='53'/> </rule> <rule action='drop' direction='inout' priority='1000'> <all/> </rule> [...]"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sub-sect-lim-numb-conns |
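The echo command shown above changes the ICMP connection tracking timeout only until the next reboot. A sketch for making it persistent, assuming the nf_conntrack module is loaded at boot: add the line net.netfilter.nf_conntrack_icmp_timeout = 3 to /etc/sysctl.conf (or to a file under /etc/sysctl.d/) and apply it with:
sysctl -p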
Chapter 116. IdM integration with Red Hat products | Chapter 116. IdM integration with Red Hat products Find documentation for other Red Hat products that integrate with IdM. You can configure these products to allow your IdM users to access their services. Ansible Automation Platform OpenShift Container Platform Red Hat OpenStack Platform Red Hat Satellite Red Hat Single Sign-On Red Hat Virtualization | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/ref_idm-integration-with-other-red-hat-products_configuring-and-managing-idm |
1.3. Configuring the iptables Firewall to Allow Cluster Components | 1.3. Configuring the iptables Firewall to Allow Cluster Components Note The ideal firewall configuration for cluster components depends on the local environment, where you may need to take into account such considerations as whether the nodes have multiple network interfaces or whether off-host firewalling is present. The example here, which opens the ports that are generally required by a Pacemaker cluster, should be modified to suit local conditions. Table 1.1, "Ports to Enable for High Availability Add-On" shows the ports to enable for the Red Hat High Availability Add-On and provides an explanation for what the port is used for. You can enable all of these ports by means of the firewalld daemon by executing the following commands. Table 1.1. Ports to Enable for High Availability Add-On Port When Required TCP 2224 Required on all nodes (needed by the pcsd Web UI and required for node-to-node communication) It is crucial to open port 2224 in such a way that pcs from any node can talk to all nodes in the cluster, including itself. When using the Booth cluster ticket manager or a quorum device you must open port 2224 on all related hosts, such as Booth arbiters or the quorum device host. TCP 3121 Required on all nodes if the cluster has any Pacemaker Remote nodes Pacemaker's crmd daemon on the full cluster nodes will contact the pacemaker_remoted daemon on Pacemaker Remote nodes at port 3121. If a separate interface is used for cluster communication, the port only needs to be open on that interface. At a minimum, the port should open on Pacemaker Remote nodes to full cluster nodes. Because users may convert a host between a full node and a remote node, or run a remote node inside a container using the host's network, it can be useful to open the port to all nodes. It is not necessary to open the port to any hosts other than nodes. TCP 5403 Required on the quorum device host when using a quorum device with corosync-qnetd . The default value can be changed with the -p option of the corosync-qnetd command. UDP 5404 Required on corosync nodes if corosync is configured for multicast UDP UDP 5405 Required on all corosync nodes (needed by corosync ) TCP 21064 Required on all nodes if the cluster contains any resources requiring DLM (such as clvm or GFS2 ) TCP 9929, UDP 9929 Required to be open on all cluster nodes and booth arbitrator nodes to connections from any of those same nodes when the Booth ticket manager is used to establish a multi-site cluster. | [
"firewall-cmd --permanent --add-service=high-availability firewall-cmd --add-service=high-availability"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-firewalls-haar |
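If the predefined high-availability firewalld service cannot be used in a particular environment, the individual ports from Table 1.1 can be opened explicitly instead. The following is a minimal sketch under that assumption and should be adapted to the local conditions described above.
firewall-cmd --permanent --add-port=2224/tcp    # pcsd and node-to-node communication
firewall-cmd --permanent --add-port=3121/tcp    # Pacemaker Remote nodes
firewall-cmd --permanent --add-port=5403/tcp    # corosync-qnetd quorum device host
firewall-cmd --permanent --add-port=5404/udp    # corosync multicast
firewall-cmd --permanent --add-port=5405/udp    # corosync
firewall-cmd --permanent --add-port=21064/tcp   # DLM-based resources such as clvm or GFS2
firewall-cmd --permanent --add-port=9929/tcp --add-port=9929/udp   # Booth ticket manager
firewall-cmd --reload                           # apply the permanent rules to the running firewall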
Appendix G. Object Storage Daemon (OSD) configuration options | Appendix G. Object Storage Daemon (OSD) configuration options The following are Ceph Object Storage Daemon (OSD) configuration options that can be set during deployment. You can set these configuration options with the ceph config set osd CONFIGURATION_OPTION VALUE command. osd_uuid Description The universally unique identifier (UUID) for the Ceph OSD. Type UUID Default The UUID. Note The osd uuid applies to a single Ceph OSD. The fsid applies to the entire cluster. osd_data Description The path to the OSD's data. You must create the directory when deploying Ceph. Mount a drive for OSD data at this mount point. Type String Default /var/lib/ceph/osd/USDcluster-USDid osd_max_write_size Description The maximum size of a write in megabytes. Type 32-bit Integer Default 90 osd_client_message_size_cap Description The largest client data message allowed in memory. Type 64-bit Integer Unsigned Default 500MB default. 500*1024L*1024L osd_class_dir Description The class path for RADOS class plug-ins. Type String Default USDlibdir/rados-classes osd_max_scrubs Description The maximum number of simultaneous scrub operations for a Ceph OSD. Type 32-bit Int Default 1 osd_scrub_thread_timeout Description The maximum time in seconds before timing out a scrub thread. Type 32-bit Integer Default 60 osd_scrub_finalize_thread_timeout Description The maximum time in seconds before timing out a scrub finalize thread. Type 32-bit Integer Default 60*10 osd_scrub_begin_hour Description This restricts scrubbing to this hour of the day or later. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing the entire day. Along with osd_scrub_end_hour , they define a time window, in which the scrubs can happen. But a scrub is performed no matter whether the time window allows or not, as long as the placement group's scrub interval exceeds osd_scrub_max_interval . Type Integer Default 0 Allowed range [0,23] osd_scrub_end_hour Description This restricts scrubbing to the hour earlier than this. Use osd_scrub_begin_hour = 0 and osd_scrub_end_hour = 0 to allow scrubbing for the entire day. Along with osd_scrub_begin_hour , they define a time window, in which the scrubs can happen. But a scrub is performed no matter whether the time window allows or not, as long as the placement group's scrub interval exceeds osd_scrub_max_interval . Type Integer Default 0 Allowed range [0,23] osd_scrub_load_threshold Description The maximum load. Ceph will not scrub when the system load (as defined by the getloadavg() function) is higher than this number. Default is 0.5 . Type Float Default 0.5 osd_scrub_min_interval Description The minimum interval in seconds for scrubbing the Ceph OSD when the Red Hat Ceph Storage cluster load is low. Type Float Default Once per day. 60*60*24 osd_scrub_max_interval Description The maximum interval in seconds for scrubbing the Ceph OSD irrespective of cluster load. Type Float Default Once per week. 7*60*60*24 osd_scrub_interval_randomize_ratio Description Takes the ratio and randomizes the scheduled scrub between osd scrub min interval and osd scrub max interval . Type Float Default 0.5 . mon_warn_not_scrubbed Description Number of seconds after osd_scrub_interval to warn about any PGs that were not scrubbed. Type Integer Default 0 (no warning). osd_scrub_chunk_min Description The object store is partitioned into chunks which end on hash boundaries. 
For chunky scrubs, Ceph scrubs objects one chunk at a time with writes blocked for that chunk. The osd scrub chunk min setting represents the minimum number of chunks to scrub. Type 32-bit Integer Default 5 osd_scrub_chunk_max Description The maximum number of chunks to scrub. Type 32-bit Integer Default 25 osd_scrub_sleep Description The time to sleep between deep scrub operations. Type Float Default 0 (or off). osd_scrub_during_recovery Description Allows scrubbing during recovery. Type Bool Default false osd_scrub_invalid_stats Description Forces extra scrub to fix stats marked as invalid. Type Bool Default true osd_scrub_priority Description Controls queue priority of scrub operations versus client I/O. Type Unsigned 32-bit Integer Default 5 osd_requested_scrub_priority Description The priority set for user requested scrub on the work queue. If this value were to be smaller than osd_client_op_priority , it can be boosted to the value of osd_client_op_priority when scrub is blocking client operations. Type Unsigned 32-bit Integer Default 120 osd_scrub_cost Description Cost of scrub operations in megabytes for queue scheduling purposes. Type Unsigned 32-bit Integer Default 52428800 osd_deep_scrub_interval Description The interval for deep scrubbing, that is fully reading all data. The osd scrub load threshold parameter does not affect this setting. Type Float Default Once per week. 60*60*24*7 osd_deep_scrub_stride Description Read size when doing a deep scrub. Type 32-bit Integer Default 512 KB. 524288 mon_warn_not_deep_scrubbed Description Number of seconds after osd_deep_scrub_interval to warn about any PGs that were not scrubbed. Type Integer Default 0 (no warning) osd_deep_scrub_randomize_ratio Description The rate at which scrubs will randomly become deep scrubs (even before osd_deep_scrub_interval has passed). Type Float Default 0.15 or 15% osd_deep_scrub_update_digest_min_age Description How many seconds old objects must be before scrub updates the whole-object digest. Type Integer Default 7200 (120 hours) osd_deep_scrub_large_omap_object_key_threshold Description Warning when you encounter an object with more OMAP keys than this. Type Integer Default 200000 osd_deep_scrub_large_omap_object_value_sum_threshold Description Warning when you encounter an object with more OMAP key bytes than this. Type Integer Default 1 G osd_op_num_shards Description The number of shards for client operations. Type 32-bit Integer Default 0 osd_op_num_threads_per_shard Description The number of threads per shard for client operations. Type 32-bit Integer Default 0 osd_op_num_shards_hdd Description The number of shards for HDD operations. Type 32-bit Integer Default 5 osd_op_num_threads_per_shard_hdd Description The number of threads per shard for HDD operations. Type 32-bit Integer Default 1 osd_op_num_shards_ssd Description The number of shards for SSD operations. Type 32-bit Integer Default 8 osd_op_num_threads_per_shard_ssd Description The number of threads per shard for SSD operations. Type 32-bit Integer Default 2 osd_client_op_priority Description The priority set for client operations. It is relative to osd recovery op priority . Type 32-bit Integer Default 63 Valid Range 1-63 osd_recovery_op_priority Description The priority set for recovery operations. It is relative to osd client op priority . Type 32-bit Integer Default 3 Valid Range 1-63 osd_op_thread_timeout Description The Ceph OSD operation thread timeout in seconds. 
Type 32-bit Integer Default 15 osd_op_complaint_time Description An operation becomes complaint worthy after the specified number of seconds have elapsed. Type Float Default 30 osd_disk_threads Description The number of disk threads, which are used to perform background disk intensive OSD operations such as scrubbing and snap trimming. Type 32-bit Integer Default 1 osd_op_history_size Description The maximum number of completed operations to track. Type 32-bit Unsigned Integer Default 20 osd_op_history_duration Description The oldest completed operation to track. Type 32-bit Unsigned Integer Default 600 osd_op_log_threshold Description How many operations logs to display at once. Type 32-bit Integer Default 5 osd_op_timeout Description The time in seconds after which running OSD operations time out. Type Integer Default 0 Important Do not set the osd op timeout option unless your clients can handle the consequences. For example, setting this parameter on clients running in virtual machines can lead to data corruption because the virtual machines interpret this timeout as a hardware failure. osd_max_backfills Description The maximum number of backfill operations allowed to or from a single OSD. Type 64-bit Unsigned Integer Default 1 osd_backfill_scan_min Description The minimum number of objects per backfill scan. Type 32-bit Integer Default 64 osd_backfill_scan_max Description The maximum number of objects per backfill scan. Type 32-bit Integer Default 512 osd_backfill_full_ratio Description Refuse to accept backfill requests when the Ceph OSD's full ratio is above this value. Type Float Default 0.85 osd_backfill_retry_interval Description The number of seconds to wait before retrying backfill requests. Type Double Default 30.000000 osd_map_dedup Description Enable removing duplicates in the OSD map. Type Boolean Default true osd_map_cache_size Description The size of the OSD map cache in megabytes. Type 32-bit Integer Default 50 osd_map_cache_bl_size Description The size of the in-memory OSD map cache in OSD daemons. Type 32-bit Integer Default 50 osd_map_cache_bl_inc_size Description The size of the in-memory OSD map cache incrementals in OSD daemons. Type 32-bit Integer Default 100 osd_map_message_max Description The maximum map entries allowed per MOSDMap message. Type 32-bit Integer Default 40 osd_snap_trim_thread_timeout Description The maximum time in seconds before timing out a snap trim thread. Type 32-bit Integer Default 60*60*1 osd_pg_max_concurrent_snap_trims Description The max number of parallel snap trims/PG. This controls how many objects per PG to trim at once. Type 32-bit Integer Default 2 osd_snap_trim_sleep Description Insert a sleep between every trim operation a PG issues. Type 32-bit Integer Default 0 osd_max_trimming_pgs Description The max number of trimming PGs Type 32-bit Integer Default 2 osd_backlog_thread_timeout Description The maximum time in seconds before timing out a backlog thread. Type 32-bit Integer Default 60*60*1 osd_default_notify_timeout Description The OSD default notification timeout (in seconds). Type 32-bit Integer Unsigned Default 30 osd_check_for_log_corruption Description Check log files for corruption. Can be computationally expensive. Type Boolean Default false osd_remove_thread_timeout Description The maximum time in seconds before timing out a remove OSD thread. Type 32-bit Integer Default 60*60 osd_command_thread_timeout Description The maximum time in seconds before timing out a command thread. 
Type 32-bit Integer Default 10*60 osd_command_max_records Description Limits the number of lost objects to return. Type 32-bit Integer Default 256 osd_auto_upgrade_tmap Description Uses tmap for omap on old objects. Type Boolean Default true osd_tmapput_sets_users_tmap Description Uses tmap for debugging only. Type Boolean Default false osd_preserve_trimmed_log Description Preserves trimmed log files, but uses more disk space. Type Boolean Default false osd_recovery_delay_start Description After peering completes, Ceph delays for the specified number of seconds before starting to recover objects. Type Float Default 0 osd_recovery_max_active Description The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but the requests place an increased load on the cluster. Type 32-bit Integer Default 0 osd_recovery_max_chunk Description The maximum size of a recovered chunk of data to push. Type 64-bit Integer Unsigned Default 8388608 osd_recovery_threads Description The number of threads for recovering data. Type 32-bit Integer Default 1 osd_recovery_thread_timeout Description The maximum time in seconds before timing out a recovery thread. Type 32-bit Integer Default 30 osd_recover_clone_overlap Description Preserves clone overlap during recovery. Should always be set to true . Type Boolean Default true rados_osd_op_timeout Description Number of seconds that RADOS waits for a response from the OSD before returning an error from a RADOS operation. A value of 0 means no limit. Type Double Default 0 | [
"IMPORTANT: Red Hat does not recommend changing the default."
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/configuration_guide/osd-object-storage-daemon-configuration-options_conf |
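As a worked example of the ceph config set osd CONFIGURATION_OPTION VALUE form mentioned at the start of this appendix, the following sketch adjusts a few of the scrub-related options and reads the values back; the chosen values are illustrative only, not recommendations.
ceph config set osd osd_max_scrubs 2            # allow two simultaneous scrubs per OSD
ceph config set osd osd_scrub_begin_hour 22     # restrict scrubbing to a nightly window
ceph config set osd osd_scrub_end_hour 6
ceph config get osd osd_max_scrubs              # verify the values currently in effect
ceph config get osd osd_scrub_begin_hour
ceph config get osd osd_scrub_end_hour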
Chapter 4. Using JBoss Data Grid with Supported Containers | Chapter 4. Using JBoss Data Grid with Supported Containers Red Hat JBoss Data Grid can be used in the following runtimes: Java SE, started by your application. As a standalone JBoss Data Grid server. Bundled as a library in your application, deployed to an application server, and started by your application. For example, JBoss Data Grid can be used with Tomcat or WebLogic. Inside an OSGi runtime environment, in this case, Apache Karaf. For a list of containers supported with Red Hat JBoss Data Grid, see the Release Notes or the support information here: https://access.redhat.com/knowledge/articles/115883 4.1. Deploy JBoss Data Grid in JBoss EAP (Library Mode) Red Hat JBoss Data Grid provides a set of modules for Red Hat JBoss Enterprise Application Platform 6.x. Using these modules means that JBoss Data Grid libraries do not need to be included in the user deployment. To avoid conflicts with the Infinispan modules that are already included with JBoss EAP, the JBoss Data Grid modules are placed within a separate slot and identified by the JBoss Data Grid version ( major . minor ). Note The JBoss Data Grid modules for JBoss EAP are not included in the JBoss EAP distribution. Instead, navigate to the Customer Support Portal at http://access.redhat.com to download these modules from the Red Hat JBoss Data Grid downloads page. To deploy JBoss Data Grid in JBoss EAP, add dependencies from the JBoss Data Grid module to the application's classpath (the JBoss EAP deployer) in one of the following ways: Add a dependency to the jboss-deployment-structure.xml file. Add a dependency to the MANIFEST.MF file. Generate the MANIFEST.MF file via Maven. Add a Dependency to the jboss-deployment-structure.xml File Add the following configuration to the jboss-deployment-structure.xml file: Note For details about the jboss-deployment-structure.xml file, see the Red Hat JBoss Enterprise Application Platform documentation. Add a Dependency to the MANIFEST.MF File Add a dependency to the MANIFEST.MF file as follows: Example 4.1. Example MANIFEST.MF File The first line remains the same as the example. Depending on the dependency required, add one of the following to the second line of the file: JBoss Data Grid Core: Embedded Query: JDBC Cache Store: JPA Cache Store: LevelDB Cache Store: CDI: Generate the MANIFEST.MF file via Maven The MANIFEST.MF file is generated during the build (specifically during the JAR or WAR process). As an alternative to adding a dependency to the MANIFEST.MF file, configure the dependency directly in Maven by adding the following to the pom.xml file: | [
"<jboss-deployment-structure xmlns=\"urn:jboss:deployment-structure:1.2\"> <deployment> <dependencies> <module name=\"org.infinispan\" slot=\"jdg-6.6\" services=\"export\"/> </dependencies> </deployment> </jboss-deployment-structure>",
"Manifest-Version: 1.0 Dependencies: org.infinispan:jdg-6.6 services",
"Dependencies: org.infinispan:jdg-6.6 services",
"Dependencies: org.infinispan:jdg-6.6 services, org.infinispan.query:jdg-6.6 services, org.infinispan.query.dsl:jdg-6.6 services",
"Dependencies: org.infinispan:jdg-6.6 services, org.infinispan.persistence.jdbc:jdg-6.6 services",
"Dependencies: org.infinispan:jdg-6.6 services, org.infinispan.persistence.jpa:jdg-6.6 services",
"Dependencies: org.infinispan:jdg-6.6 services, org.infinispan.persistence.leveldb:jdg-6.6 services",
"Dependencies: org.infinispan:jdg-6.6 services, org.infinispan.cdi:jdg-6.6 meta-inf",
"<plugin> <artifactId>maven-war-plugin</artifactId> <version>2.4</version> <configuration> <failOnMissingWebXml>false</failOnMissingWebXml> <archive> <manifestEntries> <Dependencies>org.infinispan:jdg-6.6 services</Dependencies> </manifestEntries> </archive> </configuration> </plugin>"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/chap-Using_JBoss_Data_Grid_with_Supported_Containers |
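As a quick sanity check after unpacking the JBoss Data Grid modules for JBoss EAP, the following shell sketch lists the module slots and descriptors that should be present; the EAP_HOME path is an assumed installation directory, not a value from this chapter.
EAP_HOME=/opt/jboss-eap-6.4                                   # hypothetical EAP installation path
find "$EAP_HOME/modules" -type d -name 'jdg-6.6' | sort       # every module slot added for JDG 6.6
find "$EAP_HOME/modules" -path '*org/infinispan*jdg-6.6*' -name module.xml   # core Infinispan descriptor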
7.176. perl-GSSAPI | 7.176. perl-GSSAPI 7.176.1. RHBA-2012:1340 - perl-GSSAPI bug fix update Updated perl-GSSAPI packages that fix one bug are now available for Red Hat Enterprise Linux 6. The perl-GSSAPI packages provide a Perl extension for GSSAPIv2 access. Bug Fix BZ# 657274 Prior to this update, the perl-GSSAPI specification file referenced a krb5-devel file that had been removed. As a consequence, the perl-GSSAPI package could not be rebuilt. This update modifies the specification file to use the current krb5-devel files. All users of perl-GSSAPI are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/perl-gssapi
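On a subscribed Red Hat Enterprise Linux 6 system, applying the updated packages is a single yum transaction; a minimal sketch follows.
yum update perl-GSSAPI        # pull in the updated packages from this erratum
rpm -q perl-GSSAPI            # confirm the new package version is installed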
Chapter 6. Automatically building Dockerfiles with Build workers | Chapter 6. Automatically building Dockerfiles with Build workers Red Hat Quay supports building Dockerfiles using a set of worker nodes on OpenShift Container Platform or Kubernetes. Build triggers, such as GitHub webhooks, can be configured to automatically build new versions of your repositories when new code is committed. This document shows you how to enable Builds with your Red Hat Quay installation, and set up one or more OpenShift Container Platform or Kubernetes clusters to accept Builds from Red Hat Quay. 6.1. Setting up Red Hat Quay Builders with OpenShift Container Platform You must pre-configure Red Hat Quay Builders prior to using them with OpenShift Container Platform. 6.1.1. Configuring the OpenShift Container Platform TLS component The tls component allows you to control TLS configuration. Note Red Hat Quay does not support Builders when the TLS component is managed by the Red Hat Quay Operator. If you set tls to unmanaged , you supply your own ssl.cert and ssl.key files. In this instance, if you want your cluster to support Builders, you must add both the Quay route and the Builder route name to the SAN list in the certificate; alternatively you can use a wildcard. To add the builder route, use the following format: [quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name] 6.1.2. Preparing OpenShift Container Platform for Red Hat Quay Builders Prepare Red Hat Quay Builders for OpenShift Container Platform by using the following procedure. Prerequisites You have configured the OpenShift Container Platform TLS component. Procedure Enter the following command to create a project where Builds will be run, for example, builder : $ oc new-project builder Create a new ServiceAccount in the builder namespace by entering the following command: $ oc create sa -n builder quay-builder Enter the following command to grant a user the edit role within the builder namespace: $ oc policy add-role-to-user -n builder edit system:serviceaccount:builder:quay-builder Enter the following command to retrieve a token associated with the quay-builder service account in the builder namespace. This token is used to authenticate and interact with the OpenShift Container Platform cluster's API server. $ oc sa get-token -n builder quay-builder Identify the URL for the OpenShift Container Platform cluster's API server. This can be found in the OpenShift Container Platform Web Console. Identify a worker node label to be used when scheduling Build jobs. Because Build pods need to run on bare metal worker nodes, typically these are identified with specific labels. Check with your cluster administrator to determine exactly which node label should be used. Optional. If the cluster is using a self-signed certificate, you must get the Kube API Server's certificate authority (CA) to add to Red Hat Quay's extra certificates. Enter the following command to obtain the name of the secret containing the CA: $ oc get sa openshift-apiserver-sa --namespace=openshift-apiserver -o json | jq '.secrets[] | select(.name | contains("openshift-apiserver-sa-token"))'.name Obtain the ca.crt key value from the secret in the OpenShift Container Platform Web Console. The value begins with "-----BEGIN CERTIFICATE-----" . Import the CA in Red Hat Quay using the Config Tool. Ensure that the name of this file matches K8S_API_TLS_CA .
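Before moving on to the SecurityContextConstraints step, it can be useful to confirm that the quay-builder service account token actually authenticates against the cluster's API server. The sketch below reuses the example API URL from this chapter (api.openshift.somehost.org:6443) as an assumption; substitute your own values.
API_SERVER=https://api.openshift.somehost.org:6443            # assumed API server URL
TOKEN=$(oc sa get-token -n builder quay-builder)
curl -sk -H "Authorization: Bearer $TOKEN" "$API_SERVER/apis/batch/v1/namespaces/builder/jobs" | head
# A successful (possibly empty) job list confirms that the token and the edit role are usable.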
Create the following SecurityContextConstraints resource for the ServiceAccount : apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: quay-builder priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny volumes: - '*' allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - '*' allowedUnsafeSysctls: - '*' defaultAddCapabilities: null fsGroup: type: RunAsAny --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: quay-builder-scc namespace: builder rules: - apiGroups: - security.openshift.io resourceNames: - quay-builder resources: - securitycontextconstraints verbs: - use --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: quay-builder-scc namespace: builder subjects: - kind: ServiceAccount name: quay-builder roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: quay-builder-scc 6.1.3. Configuring Red Hat Quay Builders Use the following procedure to enable Red Hat Quay Builders. Procedure Ensure that your Red Hat Quay config.yaml file has Builds enabled, for example: FEATURE_BUILD_SUPPORT: True Add the following information to your Red Hat Quay config.yaml file, replacing each value with information that is relevant to your specific installation: Note Currently, only the Build feature itself can be enabled by the Red Hat Quay Config Tool. The configuration of the Build Manager and Executors must be done manually in the config.yaml file. BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ ORCHESTRATOR: REDIS_HOST: quay-redis-host REDIS_PASSWORD: quay-redis-password REDIS_SSL: true REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetes BUILDER_NAMESPACE: builder K8S_API_SERVER: api.openshift.somehost.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 5120Mi CONTAINER_CPU_LIMITS: 1000m CONTAINER_MEMORY_REQUEST: 3968Mi CONTAINER_CPU_REQUEST: 500m NODE_SELECTOR_LABEL_KEY: beta.kubernetes.io/instance-type NODE_SELECTOR_LABEL_VALUE: n1-standard-4 CONTAINER_RUNTIME: podman SERVICE_ACCOUNT_NAME: ***** SERVICE_ACCOUNT_TOKEN: ***** QUAY_USERNAME: quay-username QUAY_PASSWORD: quay-password WORKER_IMAGE: <registry>/quay-quay-builder WORKER_TAG: some_tag BUILDER_VM_CONTAINER_IMAGE: <registry>/quay-quay-builder-qemu-rhcos:v3.4.0 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 SSH_AUTHORIZED_KEYS: - ssh-rsa 12345 [email protected] - ssh-rsa 67890 [email protected] For more information about each configuration field, see 6.2. OpenShift Container Platform Routes limitations The following limitations apply when you are using the Red Hat Quay Operator on OpenShift Container Platform with a managed route component: Currently, OpenShift Container Platform Routes are only able to serve traffic to a single port. Additional steps are required to set up Red Hat Quay Builds. Ensure that your kubectl or oc CLI tool is configured to work with the cluster where the Red Hat Quay Operator is installed and that your QuayRegistry exists; the QuayRegistry does not have to be on the same bare metal cluster where Builders run. Ensure that HTTP/2 ingress is enabled on the OpenShift cluster by following these steps . 
The Red Hat Quay Operator creates a Route resource that directs gRPC traffic to the Build manager server running inside of the existing Quay pod, or pods. If you want to use a custom hostname, or a subdomain like <builder-registry.example.com> , ensure that you create a CNAME record with your DNS provider that points to the status.ingress[0].host of the created Route resource. For example: Using the OpenShift Container Platform UI or CLI, update the Secret referenced by spec.configBundleSecret of the QuayRegistry with the Build cluster CA certificate. Name the key extra_ca_cert_build_cluster.cert . Update the config.yaml file entry with the correct values referenced in the Builder configuration that you created when you configured Red Hat Quay Builders, and add the BUILDMAN_HOSTNAME configuration field: BUILDMAN_HOSTNAME: <build-manager-hostname> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 600 ORCHESTRATOR: REDIS_HOST: <quay_redis_host> REDIS_PASSWORD: <quay_redis_password> REDIS_SSL: true REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetes BUILDER_NAMESPACE: builder ... 1 The externally accessible server hostname which the build jobs use to communicate back to the Build manager. Default is the same as SERVER_HOSTNAME . For OpenShift Route , it is either status.ingress[0].host or the CNAME entry if using a custom hostname. BUILDMAN_HOSTNAME must include the port number, for example, somehost:443 for an OpenShift Container Platform Route, as the gRPC client used to communicate with the Build manager does not infer any port if omitted. 6.3. Troubleshooting Builds The Builder instances started by the Build manager are ephemeral. This means that they will either get shut down by Red Hat Quay on timeouts or failure, or garbage collected by the control plane (EC2/K8s). In order to obtain the Build logs, you must do so while the Builds are running. 6.3.1. DEBUG config flag The DEBUG flag can be set to true in order to prevent the Builder instances from getting cleaned up after completion or failure. For example: EXECUTORS: - EXECUTOR: ec2 DEBUG: true ... - EXECUTOR: kubernetes DEBUG: true ... When set to true , the debug feature prevents the Build nodes from shutting down after the quay-builder service is done or fails. It also prevents the Build manager from cleaning up the instances by terminating EC2 instances or deleting Kubernetes jobs. This allows debugging Builder node issues. Debugging should not be enabled in a production cycle. The lifetime service still exists; for example, the instance still shuts down after approximately two hours. When this happens, EC2 instances are terminated, and Kubernetes jobs are completed. Enabling debug also affects the ALLOWED_WORKER_COUNT , because the unterminated instances and jobs still count toward the total number of running workers. As a result, the existing Builder workers must be manually deleted if ALLOWED_WORKER_COUNT is reached to be able to schedule new Builds. 6.3.2. Troubleshooting OpenShift Container Platform and Kubernetes Builds Use the following procedure to troubleshoot OpenShift Container Platform and Kubernetes Builds.
Procedure Create a port forwarding tunnel between your local machine and a pod running on either an OpenShift Container Platform cluster or a Kubernetes cluster by entering the following command: $ oc port-forward <builder_pod> 9999:2222 Establish an SSH connection to the remote host using a specified SSH key and port, for example: $ ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost Obtain the quay-builder service logs by entering the following commands: $ systemctl status quay-builder $ journalctl -f -u quay-builder 6.4. Setting up GitHub builds If your organization plans to have Builds conducted by pushes to GitHub or GitHub Enterprise, continue with Creating an OAuth application in GitHub . | [
"[quayregistry-cr-name]-quay-builder-[ocp-namespace].[ocp-domain-name]",
"oc new-project builder",
"oc create sa -n builder quay-builder",
"oc policy add-role-to-user -n builder edit system:serviceaccount:builder:quay-builder",
"oc sa get-token -n builder quay-builder",
"oc get sa openshift-apiserver-sa --namespace=openshift-apiserver -o json | jq '.secrets[] | select(.name | contains(\"openshift-apiserver-sa-token\"))'.name",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: name: quay-builder priority: null readOnlyRootFilesystem: false requiredDropCapabilities: null runAsUser: type: RunAsAny seLinuxContext: type: RunAsAny seccompProfiles: - '*' supplementalGroups: type: RunAsAny volumes: - '*' allowHostDirVolumePlugin: true allowHostIPC: true allowHostNetwork: true allowHostPID: true allowHostPorts: true allowPrivilegeEscalation: true allowPrivilegedContainer: true allowedCapabilities: - '*' allowedUnsafeSysctls: - '*' defaultAddCapabilities: null fsGroup: type: RunAsAny --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: quay-builder-scc namespace: builder rules: - apiGroups: - security.openshift.io resourceNames: - quay-builder resources: - securitycontextconstraints verbs: - use --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: quay-builder-scc namespace: builder subjects: - kind: ServiceAccount name: quay-builder roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: quay-builder-scc",
"FEATURE_BUILD_SUPPORT: True",
"BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ ORCHESTRATOR: REDIS_HOST: quay-redis-host REDIS_PASSWORD: quay-redis-password REDIS_SSL: true REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetes BUILDER_NAMESPACE: builder K8S_API_SERVER: api.openshift.somehost.org:6443 K8S_API_TLS_CA: /conf/stack/extra_ca_certs/build_cluster.crt VOLUME_SIZE: 8G KUBERNETES_DISTRIBUTION: openshift CONTAINER_MEMORY_LIMITS: 5120Mi CONTAINER_CPU_LIMITS: 1000m CONTAINER_MEMORY_REQUEST: 3968Mi CONTAINER_CPU_REQUEST: 500m NODE_SELECTOR_LABEL_KEY: beta.kubernetes.io/instance-type NODE_SELECTOR_LABEL_VALUE: n1-standard-4 CONTAINER_RUNTIME: podman SERVICE_ACCOUNT_NAME: ***** SERVICE_ACCOUNT_TOKEN: ***** QUAY_USERNAME: quay-username QUAY_PASSWORD: quay-password WORKER_IMAGE: <registry>/quay-quay-builder WORKER_TAG: some_tag BUILDER_VM_CONTAINER_IMAGE: <registry>/quay-quay-builder-qemu-rhcos:v3.4.0 SETUP_TIME: 180 MINIMUM_RETRY_THRESHOLD: 0 SSH_AUTHORIZED_KEYS: - ssh-rsa 12345 [email protected] - ssh-rsa 67890 [email protected]",
"kubectl get -n <namespace> route <quayregistry-name>-quay-builder -o jsonpath={.status.ingress[0].host}",
"BUILDMAN_HOSTNAME: <build-manager-hostname> 1 BUILD_MANAGER: - ephemeral - ALLOWED_WORKER_COUNT: 1 ORCHESTRATOR_PREFIX: buildman/production/ JOB_REGISTRATION_TIMEOUT: 600 ORCHESTRATOR: REDIS_HOST: <quay_redis_host REDIS_PASSWORD: <quay_redis_password> REDIS_SSL: true REDIS_SKIP_KEYSPACE_EVENT_SETUP: false EXECUTORS: - EXECUTOR: kubernetes BUILDER_NAMESPACE: builder",
"EXECUTORS: - EXECUTOR: ec2 DEBUG: true - EXECUTOR: kubernetes DEBUG: true",
"oc port-forward <builder_pod> 9999:2222",
"ssh -i /path/to/ssh/key/set/in/ssh_authorized_keys -p 9999 core@localhost",
"systemctl status quay-builder",
"journalctl -f -u quay-builder"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/use_red_hat_quay/build-support |
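When troubleshooting, it also helps to confirm that build jobs and pods are actually being scheduled in the builder namespace before attempting the port-forward shown above; a brief sketch follows, with <builder_pod> as a placeholder for a real pod name.
oc get jobs -n builder                 # build jobs created by the Build manager
oc get pods -n builder -o wide         # builder pods and the nodes they were scheduled on
oc logs -n builder <builder_pod>       # quay-builder output for a specific build pod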
19.3. The Virtual Hardware Details Window | 19.3. The Virtual Hardware Details Window The virtual hardware details window displays information about the virtual hardware configured for the guest. Virtual hardware resources can be added, removed and modified in this window. To access the virtual hardware details window, click the icon in the toolbar. Figure 19.3. The virtual hardware details icon Clicking the icon displays the virtual hardware details window. Figure 19.4. The virtual hardware details window 19.3.1. Applying Boot Options to Guest Virtual Machines Using virt-manager you can select how the guest virtual machine will act on boot. The boot options will not take effect until the guest virtual machine reboots. You can either power down the virtual machine before making any changes, or you can reboot the machine afterwards. If you do not do either of these options, the changes will happen the time the guest reboots. Procedure 19.1. Configuring boot options From the Virtual Machine Manager Edit menu, select Virtual Machine Details . From the side panel, select Boot Options and then complete any or all of the following optional steps: To indicate that this guest virtual machine should start each time the host physical machine boots, select the Autostart check box. To indicate the order in which guest virtual machine should boot, click the Enable boot menu check box. After this is checked, you can then check the devices you want to boot from and using the arrow keys change the order that the guest virtual machine will use when booting. If you want to boot directly from the Linux kernel, expand the Direct kernel boot menu. Fill in the Kernel path , Initrd path , and the Kernel arguments that you want to use. Click Apply . Figure 19.5. Configuring boot options 19.3.2. Attaching USB Devices to a Guest Virtual Machine Note In order to attach the USB device to the guest virtual machine, you first must attach it to the host physical machine and confirm that the device is working. If the guest is running, you need to shut it down before proceeding. Procedure 19.2. Attaching USB devices using Virt-Manager Open the guest virtual machine's Virtual Machine Details screen. Click Add Hardware In the Add New Virtual Hardware popup, select USB Host Device , select the device you want to attach from the list and Click Finish . Figure 19.6. Add USB Device To use the USB device in the guest virtual machine, start the guest virtual machine. 19.3.3. USB Redirection USB re-direction is best used in cases where there is a host physical machine that is running in a data center. The user connects to his/her guest virtual machine from a local machine or thin client. On this local machine there is a SPICE client. The user can attach any USB device to the thin client and the SPICE client will redirect the device to the host physical machine on the data center so it can be used by the VM that is running on the thin client. Procedure 19.3. Redirecting USB devices Open the guest virtual machine's Virtual Machine Details screen. Click Add Hardware In the Add New Virtual Hardware popup, select USB Redirection . Make sure to select Spice channel from the Type drop-down menu and click Finish . Figure 19.7. Add New Virtual Hardware window Open the Virtual Machine menu and select Redirect USB device . A pop-up window opens with a list of USB devices. Figure 19.8. Select a USB device Select a USB device for redirection by checking its check box and click OK . 
| null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-managing_guests_with_the_virtual_machine_manager_virt_manager-the_virtual_hardware_details_window |
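The boot options described above can also be inspected and changed from the command line with virsh, which is often more convenient on headless hosts. A minimal sketch follows; the guest name guest1 is an assumption.
virsh autostart guest1                     # start the guest automatically when the host boots
virsh autostart --disable guest1           # revert the autostart setting
virsh dumpxml guest1 | grep -A5 '<os>'     # review the currently configured boot devices and order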
A.2. Reserved Keywords | A.2. Reserved Keywords Keyword Usage ADD add set option ALL standard aggregate function , function , query expression body , query term select clause , quantified comparison predicate ALTER alter , alter column options , alter options AND between predicate , boolean term ANY standard aggregate function , quantified comparison predicate ARRAY_AGG ordered aggregate function AS alter , array table , create procedure , option namespace , create table , create trigger , derived column , dynamic data statement , function , loop statement , xml namespace element , object table , select derived column , table subquery , text table , table name , with list element , xml serialize , xml table ASC sort specification ATOMIC compound statement , for each row trigger action BEGIN compound statement , for each row trigger action BETWEEN between predicate BIGDECIMAL data type BIGINT data type BIGINTEGER data type BLOB data type , xml serialize BOOLEAN data type BOTH function BREAK branching statement BY group by clause , order by clause , window specification BYTE data type CALL callable statement , call statement CASE case expression , searched case expression CAST function CHAR function , data type CLOB data type , xml serialize COLUMN alter column options CONSTRAINT create table body CONTINUE branching statement CONVERT function CREATE create procedure , create foreign temp table , create table , create temporary table , create trigger , procedure body definition CROSS cross join DATE data type DAY function DECIMAL data type DECLARE declare statement DEFAULT table element , xml namespace element , object table column , procedure parameter , xml table column DELETE alter , create trigger , delete statement DESC sort specification DISTINCT standard aggregate function , function , query expression body , query term , select clause DOUBLE data type DROP drop option , drop table EACH for each row trigger action ELSE case expression , if statement , searched case expression END case expression , compound statement , for each row trigger action , searched case expression ERROR raise error statement ESCAPE match predicate , text table EXCEPT query expression body EXEC dynamic data statement , call statement EXECUTE dynamic data statement , call statement EXISTS exists predicate FALSE non numeric literal FETCH fetch clause FILTER filter clause FLOAT data type FOR for each row trigger action , function , text aggregate function , xml table column FOREIGN alter options , create procedure , create foreign temp table , create table , foreign key FROM delete statement , from clause , function FULL qualified table FUNCTION create procedure GROUP group by clause HAVING having clause HOUR function IF if statement IMMEDIATE dynamic data statement IN procedure parameter , in predicate INNER qualified table INOUT procedure parameter INSERT alter , create trigger , function , insert statement INTEGER data type INTERSECT query term INTO dynamic data statement , insert statement , into clause IS is null predicate JOIN cross join , qualified table LANGUAGE object table LATERAL table subquery LEADING function LEAVE branching statement LEFT function , qualified table LIKE match predicate LIKE_REGEX like regex predicate LIMIT limit clause LOCAL create temporary table LONG data type LOOP loop statement MAKEDEP option clause , table primary MAKENOTDEP option clause , table primary MERGE insert statement MINUTE function MONTH function NO xml namespace element , text table column , text table NOCACHE option 
clause NOT between predicate , compound statement , table element , is null predicate , match predicate , boolean factor , procedure parameter , procedure result column , like regex predicate , in predicate , temporary table element NULL table element , is null predicate , non numeric literal , procedure parameter , procedure result column , temporary table element , xml query OBJECT data type OF alter , create trigger OFFSET limit clause ON alter , create foreign temp table , create trigger , loop statement , qualified table , xml query ONLY fetch clause OPTION option clause OPTIONS alter options list , options clause OR boolean value expression ORDER order by clause OUT procedure parameter OUTER qualified table OVER window specification PARAMETER alter column options PARTITION window specification PRIMARY table element , create temporary table , primary key PROCEDURE alter , alter options , create procedure , procedure body definition REAL data type REFERENCES foreign key RETURN assignment statement , return statement , data statement RETURNS create procedure RIGHT function , qualified table ROW fetch clause , for each row trigger action , limit clause , text table ROWS fetch clause , limit clause SECOND function SELECT select clause SET add set option , option namespace , update statement SHORT data type SIMILAR match predicate SMALLINT data type SOME standard aggregate function , quantified comparison predicate SQLEXCEPTION sql exception SQLSTATE sql exception SQLWARNING raise statement STRING dynamic data statement , data type , xml serialize TABLE alter options , create procedure , create foreign temp table , create table , create temporary table , drop table , query primary , table subquery TEMPORARY create foreign temp table , create temporary table THEN case expression , searched case expression TIME data type TIMESTAMP data type TINYINT data type TO match predicate TRAILING function TRANSLATE function TRIGGER alter , create trigger TRUE non numeric literal UNION cross join , query expression body UNIQUE other constraints , table element UNKNOWN non numeric literal UPDATE alter , create trigger , dynamic data statement , update statement USER function USING dynamic data statement VALUES insert statement VARBINARY data type , xml serialize VARCHAR data type , xml serialize VIRTUAL alter options , create procedure , create table , procedure body definition WHEN case expression , searched case expression WHERE filter clause , where clause WHILE while statement WITH assignment statement , query expression , data statement WITHOUT assignment statement , data statement XML data type XMLAGG ordered aggregate function XMLATTRIBUTES xml attributes XMLCOMMENT function XMLCONCAT function XMLELEMENT xml element XMLFOREST xml forest XMLNAMESPACES xml namespaces XMLPARSE xml parse XMLPI function XMLQUERY xml query XMLSERIALIZE xml serialize XMLTABLE xml table YEAR function | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/Reserved_Keywords |
Chapter 2. Maintenance support | Chapter 2. Maintenance support 2.1. Maintenance support for JBoss EAP XP When a new JBoss EAP XP major version is released, maintenance support for the major version begins. Maintenance support usually lasts for 12 weeks. If you use a JBoss EAP XP major version that is outside its maintenance support period, you might experience issues because security patches and bug fixes are no longer provided for it. To avoid such issues, upgrade to the newest JBoss EAP XP major version release that is compatible with your JBoss EAP version. Additional resources For information about maintenance support, see the Red Hat JBoss Enterprise Application Platform expansion pack (JBoss EAP XP or EAP XP) Life Cycle and Support Policies located on the Red Hat Customer Portal. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/red_hat_jboss_eap_xp_3.0.0_release_notes/maintenance_support
Chapter 2. Using Red Hat Universal Base Images (standard, minimal, and runtimes) | Chapter 2. Using Red Hat Universal Base Images (standard, minimal, and runtimes) Red Hat Enterprise Linux (RHEL) base images are meant to form the foundation for the container images you build. As of April 2019, new Universal Base Image (UBI) versions of RHEL standard, minimal, init, and Red Hat Software Collections images are available that add to those images the ability to be freely redistributed. Characteristics of RHEL base images include: Supported : Supported by Red Hat for use with your containerized applications. Contains the same secured, tested, and certified software packages you have in Red Hat Enterprise Linux. Cataloged : Listed in the Red Hat Container Catalog , where you can find descriptions, technical details, and a health index for each image. Updated : Offered with a well-defined update schedule, so you know you are getting the latest software (see Red Hat Container Image Updates ). Tracked : Tracked by errata, to help you understand the changes that go into each update. Reusable : Only need to be downloaded and cached in your production environment once, where each base image can be reused by all containers that include it as their foundation. Red Hat Universal Base Images (UBI) provide the same quality RHEL software for building container images as their predecessors ( rhel6 , rhel7 , rhel-init , and rhel-minimal base images), but offer more freedom in how they are used and distributed. 2.1. What are Red Hat base images? Red Hat provides multiple base images that you can use as a starting point for your own images. These images are available through the Red Hat Registry (registry.access.redhat.com and registry.redhat.io) and described in the Red Hat Container Catalog . For RHEL 7, there are two different versions of each standard, minimal and init base image available. Red Hat also provides a set of Red Hat Software Collections images that you can build on when you are creating containers for applications that require specific runtimes. These include python, php, nodejs, and others. Although Red Hat does not offer tools for running containers on RHEL 6 systems, it does offer RHEL 6 container images you can use. There are standard ( rhel6 ) and Init ( rhel6-init ) base image available for RHEL 6, but no minimal RHEL 6 image. Likewise, there are no RHEL 6 UBI images. 2.1.1. Using standard Red Hat base images There is a legacy rhel7/rhel image and a UBI ubi7 image on which you can add your own software or additional RHEL 7 software. The contents are nearly identical, with the main differences that the former requires a RHEL paid subscription and the two images draw from different image registries and yum repositories. Standard RHEL base images have a robust set of software features that include the following: init system : All the features of the systemd initialization system you need to manage systemd services are available in the standard base images. The rhel6 base images include a minimalistic init system similar to System V. These init systems let you install RPM packages that are pre-configured to start up services automatically, such as a Web server (httpd) or FTP server (vsftpd). yum : Software needed to install software packages is included via the standard set of yum commands ( yum , yum-config-manager , yumdownloader , and so on). 
When a legacy standard base image is run on a RHEL system, you will be able to enable repositories and add packages as you do directly on a RHEL system, while using entitlements available on the host. For the UBI base images, you have access to free yum repositories for adding and updating software. utilities : The standard base image includes some useful utilities for working inside the container. Utilities that are in this base image that are not in the minimal images include ps , tar , cpio , dmidecode , gzip , lsmod (and other module commands), getfacl (and other acl commands), dmsetup (and other device mapper commands), and others. python : Python runtime libraries and modules (currently Python 2.7) are included in the standard base image. No python packages are included in the minimal base image. 2.1.2. Using minimal Red Hat base images The legacy rhel7-minimal (or rhel7-atomic ) and UBI ubi7-minimal images are stripped-down RHEL images to use when a bare-bones base image in desired. If you are looking for the smallest possible base image to use as part of the larger Red Hat ecosystem, you can start with these minimal images. RHEL minimal images provide a base for your own container images that is less than half the size of the standard image, while still being able to draw on RHEL software repositories and maintain any compliance requirements your software has. Here are some features of the minimal base images: Small size : Minimal images are about 75M on disk and 28M compressed. This makes it less than half the size of the standard images. Software installation (microdnf) : Instead of including the full-blown yum facility for working with software repositories and RPM software packages, the minimal images includes the microdnf utility. Microdnf is a scaled-down version of dnf. It includes only what is needed to enable and disable repositories, as well as install, remove, and update packages. It also has a clean option, to clean out cache after packages have been installed. Based on RHEL packaging : Because minimal images incorporate regular RHEL software RPM packages, with a few features removed such as extra language files or documentation, you can continue to rely on RHEL repositories for building your images. This allows you to still maintain compliance requirements you have that are based on RHEL software. Features of minimal images make them perfect for trying out applications you want to run with RHEL, while carrying the smallest possible amount of overhead. If your goal is just to try to run some simple binaries or pre-packaged software that doesn't have a lot of requirements from the operating system, the minimal images might suit your needs. If your application does have dependencies on other software from RHEL, you can simply use microdnf to install the needed packages at build time. Here are some challenges related to using minimal images: Common utilities missing : What you don't get with minimal images is an initialization and service management system (systemd or System V init), a Python run-time environment, and a bunch of common shell utilities. Although you can install that software later, installing a large set of software, such as systemd, might actually make the container larger than it would be if you were to just use the init container. Older minimal images not supported : Red Hat intends for you to always use the latest version of the minimal images, which is implied by simply requesting rhel-minimal or ubi7-minimal . 
Red Hat does not expect to support older versions of minimal images going forward. Modules for microdnf are not supported : Modules used with the dnf command let you install multiple versions of the same software, when available. The microdnf utility included with minimal images does not support modules. So if modules are required, you should use a non-minimal image (such as the standard or init UBI images, which both include yum). 2.1.3. Using Init Red Hat base images The legacy rhel7-init and UBI ubi7-init images contains the systemd initialization system, making them useful for building images in which you want to run systemd services, such as a web server or file server. The Init image contents are less than what you get with the standard images, but more than what is in the minimal images. Historically, Red Hat Enterprise Linux base container images were designed for Red Hat customers to run enterprise applications, but were not free to redistribute. This can create challenges for some organizations that need to redistribute their applications. That's where the Red Hat Universal Base Images come in. 2.2. How are UBI images different? UBI images were created so you can build your container images on a foundation of official Red Hat software that can be freely shared and deployed. From a technical perspective, they are nearly identical to legacy Red Hat Enterprise Linux images, which means they have great security, performance, and life cycles, but they are released under a different End User License Agreement. Here are some attributes of Red Hat UBI images: Built from a subset of RHEL content : Red Hat Universal Base images are built from a subset of normal Red Hat Enterprise Linux content. All of the content used to build selected UBI images is released in a publicly available set of yum repositories. This lets you install extra packages, as well as update any package in UBI base images. Redistributable : The intent of UBI images is to allow Red Hat customers, partners, ISVs, and others to standardize on one container base image, allowing users to focus on application needs instead of distribution rules. These images can be shared and run in any environment capable of running those images. As long as you follow some basic guidelines, you will be able to freely redistribute your UBI-based images. Base and RHSCL images : Besides the three types of base images, UBI versions of some Red Hat Software Collections (RHSCL) runtime images are available as well. These RHSCL images provide a foundation for applications that can benefit from standard, supported runtimes such as python, php, nodejs, and ruby. Enabled yum repositories : The following yum repositories are enabled within each RHEL 7 UBI image: The ubi-7 repo holds the redistributable subset of RHEL packages you can include in your container. The ubi-7-rhscl repo holds Red Hat Software Collections packages that you can add to a UBI image to help you standardize the environments you use with applications that require particular runtimes. The ubi-7-rhah repo includes RHEL Atomic Host packages needed to manage subscriptions and microdnf (the tiny yum replacement used to install RPM packages on the minimal images). Note that some versions of ubi-minimal images do not have this repo enabled by default. The ubi-7-optional repo includes packages from the RHEL server optional repository. Licensing : You are free to use and redistribute UBI images, provided you adhere to the Red Hat Universal Base Image End User Licensing Agreement . 
Adding UBI RPMs : You can add RPM packages to UBI images from preconfigured UBI repositories. If you happen to be in a disconnected environment, you must whitelist the UBI Content Delivery Network ( https://cdn-ubi.redhat.com ) to use that feature. See the Connect to https://cdn-ubi.redhat.com solution for details. Although the legacy RHEL base images will continue to be supported, UBI images are recommended going forward. For that reason, examples in the rest of this chapter are done with UBI images. 2.3. Get UBI images To find the current set of available Red Hat UBI images, refer to Universal Base Images (UBI): Images, repositories, and packages or search the Red Hat Container Catalog . 2.4. Pull UBI images To pull UBI images to your system so you can use them with tools such as podman, buildah or skopeo, type the following: To check that the images are available on your system, type: When pulled in this way, images are available and usable by podman , buildah , skopeo and the CRI-O container image, but they are not available to the Docker service or docker command. To use these images with Docker, you can run docker pull instead. 2.5. Redistributing UBI images After you pull a UBI image, you are free to push it to your own registry and share it with others. You can upgrade or add to that image from UBI yum repositories as you like. Here is an example of how to push a UBI image to your own or another third-party repository: While there are few restrictions on how you use this image, there are some restrictions about how you can refer to it. For example, you can't call that image Red Hat certified or Red Hat supported unless you certify it through the Red Hat Partner Connect Program , either with Red Hat Container Certification or Red Hat OpenShift Operator Certification. 2.6. Run UBI images To start a container from a UBI image and run the bash shell in that image (so you can look around inside), do the following (type exit when you are done): While in the container: Run rpm -qa to see a list of package inside each container. Type yum list available to see packages available to add to the image from the UBI yum repos. (The yum command is not available in the ubi-minimal containers.) Get source code, as described in the "Getting UBI Container Image Source Code," later in this chapter. On systems that include the Docker service, you can use docker run instead. 2.7. Add software to a running UBI container UBI images are built from 100% Red Hat content. These UBI images also provide a subset of Red Hat Enterprise Linux packages which are freely available to install for use with UBI. To add or update software, UBI images are pre-configured to point to the freely available yum repositories that hold official Red Hat RPMs. To add packages from UBI repos to running UBI containers: On ubi images, the yum command is installed to let you draw packages On ubi-minimal images, the microdnf command (with a smaller feature set) is included instead of yum . Keep in mind that installing and working with software packages directly in running containers is just for adding packages temporarily or learning about the repos. Refer to the "Build a UBI-based image" for more permanent ways of building UBI-based images. When you add software to a UBI container, procedures differ for updating UBI images on a subscribed RHEL host or on an unsubscribed (or non-RHEL) system. Those two ways of working with UBI images are illustrated below. 2.7.1. 
Adding software to a UBI container (subscribed host) If you are running a UBI container on a registered and subscribed RHEL host, the main RHEL Server repository is enabled inside the standard UBI container, along with all the UBI repos. So the full set of Red Hat packages is available. From the UBI minimal container, all UBI repos are enabled by default, but no repos are enabled from the host by default. 2.7.2. Adding software inside the standard UBI container To ensure the containers you build can be redistributed, disable subscription management in the standard UBI image when you add software. If you disable the subscription-manager plugin, only packages from the freely available repos are used when you add software. With a shell open inside a standard UBI base image container ( ubi7/ubi ) from a subscribed RHEL host, run the following command to add a package to that container (for example, the bzip2 package): To add software that is in the RHEL server repo but not in the UBI repos to a standard UBI container, leave the subscription-manager plugin intact and just install the package: To install a package that is in a different host repo from inside the standard UBI container, you have to explicitly enable the repo you need. For example: Warning Installing Red Hat packages that are not inside the Red Hat UBI repos might limit how widely you can distribute the container outside of subscribed hosts. 2.7.3. Adding software inside the minimal UBI container UBI yum repositories are enabled inside the UBI minimal image by default. To install the same package demonstrated earlier (bzip2) from one of those UBI yum repositories on a subscribed RHEL host from the UBI minimal container, type: To install packages inside a minimal UBI container from repos available on a subscribed host that are not part of a UBI yum repo, you would have to explicitly enable those repos. For example: Warning Using non-UBI RHEL repositories to install packages in your UBI images could restrict your ability to share those images to run outside of subscribed RHEL systems. 2.7.4. Adding software to a UBI container (unsubscribed host) To add software packages to a running container that is either on an unsubscribed RHEL host or some other Linux system, you don't have to disable the subscription-manager plugin. For example: To install that package on an unsubscribed RHEL host from the UBI minimal container, type: As noted earlier, both of these means of adding software to a running UBI container are not intended for creating permanent UBI-based container images. For that, you should build new layers on to UBI images, as described in the following section. 2.8. Build a UBI-based image You can build UBI-based container images in the same way you build other images, with one exception. You should disable Red Hat subscriptions when you actually build the images, if you want to be sure that your image only contains Red Hat software that you can redistribute. Here's an example of creating a UBI-based Web server container from a Dockerfile with the buildah utility: Note For ubi7/ubi-minimal images, use microdnf instead of yum below: Create a Dockerfile : Add a Dockerfile with the following contents to a new directory: Build the new image : While in that directory, use buildah to create a new UBI layered image: Test : Test the UBI layered webserver image: 2.9. Using Red Hat Software Collections runtime images Red Hat Software Collections offers another set of container images that you can use as the basis for your container builds.
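If you want to experiment with one of these runtime images locally, you can pull it in the same way as the base images. This is a sketch only, using the Python 3.6 runtime image named in the list that follows; substitute whichever runtime image you need:
podman pull registry.access.redhat.com/ubi7/python-36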
These images are built on RHEL standard base images, with some already updated as UBI images. Each of these images includes additional software you might want to use for specific runtime environments. So, if you expect to build multiple images that require, for example, php runtime software, you can provide a more consistent platform for those images by starting with a PHP software collections image. Here are examples of Red Hat Software Collections container images built on UBI base images that are available from the Red Hat Registry (registry.access.redhat.com or registry.redhat.io): ubi7/php-72 : PHP 7.2 platform for building and running applications ubi7/nodejs-8 : Node.js 8 platform for building and running applications. Used by Node.js 8 Source-To-Image builds ubi7/ruby-25 : Ruby 2.5 platform for building and running applications ubi7/python-27 : Python 2.7 platform for building and running applications ubi7/python-36 : Python 3.6 platform for building and running applications ubi7/s2i-core : Base image with essential libraries and tools used as a base for builder images like perl, python, ruby, and so on ubi7/s2i-base : Base image for Source-to-Image builds Because these UBI images contain the same basic software as their legacy image counterparts, you can learn about those images from the Using Red Hat Software Collections Container Images guide. Be sure to use the UBI image names to pull those images. Red Hat Software Collections container images are updated every time RHEL base images are updated. Search the Red Hat Container Catalog for details on any of these images. For more information on update schedules, see Red Hat Container Image Updates . 2.10. Getting UBI Container Image Source Code You can download the source code for all UBI base images (excluding the minimal images) by starting up those images with a bash shell and running the following set of commands from inside that container: The source code RPM for each binary RPM package is downloaded to the current directory. Because the UBI minimal images include a subset of RPMs from the regular UBI images, running the yumdownloader loop just shown will get you the minimal image packages as well. 2.11. Tips and tricks for using UBI images Here are a few issues to consider when working with UBI images: Hundreds of RPM packages used in existing Red Hat Software Collections runtime images are stored in the yum repositories packaged with the new UBI images. Feel free to install those RPMs on your UBI images to emulate the runtime (python, php, nodejs, etc.) that interests you. Because some language files and documentation have been stripped out of the minimal UBI image ( ubi7/ubi-minimal ), running rpm -Va inside that container will show the contents of many packages as being missing or modified. If having a complete list of files inside that container is important to you, consider using a tool such as Tripwire to record the files in the container and check it later. After a layered image has been created, use podman history to check which UBI image it was built on. For example, after completing the webserver example shown earlier, type podman history johndoe/webserver to see that the image it was built on includes the image ID of the UBI image you added on the FROM line of the Dockerfile. 2.12. How to request new features in UBI? Red Hat partners and customers can request new features, including package requests, by filing a support ticket through standard methods.
Non-Red Hat customers do not receive support, but can file requests through the standard Red Hat Bugzilla for the appropriate RHEL product. See also: Red Hat Bugzilla Queue 2.13. How to file a support case for UBI? Red Hat partners and customers can file support tickets through standard methods when running UBI on a supported Red Hat platform (OpenShift/RHEL). Red Hat support staff will guide partners and customers through the process. See also: Open a Support Case | [
"podman pull registry.access.redhat.com/ubi7/ubi:latest podman pull registry.access.redhat.com/ubi7/ubi-minimal:latest",
"podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi7/ubi-minimal latest c94a444803e3 8 hours ago 80.9 MB registry.access.redhat.com/ubi7/ubi latest 40b488f87628 17 hours ago 214 MB",
"podman pull registry.redhat.io/ubi7/ubi podman tag registry.access.redhat.com/ubi7/ubi registry.example.com:5000/ubi7/ubi podman push registry.example.com:5000/ubi7/ubi",
"podman run --rm -it registry.access.redhat.com/ubi7/ubi-minimal:latest /bin/bash podman run --rm -it registry.access.redhat.com/ubi7/ubi:latest /bin/bash bash-4.2#",
"yum install --disableplugin=subscription-manager bzip2",
"yum install zsh",
"yum install --enablerepo=rhel-7-server-optional-rpms zsh-html",
"microdnf install bzip2",
"microdnf install --enablerepo=rhel-7-server-rpms zsh microdnf install --enablerepo=rhel-7-server-rpms --enablerepo=rhel-7-server-optional-rpms zsh-html",
"yum install bzip2",
"microdnf install bzip2",
"RUN microdnf update -y && rm -rf /var/cache/yum RUN microdnf install httpd -y && microdnf clean all",
"FROM registry.access.redhat.com/ubi7/ubi USER root LABEL maintainer=\"John Doe\" Update image RUN yum update --disableplugin=subscription-manager -y && rm -rf /var/cache/yum RUN yum install --disableplugin=subscription-manager httpd -y && rm -rf /var/cache/yum Add default Web page and expose port RUN echo \"The Web Server is Running\" > /var/www/html/index.html EXPOSE 80 Start the service CMD [\"-D\", \"FOREGROUND\"] ENTRYPOINT [\"/usr/sbin/httpd\"]",
"buildah bud -t johndoe/webserver . STEP 1: FROM registry.access.redhat.com/ubi7/ubi:latest STEP 2: USER root STEP 3: MAINTAINER John Doe STEP 4: RUN yum update --disableplugin=subscription-manager -y . . . No packages marked for update STEP 5: RUN yum install --disableplugin=subscription-manager httpd -y Loaded plugins: ovl, product-id, search-disabled-repos Resolving Dependencies --> Running transaction check ============================================================= Package Arch Version Repository Size ============================================================= Installing: httpd x86_64 2.4.6-88.el7 ubi-7 1.2 M Installing for dependencies: apr x86_64 1.4.8-3.el7_4.1 ubi-7 103 k apr-util x86_64 1.5.2-6.el7 ubi-7 92 k httpd-tools x86_64 2.4.6-88.el7 ubi-7 90 k mailcap noarch 2.1.41-2.el7 ubi-7 31 k redhat-logos noarch 70.0.3-7.el7 ubi-7 13 M Transaction Summary Complete! STEP 6: RUN echo \"The Web Server is Running\" > /var/www/html/index.html STEP 7: EXPOSE 80 STEP 8: CMD [\"-D\", \"FOREGROUND\"] STEP 9: ENTRYPOINT [\"/usr/sbin/httpd\"] STEP 10: COMMIT Writing manifest to image destination Storing signatures --> 36a604cc0dd3657b46f8762d7ef69873f65e16343b54c63096e636c80f0d68c7",
"podman run -d -p 80:80 johndoe/webserver bbe98c71d18720d966e4567949888dc4fb86eec7d304e785d5177168a5965f64 curl http://localhost/index.html The Web Server is Running",
"for i in `rpm -qa` do yumdownloader --source USDi done"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/getting_started_with_containers/using_red_hat_universal_base_images_standard_minimal_and_runtimes |
Chapter 13. Configure InfiniBand and RDMA Networks | Chapter 13. Configure InfiniBand and RDMA Networks 13.1. Understanding InfiniBand and RDMA technologies InfiniBand refers to two distinct things. The first is a physical link-layer protocol for InfiniBand networks. The second is a higher level programming API called the InfiniBand Verbs API. The InfiniBand Verbs API is an implementation of a remote direct memory access ( RDMA ) technology. RDMA provides direct access from the memory of one computer to the memory of another without involving either computer's operating system. This technology enables high-throughput, low-latency networking with low CPU utilization, which is especially useful in massively parallel computer clusters. In a typical IP data transfer, application X on machine A sends some data to application Y on machine B. As part of the transfer, the kernel on machine B must first receive the data, decode the packet headers, determine that the data belongs to application Y, wake up application Y, wait for application Y to perform a read syscall into the kernel, then it must manually copy the data from the kernel's own internal memory space into the buffer provided by application Y. This process means that most network traffic must be copied across the system's main memory bus at least twice (once when the host adapter uses DMA to put the data into the kernel-provided memory buffer, and again when the kernel moves the data to the application's memory buffer) and it also means the computer must execute a number of context switches to switch between kernel context and application Y context. Both of these things impose extremely high CPU loads on the system when network traffic is flowing at very high rates and can cause other tasks to slow down. RDMA communications differ from normal IP communications because they bypass kernel intervention in the communication process, and in the process greatly reduce the CPU overhead normally needed to process network communications. The RDMA protocol allows the host adapter in the machine to know when a packet comes in from the network, which application should receive that packet, and where in the application's memory space it should go. Instead of sending the packet to the kernel to be processed and then copied into the user application's memory, it places the contents of the packet directly in the application's buffer without any further intervention necessary. However, this cannot be accomplished using the standard Berkeley Sockets API that most IP networking applications are built upon, so RDMA must provide its own API, the InfiniBand Verbs API, and applications must be ported to this API before they can use RDMA technology directly. Red Hat Enterprise Linux 7 supports both the InfiniBand hardware and the InfiniBand Verbs API. In addition, there are two other supported technologies that allow the InfiniBand Verbs API to be utilized on non-InfiniBand hardware: The Internet Wide Area RDMA Protocol (iWARP) iWARP is a computer networking protocol that implements remote direct memory access (RDMA) for efficient data transfer over Internet Protocol (IP) networks. The RDMA over Converged Ethernet (RoCE) protocol, which was later renamed to InfiniBand over Ethernet (IBoE). RoCE is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network.
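Once the user-space driver packages described in the next section are installed, a quick way to confirm that the Verbs API can see your adapters is to query them with the utilities from the libibverbs-utils package. This is a minimal sketch, assuming that package is installed; the device names listed depend entirely on your hardware:
ibv_devices
ibv_devinfo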
Prerequisites Both iWARP and RoCE technologies have a normal IP network link layer as their underlying technology, and so the majority of their configuration is actually covered in Chapter 3, Configuring IP Networking . For the most part, once their IP networking features are properly configured, their RDMA features are all automatic and will show up as long as the proper drivers for the hardware are installed. The kernel drivers are always included with each kernel Red Hat provides; however, the user-space drivers must be installed manually if the InfiniBand package group was not selected at machine install time. Since Red Hat Enterprise Linux 7.4, all RDMA user-space drivers are merged into the rdma-core package. To install all supported iWARP, RoCE or InfiniBand user-space drivers, enter as root : If you are using Priority Flow Control (PFC) and mlx4-based cards, then edit /etc/modprobe.d/mlx4.conf to instruct the driver which packet priority is configured for the " no-drop " service on the Ethernet switches the cards are plugged into and rebuild the initramfs to include the modified file. Newer mlx5-based cards auto-negotiate PFC settings with the switch and do not need any module option to inform them of the " no-drop " priority or priorities. To set the Mellanox cards to use one or both ports in Ethernet mode, see Section 13.5.4, "Configuring Mellanox cards for Ethernet operation" . With these driver packages installed (in addition to the normal RDMA packages typically installed for any InfiniBand installation), a user should be able to utilize most of the normal RDMA applications to test and see RDMA protocol communication taking place on their adapters. However, not all of the programs included in Red Hat Enterprise Linux 7 properly support iWARP or RoCE/IBoE devices. This is because the connection establishment protocol on iWARP in particular is different from the one used on real InfiniBand link-layer connections. If the program in question uses the librdmacm connection management library, it handles the differences between iWARP and InfiniBand silently and the program should work. If the application tries to do its own connection management, then it must specifically support iWARP or else it does not work.
"~]# yum install libibverbs"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/ch-Configure_InfiniBand_and_RDMA_Networks |
function::user_short_warn | function::user_short_warn Name function::user_short_warn - Retrieves a short value stored in user space Synopsis Arguments addr the user space address to retrieve the short from Description Returns the short value from a given user space address. Returns zero when the user space data is not accessible and warns (but does not abort) about the failure. | [
"user_short_warn:long(addr:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-short-warn |
30.4. Administering VDO | 30.4. Administering VDO 30.4.1. Starting or Stopping VDO To start a given VDO volume, or all VDO volumes, and the associated UDS index(es), storage management utilities should invoke one of these commands: The VDO systemd unit is installed and enabled by default when the vdo package is installed. This unit automatically runs the vdo start --all command at system startup to bring up all activated VDO volumes. See Section 30.4.6, "Automatically Starting VDO Volumes at System Boot" for more information. To stop a given VDO volume, or all VDO volumes, and the associated UDS index(es), use one of these commands: Stopping a VDO volume takes time based on the speed of your storage device and the amount of data that the volume needs to write: The volume always writes around 1GiB for every 1GiB of the UDS index. With a sparse UDS index, the volume additionally writes the amount of data equal to the block map cache size plus up to 8MiB per slab. If restarted after an unclean shutdown, VDO will perform a rebuild to verify the consistency of its metadata and will repair it if necessary. Rebuilds are automatic and do not require user intervention. See Section 30.4.5, "Recovering a VDO Volume After an Unclean Shutdown" for more information on the rebuild process. VDO might rebuild different writes depending on the write mode: In synchronous mode, all writes that were acknowledged by VDO prior to the shutdown will be rebuilt. In asynchronous mode, all writes that were acknowledged prior to the last acknowledged flush request will be rebuilt. In either mode, some writes that were either unacknowledged or not followed by a flush may also be rebuilt. For details on VDO write modes, see Section 30.4.2, "Selecting VDO Write Modes" . 30.4.2. Selecting VDO Write Modes VDO supports three write modes, sync , async , and auto : When VDO is in sync mode, the layers above it assume that a write command writes data to persistent storage. As a result, it is not necessary for the file system or application, for example, to issue FLUSH or Force Unit Access (FUA) requests to cause the data to become persistent at critical points. VDO must be set to sync mode only when the underlying storage guarantees that data is written to persistent storage when the write command completes. That is, the storage must either have no volatile write cache, or have a write through cache. When VDO is in async mode, the data is not guaranteed to be written to persistent storage when a write command is acknowledged. The file system or application must issue FLUSH or FUA requests to ensure data persistence at critical points in each transaction. VDO must be set to async mode if the underlying storage does not guarantee that data is written to persistent storage when the write command completes; that is, when the storage has a volatile write back cache. For information on how to find out if a device uses volatile cache or not, see the section called "Checking for a Volatile Cache" . Warning When VDO is running in async mode, it is not compliant with Atomicity, Consistency, Isolation, Durability (ACID). When there is an application or a file system that assumes ACID compliance on top of the VDO volume, async mode might cause unexpected data loss. The auto mode automatically selects sync or async based on the characteristics of each device. This is the default option. For a more detailed theoretical overview of how write policies operate, see the section called "Overview of VDO Write Policies" . 
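Before changing the policy on an existing volume, it can be helpful to confirm which mode that volume currently uses. The following is a minimal sketch, assuming a volume named my_vdo; the exact label of the field in the output can vary between releases:
vdo status --name=my_vdo | grep -i policy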
To set a write policy, use the --writePolicy option. This can be specified either when creating a VDO volume as in Section 30.3.3, "Creating a VDO Volume" or when modifying an existing VDO volume with the changeWritePolicy subcommand: Important Using the incorrect write policy might result in data loss on power failure. Checking for a Volatile Cache To see whether a device has a writeback cache, read the /sys/block/ block_device /device/scsi_disk/ identifier /cache_type sysfs file. For example: Device sda indicates that it has a writeback cache: Device sdb indicates that it does not have a writeback cache: Additionally, in the kernel boot log, you can find whether the above mentioned devices have a write cache or not: See the Viewing and Managing Log Files chapter in the System Administrator's Guide for more information on reading the system log. In these examples, use the following write policies for VDO: async mode for the sda device sync mode for the sdb device Note You should configure VDO to use the sync write policy if the cache_type value is none or write through . 30.4.3. Removing VDO Volumes A VDO volume can be removed from the system by running: Prior to removing a VDO volume, unmount file systems and stop applications that are using the storage. The vdo remove command removes the VDO volume and its associated UDS index, as well as logical volumes where they reside. 30.4.3.1. Removing an Unsuccessfully Created Volume If a failure occurs when the vdo utility is creating a VDO volume, the volume is left in an intermediate state. This might happen when, for example, the system crashes, power fails, or the administrator interrupts a running vdo create command. To clean up from this situation, remove the unsuccessfully created volume with the --force option: The --force option is required because the administrator might have caused a conflict by changing the system configuration since the volume was unsuccessfully created. Without the --force option, the vdo remove command fails with the following message: 30.4.4. Configuring the UDS Index VDO uses a high-performance deduplication index called UDS to detect duplicate blocks of data as they are being stored. The deduplication window is the number of previously written blocks which the index remembers. The size of the deduplication window is configurable. For a given window size, the index requires a specific amount of RAM and a specific amount of disk space. The size of the window is usually determined by specifying the size of the index memory using the --indexMem= size option. The amount of disk space to use will then be determined automatically. In general, Red Hat recommends using a sparse UDS index for all production use cases. This is an extremely efficient indexing data structure, requiring approximately one-tenth of a byte of DRAM per block in its deduplication window. On disk, it requires approximately 72 bytes of disk space per block. The minimum configuration of this index uses 256 MB of DRAM and approximately 25 GB of space on disk. To use this configuration, specify the --sparseIndex=enabled --indexMem=0.25 options to the vdo create command. This configuration results in a deduplication window of 2.5 TB (meaning it will remember a history of 2.5 TB). For most use cases, a deduplication window of 2.5 TB is appropriate for deduplicating storage pools that are up to 10 TB in size. The default configuration of the index, however, is to use a dense index.
This index is considerably less efficient (by a factor of 10) in DRAM, but it has much lower (also by a factor of 10) minimum required disk space, making it more convenient for evaluation in constrained environments. In general, a deduplication window which is one quarter of the physical size of a VDO volume is a recommended configuration. However, this is not an actual requirement. Even small deduplication windows (compared to the amount of physical storage) can find significant amounts of duplicate data in many use cases. Larger windows may also be used, but in most cases there will be little additional benefit to doing so. Speak with your Red Hat Technical Account Manager representative for additional guidelines on tuning this important system parameter. 30.4.5. Recovering a VDO Volume After an Unclean Shutdown If a volume is restarted without having been shut down cleanly, VDO will need to rebuild a portion of its metadata to continue operating, which occurs automatically when the volume is started. (Also see Section 30.4.5.2, "Forcing a Rebuild" to invoke this process on a volume that was cleanly shut down.) Data recovery depends on the write policy of the device: If VDO was running on synchronous storage and write policy was set to sync , then all data written to the volume will be fully recovered. If the write policy was async , then some writes may not be recovered if they were not made durable by sending VDO a FLUSH command, or a write I/O tagged with the FUA flag (force unit access). This is accomplished from user mode by invoking a data integrity operation like fsync , fdatasync , sync , or umount . 30.4.5.1. Online Recovery In the majority of cases, most of the work of rebuilding an unclean VDO volume can be done after the VDO volume has come back online and while it is servicing read and write requests. Initially, the amount of space available for write requests may be limited. As more of the volume's metadata is recovered, more free space may become available. Furthermore, data written while the VDO is recovering may fail to deduplicate against data written before the crash if that data is in a portion of the volume which has not yet been recovered. Data may be compressed while the volume is being recovered. Previously compressed blocks may still be read or overwritten. During an online recovery, a number of statistics will be unavailable: for example, blocks in use and blocks free . These statistics will become available once the rebuild is complete. 30.4.5.2. Forcing a Rebuild VDO can recover from most hardware and software errors. If a VDO volume cannot be recovered successfully, it is placed in a read-only mode that persists across volume restarts. Once a volume is in read-only mode, there is no guarantee that data has not been lost or corrupted. In such cases, Red Hat recommends copying the data out of the read-only volume and possibly restoring the volume from backup. (The operating mode attribute of vdostats indicates whether a VDO volume is in read-only mode.) If the risk of data corruption is acceptable, it is possible to force an offline rebuild of the VDO volume metadata so the volume can be brought back online and made available. Again, the integrity of the rebuilt data cannot be guaranteed. To force a rebuild of a read-only VDO volume, first stop the volume if it is running: Then restart the volume using the --forceRebuild option: 30.4.6.
Automatically Starting VDO Volumes at System Boot During system boot, the vdo systemd unit automatically starts all VDO devices that are configured as activated . To prevent certain existing volumes from being started automatically, deactivate those volumes by running either of these commands: To deactivate a specific volume: To deactivate all volumes: Conversely, to activate volumes, use one of these commands: To activate a specific volume: To activate all volumes: You can also create a VDO volume that does not start automatically by adding the --activate=disabled option to the vdo create command. For systems that place LVM volumes on top of VDO volumes as well as beneath them (for example, Figure 30.5, "Deduplicated Unified Storage" ), it is vital to start services in the right order: The lower layer of LVM must be started first (in most systems, starting this layer is configured automatically when the LVM2 package is installed). The vdo systemd unit must then be started. Finally, additional scripts must be run in order to start LVM volumes or other services on top of the now running VDO volumes. 30.4.7. Disabling and Re-enabling Deduplication In some instances, it may be desirable to temporarily disable deduplication of data being written to a VDO volume while still retaining the ability to read to and write from the volume. While disabling deduplication will prevent subsequent writes from being deduplicated, data which was already deduplicated will remain so. To stop deduplication on a VDO volume, use the following command: This stops the associated UDS index and informs the VDO volume that deduplication is no longer active. To restart deduplication on a VDO volume, use the following command: This restarts the associated UDS index and informs the VDO volume that deduplication is active again. You can also disable deduplication when creating a new VDO volume by adding the --deduplication=disabled option to the vdo create command. 30.4.8. Using Compression 30.4.8.1. Introduction In addition to block-level deduplication, VDO also provides inline block-level compression using the HIOPS Compression TM technology. While deduplication is the optimal solution for virtual machine environments and backup applications, compression works very well with structured and unstructured file formats that do not typically exhibit block-level redundancy, such as log files and databases. Compression operates on blocks that have not been identified as duplicates. When unique data is seen for the first time, it is compressed. Subsequent copies of data that have already been stored are deduplicated without requiring an additional compression step. The compression feature is based on a parallelized packaging algorithm that enables it to handle many compression operations at once. After first storing the block and responding to the requestor, a best-fit packing algorithm finds multiple blocks that, when compressed, can fit into a single physical block. After it is determined that a particular physical block is unlikely to hold additional compressed blocks, it is written to storage and the uncompressed blocks are freed and reused. By performing the compression and packaging operations after having already responded to the requestor, using compression imposes a minimal latency penalty. 30.4.8.2. Enabling and Disabling Compression VDO volume compression is on by default. When creating a volume, you can disable compression by adding the --compression=disabled option to the vdo create command. 
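As an illustration of that creation-time option, the following sketch creates a new volume with compression disabled from the start; the volume name, backing device, and logical size shown here are placeholders only:
vdo create --name=my_vdo --device=/dev/sdX --vdoLogicalSize=10T --compression=disabled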
Compression can be stopped on an existing VDO volume if necessary to maximize performance or to speed processing of data that is unlikely to compress. To stop compression on a VDO volume, use the following command: To start it again, use the following command: 30.4.9. Managing Free Space Because VDO is a thinly provisioned block storage target, the amount of physical space VDO uses may differ from the size of the volume presented to users of the storage. Integrators and systems administrators can exploit this disparity to save on storage costs but must take care to avoid unexpectedly running out of storage space if the data written does not achieve the expected rate of deduplication. Whenever the number of logical blocks (virtual storage) exceeds the number of physical blocks (actual storage), it becomes possible for file systems and applications to unexpectedly run out of space. For that reason, storage systems using VDO must provide storage administrators with a way of monitoring the size of the VDO's free pool. The size of this free pool may be determined by using the vdostats utility; see Section 30.7.2, "vdostats" for details. The default output of this utility lists information for all running VDO volumes in a format similar to the Linux df utility. For example: When the physical storage capacity of a VDO volume is almost full, VDO reports a warning in the system log, similar to the following: If the size of VDO's free pool drops below a certain level, the storage administrator can take action by deleting data (which will reclaim space whenever the deleted data is not duplicated), adding physical storage, or even deleting LUNs. Important Monitor physical space on your VDO volumes to prevent out-of-space situations. Running out of physical blocks might result in losing recently written, unacknowledged data on the VDO volume. Reclaiming Space on File Systems VDO cannot reclaim space unless file systems communicate that blocks are free using DISCARD , TRIM , or UNMAP commands. For file systems that do not use DISCARD , TRIM , or UNMAP , free space may be manually reclaimed by storing a file consisting of binary zeros and then deleting that file. File systems may generally be configured to issue DISCARD requests in one of two ways: Realtime discard (also online discard or inline discard) When realtime discard is enabled, file systems send REQ_DISCARD requests to the block layer whenever a user deletes a file and frees space. VDO receives these requests and returns space to its free pool, assuming the block was not shared. For file systems that support online discard, you can enable it by setting the discard option at mount time. Batch discard Batch discard is a user-initiated operation that causes the file system to notify the block layer (VDO) of any unused blocks. This is accomplished by sending the file system an ioctl request called FITRIM . You can use the fstrim utility (for example from cron ) to send this ioctl to the file system. For more information on the discard feature, see Section 2.4, "Discard Unused Blocks" . Reclaiming Space Without a File System It is also possible to manage free space when the storage is being used as a block storage target without a file system. For example, a single VDO volume can be carved up into multiple subvolumes by installing the Logical Volume Manager (LVM) on top of it. Before deprovisioning a volume, the blkdiscard command can be used in order to free the space previously used by that logical volume.
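For example, a logical volume that is about to be deprovisioned could be discarded first so that VDO can return its blocks to the free pool; the device path below is a placeholder for your own logical volume:
blkdiscard /dev/mapper/vg01-lv01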
LVM supports the REQ_DISCARD command and will forward the requests to VDO at the appropriate logical block addresses in order to free the space. If other volume managers are being used, they would also need to support REQ_DISCARD , or equivalently, UNMAP for SCSI devices or TRIM for ATA devices. Reclaiming Space on Fibre Channel or Ethernet Network VDO volumes (or portions of volumes) can also be provisioned to hosts on a Fibre Channel storage fabric or an Ethernet network using SCSI target frameworks such as LIO or SCST. SCSI initiators can use the UNMAP command to free space on thinly provisioned storage targets, but the SCSI target framework will need to be configured to advertise support for this command. This is typically done by enabling thin provisioning on these volumes. Support for UNMAP can be verified on Linux-based SCSI initiators by running the following command: In the output, verify that the "Maximum unmap LBA count" value is greater than zero. 30.4.10. Increasing Logical Volume Size Management applications can increase the logical size of a VDO volume using the vdo growLogical subcommand. Once the volume has been grown, the management application should inform any devices or file systems on top of the VDO volume of its new size. The volume may be grown as follows: The use of this command allows storage administrators to initially create VDO volumes which have a logical size small enough to be safe from running out of space. After some period of time, the actual rate of data reduction can be evaluated, and if sufficient, the logical size of the VDO volume can be grown to take advantage of the space savings. 30.4.11. Increasing Physical Volume Size To increase the amount of physical storage available to a VDO volume: Increase the size of the underlying device. The exact procedure depends on the type of the device. For example, to resize an MBR partition, use the fdisk utility as described in Section 13.5, "Resizing a Partition with fdisk" . Use the growPhysical option to add the new physical storage space to the VDO volume: It is not possible to shrink a VDO volume with this command. 30.4.12. Automating VDO with Ansible You can use the Ansible tool to automate VDO deployment and administration. For details, see: Ansible documentation: https://docs.ansible.com/ VDO Ansible module documentation: https://docs.ansible.com/ansible/latest/modules/vdo_module.html | [
"vdo start --name= my_vdo # vdo start --all",
"vdo stop --name= my_vdo # vdo stop --all",
"vdo changeWritePolicy --writePolicy= sync|async|auto --name= vdo_name",
"cat '/sys/block/sda/device/scsi_disk/7:0:0:0/cache_type' write back",
"cat '/sys/block/sdb/device/scsi_disk/1:2:0:0/cache_type' None",
"sd 7:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA sd 1:2:0:0: [sdb] Write cache: disabled, read cache: disabled, supports DPO and FUA",
"vdo remove --name= my_vdo",
"vdo remove --force --name= my_vdo",
"[...] A previous operation failed. Recovery from the failure either failed or was interrupted. Add '--force' to 'remove' to perform the following cleanup. Steps to clean up VDO my_vdo : umount -f /dev/mapper/ my_vdo udevadm settle dmsetup remove my_vdo vdo: ERROR - VDO volume my_vdo previous operation (create) is incomplete",
"vdo stop --name= my_vdo",
"vdo start --name= my_vdo --forceRebuild",
"vdo deactivate --name= my_vdo",
"vdo deactivate --all",
"vdo activate --name= my_vdo",
"vdo activate --all",
"vdo disableDeduplication --name= my_vdo",
"vdo enableDeduplication --name= my_vdo",
"vdo disableCompression --name= my_vdo",
"vdo enableCompression --name= my_vdo",
"Device 1K-blocks Used Available Use% /dev/mapper/ my_vdo 211812352 105906176 105906176 50%",
"Oct 2 17:13:39 system lvm[13863]: Monitoring VDO pool my_vdo. Oct 2 17:27:39 system lvm[13863]: WARNING: VDO pool my_vdo is now 80.69% full. Oct 2 17:28:19 system lvm[13863]: WARNING: VDO pool my_vdo is now 85.25% full. Oct 2 17:29:39 system lvm[13863]: WARNING: VDO pool my_vdo is now 90.64% full. Oct 2 17:30:29 system lvm[13863]: WARNING: VDO pool my_vdo is now 96.07% full.",
"sg_vpd --page=0xb0 /dev/ device",
"vdo growLogical --name= my_vdo --vdoLogicalSize= new_logical_size",
"vdo growPhysical --name= my_vdo"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/vdo-ig-administering-vdo |
11.2. Using the QEMU Guest Agent with libvirt | 11.2. Using the QEMU Guest Agent with libvirt Installing the QEMU guest agent allows various libvirt commands to become more powerful. The guest agent enhances the following virsh commands: virsh shutdown --mode=agent - This shutdown method is more reliable than virsh shutdown --mode=acpi , as virsh shutdown used with the QEMU guest agent is guaranteed to shut down a cooperative guest in a clean state. If the agent is not present, libvirt must instead rely on injecting an ACPI shutdown event, but some guests ignore that event and thus will not shut down. Can be used with the same syntax for virsh reboot . virsh snapshot-create --quiesce - Allows the guest to flush its I/O into a stable state before the snapshot is created, which allows use of the snapshot without having to perform a fsck or losing partial database transactions. The guest agent allows a high level of disk contents stability by providing guest co-operation. virsh domfsfreeze and virsh domfsthaw - Quiesces the guest filesystem in isolation. virsh domfstrim - Instructs the guest to trim its filesystem. virsh domtime - Queries or sets the guest's clock. virsh setvcpus --guest - Instructs the guest to take CPUs offline. virsh domifaddr --source agent - Queries the guest operating system's IP address via the guest agent. virsh domfsinfo - Shows a list of mounted filesystems within the running guest. virsh set-user-password - Sets the password for a user account in the guest. 11.2.1. Creating a Guest Disk Backup libvirt can communicate with qemu-guest-agent to ensure that snapshots of guest virtual machine file systems are consistent internally and ready to use as needed. Guest system administrators can write and install application-specific freeze/thaw hook scripts. Before freezing the filesystems, the qemu-guest-agent invokes the main hook script (included in the qemu-guest-agent package). The freezing process temporarily deactivates all guest virtual machine applications. The snapshot process is comprised of the following steps: File system applications / databases flush working buffers to the virtual disk and stop accepting client connections Applications bring their data files into a consistent state Main hook script returns qemu-guest-agent freezes the filesystems and the management stack takes a snapshot Snapshot is confirmed Filesystem function resumes Thawing happens in reverse order. To create a snapshot of the guest's file system, run the virsh snapshot-create --quiesce --disk-only command (alternatively, run virsh snapshot-create-as guest_name --quiesce --disk-only , explained in further detail in Section 20.39.2, "Creating a Snapshot for the Current Guest Virtual Machine" ). Note An application-specific hook script might need various SELinux permissions in order to run correctly, as is done when the script needs to connect to a socket in order to talk to a database. In general, local SELinux policies should be developed and installed for such purposes. Accessing file system nodes should work out of the box, after issuing the restorecon -FvvR command listed in Table 11.1, "QEMU guest agent package contents" in the table row labeled /etc/qemu-ga/fsfreeze-hook.d/ . The qemu-guest-agent binary RPM includes the following files: Table 11.1. QEMU guest agent package contents File name Description /usr/lib/systemd/system/qemu-guest-agent.service Service control script (start/stop) for the QEMU guest agent. 
/etc/sysconfig/qemu-ga Configuration file for the QEMU guest agent, as it is read by the /usr/lib/systemd/system/qemu-guest-agent.service control script. The settings are documented in the file with shell script comments. /usr/bin/qemu-ga QEMU guest agent binary file. /etc/qemu-ga Root directory for hook scripts. /etc/qemu-ga/fsfreeze-hook Main hook script. No modifications are needed here. /etc/qemu-ga/fsfreeze-hook.d Directory for individual, application-specific hook scripts. The guest system administrator should copy hook scripts manually into this directory, ensure proper file mode bits for them, and then run restorecon -FvvR on this directory. /usr/share/qemu-kvm/qemu-ga/ Directory with sample scripts (for example purposes only). The scripts contained here are not executed. The main hook script, /etc/qemu-ga/fsfreeze-hook logs its own messages, as well as the application-specific script's standard output and error messages, in the following log file: /var/log/qemu-ga/fsfreeze-hook.log . For more information, see the libvirt upstream website . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-using_the_qemu_guest_virtual_machine_agent_protocol_cli-libvirt_commands |
Chapter 38. Google BigQuery | Chapter 38. Google BigQuery Since Camel 2.20 Only producer is supported . The Google BigQuery component provides access to the Cloud BigQuery Infrastructure via the Google Client Services API ( https://developers.google.com/api-client-library/java/apis/bigquery/v2 ). The current implementation does not use gRPC. The current implementation does not support querying BigQuery; it is only a producer. 38.1. Dependencies When using google-bigquery with Red Hat build of Camel Spring Boot, make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-google-bigquery-starter</artifactId> </dependency> 38.2. Authentication Configuration Google BigQuery component authentication is targeted for use with the GCP Service Accounts. For more information, please refer to Google Cloud Platform Auth Guide . Google security credentials can be set explicitly by providing the path to the GCP credentials file location. Or they are set implicitly, where the connection factory falls back on Application Default Credentials . When you have the service account key you can provide authentication credentials to your application code. Google security credentials can be set through the component endpoint: String endpoint = "google-bigquery://project-id:datasetId[:tableId]?serviceAccountKey=/home/user/Downloads/my-key.json"; You can also use the base64 encoded content of the authentication credentials file if you don't want to set a file system path. String endpoint = "google-bigquery://project-id:datasetId[:tableId]?serviceAccountKey=base64:<base64 encoded>"; Or by setting the environment variable GOOGLE_APPLICATION_CREDENTIALS : 38.3. URI Format 38.4. Configuring Options Camel components are configured on two levels: Component level Endpoint level 38.4.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 38.4.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 38.5. Component Options The Google BigQuery component supports 5 options, which are listed below. Name Description Default Type connectionFactory (producer) Autowired ConnectionFactory to obtain connection to Bigquery Service. If not provided the default one will be used. GoogleBigQueryConnectionFactory datasetId (producer) BigQuery Dataset Id.
String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean projectId (producer) Google Cloud Project Id. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 38.6. Endpoint Options The Google BigQuery endpoint is configured using URI syntax: with the following path and query parameters: 38.6.1. Path Parameters (3 parameters) Name Description Default Type projectId (common) Required Google Cloud Project Id. String datasetId (common) Required BigQuery Dataset Id. String tableId (common) BigQuery table id. String 38.6.2. Query Parameters (4 parameters) Name Description Default Type connectionFactory (producer) Autowired ConnectionFactory to obtain connection to Bigquery Service. If not provided the default one will be used. GoogleBigQueryConnectionFactory useAsInsertId (producer) Field name to use as insert id. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean serviceAccountKey (security) Service account key in json format to authenticate an application as a service account to google cloud platform. String 38.7. Message Headers The Google BigQuery component supports 4 message header(s), which is/are listed below: Name Description Default Type CamelGoogleBigQueryTableSuffix (producer) Constant: TABLE_SUFFIX Table suffix to use when inserting data. String CamelGoogleBigQueryTableId (producer) Constant: TABLE_ID Table id where data will be submitted. If specified will override endpoint configuration. String CamelGoogleBigQueryInsertId (producer) Constant: INSERT_ID InsertId to use when inserting data. String CamelGoogleBigQueryPartitionDecorator (producer) Constant: PARTITION_DECORATOR Partition decorator to indicate partition to use when inserting data. String 38.8. Producer Endpoints Producer endpoints can accept and deliver to BigQuery individual and grouped exchanges alike. Grouped exchanges have Exchange.GROUPED_EXCHANGE property set. 
Google BigQuery producer will send a grouped exchange in a single API call unless different table suffix or partition decorators are specified in which case it will break it down to ensure data is written with the correct suffix or partition decorator. Google BigQuery endpoint expects the payload to be either a map or list of maps. A payload containing a map will insert a single row and a payload containing a list of maps will insert a row for each entry in the list. 38.9. Template tables Templated tables can be specified using the GoogleBigQueryConstants.TABLE_SUFFIX header. For example, the following route will create tables and insert records sharded on a per-day basis: from("direct:start") .header(GoogleBigQueryConstants.TABLE_SUFFIX, "_${date:now:yyyyMMdd}") .to("google-bigquery:sampleDataset:sampleTable") Note It is recommended to use partitioning for this use case. For more information about template tables, see Template Tables . 38.10. Partitioning Partitioning is specified when creating a table and, if set, data will be automatically partitioned into separate tables. When inserting data, a specific partition can be specified by setting the GoogleBigQueryConstants.PARTITION_DECORATOR header on the exchange. For more information about partitioning, see Creating partitioned tables . 38.11. Ensuring data consistency An insert id can be set on the exchange with the header GoogleBigQueryConstants.INSERT_ID or by specifying query parameter useAsInsertId . Because an insert id needs to be specified per row, the exchange header cannot be used when the payload is a list. If the payload is a list then the GoogleBigQueryConstants.INSERT_ID will be ignored. In that case, use the query parameter useAsInsertId . For more information, see Data consistency . 38.12. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.google-bigquery-sql.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.google-bigquery-sql.connection-factory ConnectionFactory to obtain connection to Bigquery Service. If not provided the default one will be used. The option is a org.apache.camel.component.google.bigquery.GoogleBigQueryConnectionFactory type. GoogleBigQueryConnectionFactory camel.component.google-bigquery-sql.enabled Whether to enable auto configuration of the google-bigquery-sql component. This is enabled by default. Boolean camel.component.google-bigquery-sql.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.google-bigquery-sql.project-id Google Cloud Project Id.
String camel.component.google-bigquery.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.google-bigquery.connection-factory ConnectionFactory to obtain connection to Bigquery Service. If not provided the default one will be used. The option is a org.apache.camel.component.google.bigquery.GoogleBigQueryConnectionFactory type. GoogleBigQueryConnectionFactory camel.component.google-bigquery.dataset-id BigQuery Dataset Id. String camel.component.google-bigquery.enabled Whether to enable auto configuration of the google-bigquery component. This is enabled by default. Boolean camel.component.google-bigquery.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.google-bigquery.project-id Google Cloud Project Id. String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-google-bigquery-starter</artifactId> </dependency>",
"String endpoint = \"google-bigquery://project-id:datasetId[:tableId]?serviceAccountKey=/home/user/Downloads/my-key.json\";",
"String endpoint = \"google-bigquery://project-id:datasetId[:tableId]?serviceAccountKey=base64:<base64 encoded>\";",
"export GOOGLE_APPLICATION_CREDENTIALS=\"/home/user/Downloads/my-key.json\"",
"google-bigquery://project-id:datasetId[:tableId]?[options]",
"google-bigquery:projectId:datasetId:tableId",
"from(\"direct:start\") .header(GoogleBigQueryConstants.TABLE_SUFFIX, \"_USD{date:now:yyyyMMdd}\") .to(\"google-bigquery:sampleDataset:sampleTable\")"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-google-bigquery-component-starter |
Chapter 4. Networking Operators overview | Chapter 4. Networking Operators overview OpenShift Container Platform supports multiple types of networking Operators. You can manage the cluster networking using these networking Operators. 4.1. Cluster Network Operator The Cluster Network Operator (CNO) deploys and manages the cluster network components in an OpenShift Container Platform cluster. This includes deployment of the Container Network Interface (CNI) network plugin selected for the cluster during installation. For more information, see Cluster Network Operator in OpenShift Container Platform . 4.2. DNS Operator The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. This enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. For more information, see DNS Operator in OpenShift Container Platform . 4.3. Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to external clients. The Ingress Operator implements the Ingress Controller API and is responsible for enabling external access to OpenShift Container Platform cluster services. For more information, see Ingress Operator in OpenShift Container Platform . 4.4. External DNS Operator The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform. For more information, see Understanding the External DNS Operator . 4.5. Ingress Node Firewall Operator The Ingress Node Firewall Operator uses an extended Berkeley Packet Filter (eBPF) and eXpress Data Path (XDP) plugin to process node firewall rules, update statistics and generate events for dropped traffic. The operator manages ingress node firewall resources, verifies firewall configuration, does not allow incorrectly configured rules that can prevent cluster access, and loads ingress node firewall XDP programs to the selected interfaces in the rule's object(s). For more information, see Understanding the Ingress Node Firewall Operator 4.6. Network Observability Operator The Network Observability Operator is an optional Operator that allows cluster administrators to observe the network traffic for OpenShift Container Platform clusters. The Network Observability Operator uses the eBPF technology to create network flows. The network flows are then enriched with OpenShift Container Platform information and stored in Loki. You can view and analyze the stored network flows information in the OpenShift Container Platform console for further insight and troubleshooting. For more information, see About Network Observability Operator . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/networking-operators-overview |
Chapter 4. Managing a Red Hat Ceph Storage cluster using cephadm-ansible modules | Chapter 4. Managing a Red Hat Ceph Storage cluster using cephadm-ansible modules As a storage administrator, you can use cephadm-ansible modules in Ansible playbooks to administer your Red Hat Ceph Storage cluster. The cephadm-ansible package provides several modules that wrap cephadm calls to let you write your own unique Ansible playbooks to administer your cluster. Note At this time, cephadm-ansible modules only support the most important tasks. Any operation not covered by cephadm-ansible modules must be completed using either the command or shell Ansible modules in your playbooks. 4.1. The cephadm-ansible modules The cephadm-ansible modules are a collection of modules that simplify writing Ansible playbooks by providing a wrapper around cephadm and ceph orch commands. You can use the modules to write your own unique Ansible playbooks to administer your cluster using one or more of the modules. The cephadm-ansible package includes the following modules: cephadm_bootstrap ceph_orch_host ceph_config ceph_orch_apply ceph_orch_daemon cephadm_registry_login 4.2. The cephadm-ansible modules options The following tables list the available options for the cephadm-ansible modules. Options listed as required need to be set when using the modules in your Ansible playbooks. Options listed with a default value of true indicate that the option is automatically set when using the modules and you do not need to specify it in your playbook. For example, for the cephadm_bootstrap module, the Ceph Dashboard is installed unless you set dashboard: false . Table 4.1. Available options for the cephadm_bootstrap module. cephadm_bootstrap Description Required Default mon_ip Ceph Monitor IP address. true image Ceph container image. false docker Use docker instead of podman . false fsid Define the Ceph FSID. false pull Pull the Ceph container image. false true dashboard Deploy the Ceph Dashboard. false true dashboard_user Specify a specific Ceph Dashboard user. false dashboard_password Ceph Dashboard password. false monitoring Deploy the monitoring stack. false true firewalld Manage firewall rules with firewalld. false true allow_overwrite Allow overwrite of existing --output-config, --output-keyring, or --output-pub-ssh-key files. false false registry_url URL for custom registry. false registry_username Username for custom registry. false registry_password Password for custom registry. false registry_json JSON file with custom registry login information. false ssh_user SSH user to use for cephadm ssh to hosts. false ssh_config SSH config file path for cephadm SSH client. false allow_fqdn_hostname Allow hostname that is a fully-qualified domain name (FQDN). false false cluster_network Subnet to use for cluster replication, recovery and heartbeats. false Table 4.2. Available options for the ceph_orch_host module. ceph_orch_host Description Required Default fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false name Name of the host to add, remove, or update. true address IP address of the host. true when state is present . set_admin_label Set the _admin label on the specified host. false false labels The list of labels to apply to the host. false [] state If set to present , it ensures the name specified in name is present. If set to absent , it removes the host specified in name . If set to drain , it schedules to remove all daemons from the host specified in name . 
false present Table 4.3. Available options for the ceph_config module ceph_config Description Required Default fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false action Whether to set or get the parameter specified in option . false set who Which daemon to set the configuration to. true option Name of the parameter to set or get . true value Value of the parameter to set. true if action is set Table 4.4. Available options for the ceph_orch_apply module. ceph_orch_apply Description Required fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false spec The service specification to apply. true Table 4.5. Available options for the ceph_orch_daemon module. ceph_orch_daemon Description Required fsid The FSID of the Ceph cluster to interact with. false image The Ceph container image to use. false state The desired state of the service specified in name . true If started , it ensures the service is started. If stopped , it ensures the service is stopped. If restarted , it will restart the service. daemon_id The ID of the service. true daemon_type The type of service. true Table 4.6. Available options for the cephadm_registry_login module cephadm_registry_login Description Required Default state Login or logout of a registry. false login docker Use docker instead of podman . false registry_url The URL for custom registry. false registry_username Username for custom registry. true when state is login . registry_password Password for custom registry. true when state is login . registry_json The path to a JSON file. This file must be present on remote hosts prior to running this task. This option is currently not supported. 4.3. Bootstrapping a storage cluster using the cephadm_bootstrap and cephadm_registry_login modules As a storage administrator, you can bootstrap a storage cluster using Ansible by using the cephadm_bootstrap and cephadm_registry_login modules in your Ansible playbook. Prerequisites An IP address for the first Ceph Monitor container, which is also the IP address for the first node in the storage cluster. Login access to registry.redhat.io . A minimum of 10 GB of free space for /var/lib/containers/ . Red Hat Enterprise Linux 8.10 or 9.4 with ansible-core bundled into AppStream. Installation of the cephadm-ansible package on the Ansible administration node. Passwordless SSH is set up on all hosts in the storage cluster. Hosts are registered with CDN. Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create the hosts file and add hosts, labels, and monitor IP address of the first host in the storage cluster: Syntax Example Run the preflight playbook: Syntax Example Create a playbook to bootstrap your cluster: Syntax Example Run the playbook: Syntax Example Verification Review the Ansible output after running the playbook. 4.4. Adding or removing hosts using the ceph_orch_host module As a storage administrator, you can add and remove hosts in your storage cluster by using the ceph_orch_host module in your Ansible playbook. Prerequisites A running Red Hat Ceph Storage cluster. Register the nodes to the CDN and attach subscriptions. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. New hosts have the storage cluster's public SSH key. 
For more information about copying the storage cluster's public SSH keys to new hosts, see Adding hosts . Procedure Use the following procedure to add new hosts to the cluster: Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Add the new hosts and labels to the Ansible inventory file. Syntax Example Run the preflight playbook with the --limit option: Syntax Example The preflight playbook installs podman , lvm2 , chrony , and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory. Create a playbook to add the new hosts to the cluster: Syntax Note By default, Ansible executes all tasks on the host that matches the hosts line of your playbook. The ceph orch commands must run on the host that contains the admin keyring and the Ceph configuration file. Use the delegate_to keyword to specify the admin host in your cluster. Example In this example, the playbook adds the new hosts to the cluster and displays a current list of hosts. Run the playbook to add additional hosts to the cluster: Syntax Example Use the following procedure to remove hosts from the cluster: Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook to remove a host or hosts from the cluster: Syntax Example In this example, the playbook tasks drain all daemons on host07 , removes the host from the cluster, and displays a current list of hosts. Run the playbook to remove host from the cluster: Syntax Example Verification Review the Ansible task output displaying the current list of hosts in the cluster: Example 4.5. Setting configuration options using the ceph_config module As a storage administrator, you can set or get Red Hat Ceph Storage configuration options using the ceph_config module. Prerequisites A running Red Hat Ceph Storage cluster. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. The Ansible inventory file contains the cluster and admin hosts. For more information about adding hosts to your storage cluster, see Adding or removing hosts using the ceph_orch_host module . Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook with configuration changes: Syntax Example In this example, the playbook first sets the mon_allow_pool_delete option to false . The playbook then gets the current mon_allow_pool_delete setting and displays the value in the Ansible output. Run the playbook: Syntax Example Verification Review the output from the playbook tasks. Example Additional Resources See the Red Hat Ceph Storage Configuration Guide for more details on configuration options. 4.6. Applying a service specification using the ceph_orch_apply module As a storage administrator, you can apply service specifications to your storage cluster using the ceph_orch_apply module in your Ansible playbooks. A service specification is a data structure to specify the service attributes and configuration settings that is used to deploy the Ceph service. You can use a service specification to deploy Ceph service types like mon , crash , mds , mgr , osd , rdb , or rbd-mirror . Prerequisites A running Red Hat Ceph Storage cluster. 
Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. The Ansible inventory file contains the cluster and admin hosts. For more information about adding hosts to your storage cluster, see Adding or removing hosts using the ceph_orch_host module . Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook with the service specifications: Syntax Example In this example, the playbook deploys the Ceph OSD service on all hosts with the label osd . Run the playbook: Syntax Example Verification Review the output from the playbook tasks. Additional Resources See the Red Hat Ceph Storage Operations Guide for more details on service specification options. 4.7. Managing Ceph daemon states using the ceph_orch_daemon module As a storage administrator, you can start, stop, and restart Ceph daemons on hosts using the ceph_orch_daemon module in your Ansible playbooks. Prerequisites A running Red Hat Ceph Storage cluster. Ansible user with sudo and passwordless SSH access to all nodes in the storage cluster. Installation of the cephadm-ansible package on the Ansible administration node. The Ansible inventory file contains the cluster and admin hosts. For more information about adding hosts to your storage cluster, see Adding or removing hosts using the ceph_orch_host module . Procedure Log in to the Ansible administration node. Navigate to the /usr/share/cephadm-ansible directory on the Ansible administration node: Example Create a playbook with daemon state changes: Syntax Example In this example, the playbook starts the OSD with an ID of 0 and stops a Ceph Monitor with an id of host02 . Run the playbook: Syntax Example Verification Review the output from the playbook tasks. | [
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi INVENTORY_FILE HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address=10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\"",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: BOOTSTRAP_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: -name: NAME_OF_TASK cephadm_registry_login: state: STATE registry_url: REGISTRY_URL registry_username: REGISTRY_USER_NAME registry_password: REGISTRY_PASSWORD - name: NAME_OF_TASK cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: DASHBOARD_USER dashboard_password: DASHBOARD_PASSWORD allow_fqdn_hostname: ALLOW_FQDN_HOSTNAME cluster_network: NETWORK_CIDR",
"[ceph-admin@admin cephadm-ansible]USD sudo vi bootstrap.yml --- - name: bootstrap the cluster hosts: host01 become: true gather_facts: false tasks: - name: login to registry cephadm_registry_login: state: login registry_url: registry.redhat.io registry_username: user1 registry_password: mypassword1 - name: bootstrap initial cluster cephadm_bootstrap: mon_ip: \"{{ monitor_address }}\" dashboard_user: mydashboarduser dashboard_password: mydashboardpassword allow_fqdn_hostname: true cluster_network: 10.10.128.0/28",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml -vvv",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts bootstrap.yml -vvv",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi INVENTORY_FILE NEW_HOST1 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST2 labels=\"[' LABEL1 ', ' LABEL2 ']\" NEW_HOST3 labels=\"[' LABEL1 ']\" [admin] ADMIN_HOST monitor_address= MONITOR_IP_ADDRESS labels=\"[' ADMIN_LABEL ', ' LABEL1 ', ' LABEL2 ']\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi hosts host02 labels=\"['mon', 'mgr']\" host03 labels=\"['mon', 'mgr']\" host04 labels=\"['osd']\" host05 labels=\"['osd']\" host06 labels=\"['osd']\" [admin] host01 monitor_address= 10.10.128.68 labels=\"['_admin', 'mon', 'mgr']\"",
"ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit NEWHOST",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin=rhcs\" --limit host02",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: HOST_TO_DELEGATE_TASK_TO - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: CEPH_COMMAND_TO_RUN register: REGISTER_NAME - name: NAME_OF_TASK when: inventory_hostname in groups['admin'] debug: msg: \"{{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi add-hosts.yml --- - name: add additional hosts to the cluster hosts: all become: true gather_facts: true tasks: - name: add hosts to the cluster ceph_orch_host: name: \"{{ ansible_facts['hostname'] }}\" address: \"{{ ansible_facts['default_ipv4']['address'] }}\" labels: \"{{ labels }}\" delegate_to: host01 - name: list hosts in the cluster when: inventory_hostname in groups['admin'] ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts when: inventory_hostname in groups['admin'] debug: msg: \"{{ host_list.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts add-hosts.yml",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: NAME_OF_PLAY hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE - name: NAME_OF_TASK ceph_orch_host: name: HOST_TO_REMOVE state: STATE retries: NUMBER_OF_RETRIES delay: DELAY until: CONTINUE_UNTIL register: REGISTER_NAME - name: NAME_OF_TASK ansible.builtin.shell: cmd: ceph orch host ls register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \"{{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi remove-hosts.yml --- - name: remove host hosts: host01 become: true gather_facts: true tasks: - name: drain host07 ceph_orch_host: name: host07 state: drain - name: remove host from the cluster ceph_orch_host: name: host07 state: absent retries: 20 delay: 1 until: result is succeeded register: result - name: list hosts in the cluster ansible.builtin.shell: cmd: ceph orch host ls register: host_list - name: print current list of hosts debug: msg: \"{{ host_list.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts remove-hosts.yml",
"TASK [print current hosts] ****************************************************************************************************** Friday 24 June 2022 14:52:40 -0400 (0:00:03.365) 0:02:31.702 *********** ok: [host01] => msg: |- HOST ADDR LABELS STATUS host01 10.10.128.68 _admin mon mgr host02 10.10.128.69 mon mgr host03 10.10.128.70 mon mgr host04 10.10.128.71 osd host05 10.10.128.72 osd host06 10.10.128.73 osd",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION value: VALUE_OF_PARAMETER_TO_SET - name: NAME_OF_TASK ceph_config: action: GET_OR_SET who: DAEMON_TO_SET_CONFIGURATION_TO option: CEPH_CONFIGURATION_OPTION register: REGISTER_NAME - name: NAME_OF_TASK debug: msg: \" MESSAGE_TO_DISPLAY {{ REGISTER_NAME .stdout }}\"",
"[ceph-admin@admin cephadm-ansible]USD sudo vi change_configuration.yml --- - name: set pool delete hosts: host01 become: true gather_facts: false tasks: - name: set the allow pool delete option ceph_config: action: set who: mon option: mon_allow_pool_delete value: true - name: get the allow pool delete setting ceph_config: action: get who: mon option: mon_allow_pool_delete register: verify_mon_allow_pool_delete - name: print current mon_allow_pool_delete setting debug: msg: \"the value of 'mon_allow_pool_delete' is {{ verify_mon_allow_pool_delete.stdout }}\"",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts change_configuration.yml",
"TASK [print current mon_allow_pool_delete setting] ************************************************************* Wednesday 29 June 2022 13:51:41 -0400 (0:00:05.523) 0:00:17.953 ******** ok: [host01] => msg: the value of 'mon_allow_pool_delete' is true",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: HOSTS_OR_HOST_GROUPS become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_apply: spec: | service_type: SERVICE_TYPE service_id: UNIQUE_NAME_OF_SERVICE placement: host_pattern: ' HOST_PATTERN_TO_SELECT_HOSTS ' label: LABEL spec: SPECIFICATION_OPTIONS :",
"[ceph-admin@admin cephadm-ansible]USD sudo vi deploy_osd_service.yml --- - name: deploy osd service hosts: host01 become: true gather_facts: true tasks: - name: apply osd spec ceph_orch_apply: spec: | service_type: osd service_id: osd placement: host_pattern: '*' label: osd spec: data_devices: all: true",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts deploy_osd_service.yml",
"[ceph-admin@admin ~]USD cd /usr/share/cephadm-ansible",
"sudo vi PLAYBOOK_FILENAME .yml --- - name: PLAY_NAME hosts: ADMIN_HOST become: USE_ELEVATED_PRIVILEGES gather_facts: GATHER_FACTS_ABOUT_REMOTE_HOSTS tasks: - name: NAME_OF_TASK ceph_orch_daemon: state: STATE_OF_SERVICE daemon_id: DAEMON_ID daemon_type: TYPE_OF_SERVICE",
"[ceph-admin@admin cephadm-ansible]USD sudo vi restart_services.yml --- - name: start and stop services hosts: host01 become: true gather_facts: false tasks: - name: start osd.0 ceph_orch_daemon: state: started daemon_id: 0 daemon_type: osd - name: stop mon.host02 ceph_orch_daemon: state: stopped daemon_id: host02 daemon_type: mon",
"ansible-playbook -i INVENTORY_FILE _PLAYBOOK_FILENAME .yml",
"[ceph-admin@admin cephadm-ansible]USD ansible-playbook -i hosts restart_services.yml"
] | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/installation_guide/managing-a-red-hat-ceph-storage-cluster-using-cephadm-ansible-modules |
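As noted in the chapter above, any operation that the cephadm-ansible modules do not cover can still be scripted with the standard command or shell Ansible modules. The following playbook is a minimal sketch of that pattern rather than an example from this guide; the host name host01 and the playbook name check_health.yml are illustrative assumptions:

---
- name: run a ceph command not covered by the cephadm-ansible modules
  hosts: host01
  become: true
  gather_facts: false
  tasks:
    - name: query overall cluster health
      ansible.builtin.command:
        cmd: ceph health detail
      register: health
      changed_when: false
    - name: print cluster health
      ansible.builtin.debug:
        msg: "{{ health.stdout }}"

It runs the same way as the other playbooks in this chapter, for example ansible-playbook -i hosts check_health.yml.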
Chapter 12. Adding and removing Kafka brokers and ZooKeeper nodes | Chapter 12. Adding and removing Kafka brokers and ZooKeeper nodes In a Kafka cluster, managing the addition and removal of brokers and ZooKeeper nodes is critical to maintaining a stable and scalable system. When you add to the number of available brokers, you can configure the default replication factor and minimum in-sync replicas for topics across the brokers. You can use dynamic reconfiguration to add and remove ZooKeeper nodes from an ensemble without disruption. 12.1. Scaling clusters by adding or removing brokers Scaling Kafka clusters by adding brokers can increase the performance and reliability of the cluster. Adding more brokers increases available resources, allowing the cluster to handle larger workloads and process more messages. It can also improve fault tolerance by providing more replicas and backups. Conversely, removing underutilized brokers can reduce resource consumption and improve efficiency. Scaling must be done carefully to avoid disruption or data loss. By redistributing partitions across all brokers in the cluster, the resource utilization of each broker is reduced, which can increase the overall throughput of the cluster. Note To increase the throughput of a Kafka topic, you can increase the number of partitions for that topic. This allows the load of the topic to be shared between different brokers in the cluster. However, if every broker is constrained by a specific resource (such as I/O), adding more partitions will not increase the throughput. In this case, you need to add more brokers to the cluster. Adding brokers when running a multi-node Kafka cluster affects the number of brokers in the cluster that act as replicas. The actual replication factor for topics is determined by settings for the default.replication.factor and min.insync.replicas , and the number of available brokers. For example, a replication factor of 3 means that each partition of a topic is replicated across three brokers, ensuring fault tolerance in the event of a broker failure. Example replica configuration default.replication.factor = 3 min.insync.replicas = 2 When you add or remove brokers, Kafka does not automatically reassign partitions. The best way to do this is using Cruise Control. You can use Cruise Control's add-brokers and remove-brokers modes when scaling a cluster up or down. Use the add-brokers mode after scaling up a Kafka cluster to move partition replicas from existing brokers to the newly added brokers. Use the remove-brokers mode before scaling down a Kafka cluster to move partition replicas off the brokers that are going to be removed. Note When scaling down brokers, you cannot specify which specific pod to remove from the cluster. Instead, the broker removal process starts from the highest numbered pod. 12.2. Adding nodes to a ZooKeeper cluster Use dynamic reconfiguration to add nodes to a ZooKeeper cluster without stopping the entire cluster. Dynamic Reconfiguration allows ZooKeeper to change the membership of a set of nodes that make up the ZooKeeper cluster without interruption. Prerequisites Dynamic reconfiguration is enabled in the ZooKeeper configuration file ( reconfigEnabled=true ). ZooKeeper authentication is enabled and you can access the new server using the authentication mechanism.
Procedure Perform the following steps for each ZooKeeper server you are adding, one at a time: Add a server to the ZooKeeper cluster as described in Section 4.1, "Running a multi-node ZooKeeper cluster" and then start ZooKeeper. Note the IP address and configured access ports of the new server. Start a zookeeper-shell session for the server. Run the following command from a machine that has access to the cluster (this might be one of the ZooKeeper nodes or your local machine, if it has access). su - kafka /opt/kafka/bin/zookeeper-shell.sh <ip-address>:<zk-port> In the shell session, with the ZooKeeper node running, enter the following line to add the new server to the quorum as a voting member: reconfig -add server.<positive-id> = <address1>:<port1>:<port2>[:role];[<client-port-address>:]<client-port> For example: reconfig -add server.4=172.17.0.4:2888:3888:participant;172.17.0.4:2181 Where <positive-id> is the new server ID 4 . For the two ports, <port1> 2888 is for communication between ZooKeeper servers, and <port2> 3888 is for leader election. The new configuration propagates to the other servers in the ZooKeeper cluster; the new server is now a full member of the quorum. 12.3. Removing nodes from a ZooKeeper cluster Use dynamic reconfiguration to remove nodes from a ZooKeeper cluster without stopping the entire cluster. Dynamic Reconfiguration allows ZooKeeper to change the membership of a set of nodes that make up the ZooKeeper cluster without interruption. Prerequisites Dynamic reconfiguration is enabled in the ZooKeeper configuration file ( reconfigEnabled=true ). ZooKeeper authentication is enabled and you can access the new server using the authentication mechanism. Procedure Perform the following steps, one at a time, for each ZooKeeper server you remove: Log in to the zookeeper-shell on one of the servers that will be retained after the scale down (for example, server 1). Note Access the server using the authentication mechanism configured for the ZooKeeper cluster. Remove a server, for example server 5. Deactivate the server that you removed. | [
"default.replication.factor = 3 min.insync.replicas = 2",
"su - kafka /opt/kafka/bin/zookeeper-shell.sh <ip-address>:<zk-port>",
"reconfig -add server.<positive-id> = <address1>:<port1>:<port2>[:role];[<client-port-address>:]<client-port>",
"reconfig -add server.4=172.17.0.4:2888:3888:participant;172.17.0.4:2181",
"reconfig -remove 5"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_with_zookeeper/assembly-scaling-clusters-str |
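After a reconfig command completes, it is worth confirming that the ensemble membership really changed before moving on to the next server. The following is a minimal sketch of such a check rather than part of the procedure above; the address 172.17.0.1:2181 is an illustrative assumption, and the config CLI command assumes a ZooKeeper 3.5 or later client such as the one shipped with the Kafka distribution:

su - kafka
/opt/kafka/bin/zookeeper-shell.sh 172.17.0.1:2181
config

The config command prints the current dynamic configuration of the ensemble, so the server entries you added should be listed and any entry you removed should be gone.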
Chapter 15. Deleting the Bootstrap User | Chapter 15. Deleting the Bootstrap User Important Before you delete the bootstrap user, create a real PKI administrative user as described in Chapter 14, Creating a role user . To delete the bootstrap user, follow the procedure described in 11.3.2.4 Deleting a Certificate System User in the Administration Guide (Common Criteria Edition) . 15.1. Disabling multi-roles support By default, users can belong to more than one subsystem group at once, allowing the user to act as more than one role. For example, John Smith could belong to both an agent and an administrator group. However, for highly secure environments, the subsystem roles should be restricted so that a user can only belong to one role. This can be done by disabling the multirole attribute in the instance's configuration. For all subsystems: Stop the server: OR if using the Nuxwdog watchdog: Open the CS.cfg file: Change the multiroles.enable parameter value from true to false . Add or edit the list of default roles in Certificate System that are affected by the multi-roles setting. If multi-roles is disabled and a user belongs to one of the roles listed in the multiroles.false.groupEnforceList parameter, then the user cannot be added to any group for any of the other roles in the list. Restart the server: OR if using the Nuxwdog watchdog: | [
"systemctl stop pki-tomcatd@instance_name.service",
"systemctl stop pki-tomcatd-nuxwdog@instance_name.service",
"vim /var/lib/pki/instance_name/ca/conf/CS.cfg",
"multiroles.false.groupEnforceList=Administrators,Auditors,Trusted Managers,Certificate Manager Agents,Registration Manager Agents,Key Recovery Authority Agents,Online Certificate Status Manager Agents,Token Key Service Manager Agents,Enterprise CA Administrators,Enterprise KRA Adminstrators,Enterprise OCSP Administrators,Enterprise TKS Administrators,Enterprise TPS Administrators,Security Domain Administrators,Subsystem Group",
"systemctl start pki-tomcatd@instance_name.service",
"systemctl start pki-tomcatd-nuxwdog@instance_name.service"
] | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/deleting_the_bootstrap_user |
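Before restarting the server, it can help to confirm that the edit took effect. The following check is a minimal sketch and not part of the documented procedure; the instance name pki-tomcat and the ca subsystem are illustrative assumptions, so substitute your own instance_name and subsystem directory:

grep '^multiroles' /var/lib/pki/pki-tomcat/ca/conf/CS.cfg

The output should show multiroles.enable=false together with the multiroles.false.groupEnforceList value you configured.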
Chapter 9. Premigration checklists | Chapter 9. Premigration checklists Before you migrate your application workloads with the Migration Toolkit for Containers (MTC), review the following checklists. 9.1. Resources ❏ If your application uses an internal service network or an external route for communicating with services, the relevant route exists. ❏ If your application uses cluster-level resources, you have re-created them on the target cluster. ❏ You have excluded persistent volumes (PVs), image streams, and other resources that you do not want to migrate. ❏ PV data has been backed up in case an application displays unexpected behavior after migration and corrupts the data. 9.2. Source cluster ❏ The cluster meets the minimum hardware requirements . ❏ You have installed the correct legacy Migration Toolkit for Containers Operator version: operator-3.7.yml on OpenShift Container Platform version 3.7. operator.yml on OpenShift Container Platform versions 3.9 to 4.5. ❏ All nodes have an active OpenShift Container Platform subscription. ❏ You have performed all the run-once tasks . ❏ You have performed all the environment health checks . ❏ You have checked for PVs with abnormal configurations stuck in a Terminating state by running the following command: USD oc get pv ❏ You have checked for pods whose status is other than Running or Completed by running the following command: USD oc get pods --all-namespaces | egrep -v 'Running | Completed' ❏ You have checked for pods with a high restart count by running the following command: USD oc get pods --all-namespaces --field-selector=status.phase=Running \ -o json | jq '.items[]|select(any( .status.containerStatuses[]; \ .restartCount > 3))|.metadata.name' Even if the pods are in a Running state, a high restart count might indicate underlying problems. ❏ You have removed old builds, deployments, and images from each namespace to be migrated by pruning . ❏ The internal registry uses a supported storage type . ❏ Direct image migration only: The internal registry is exposed to external traffic. ❏ You can read and write images to the registry. ❏ The etcd cluster is healthy. ❏ The average API server response time on the source cluster is less than 50 ms. ❏ The cluster certificates are valid for the duration of the migration process. ❏ You have checked for pending certificate-signing requests by running the following command: USD oc get csr -A | grep pending -i ❏ The identity provider is working. 9.3. Target cluster ❏ You have installed Migration Toolkit for Containers Operator version 1.5.1. ❏ All MTC prerequisites are met. ❏ The cluster meets the minimum hardware requirements for the specific platform and installation method, for example, on bare metal . ❏ The cluster has storage classes defined for the storage types used by the source cluster, for example, block volume, file system, or object storage. Note NFS does not require a defined storage class. ❏ The cluster has the correct network configuration and permissions to access external services, for example, databases, source code repositories, container image registries, and CI/CD tools. ❏ External applications and services that use services provided by the cluster have the correct network configuration and permissions to access the cluster. ❏ Internal container image dependencies are met.
If an application uses an internal image in the openshift namespace that is not supported by OpenShift Container Platform 4.7, you can manually update the OpenShift Container Platform 3 image stream tag with podman . ❏ The target cluster and the replication repository have sufficient storage space. ❏ The identity provider is working. ❏ DNS records for your application exist on the target cluster. ❏ Set the value of the annotation.openshift.io/host.generated parameter to true for each OpenShift Container Platform route to update its host name for the target cluster. Otherwise, the migrated routes retain the source cluster host name. ❏ Certificates that your application uses exist on the target cluster. ❏ You have configured appropriate firewall rules on the target cluster. ❏ You have correctly configured load balancing on the target cluster. ❏ If you migrate objects to an existing namespace on the target cluster that has the same name as the namespace being migrated from the source, the target namespace contains no objects of the same name and type as the objects being migrated. Note Do not create namespaces for your application on the target cluster before migration because this might cause quotas to change. 9.4. Performance ❏ The migration network has a minimum throughput of 10 Gbps. ❏ The clusters have sufficient resources for migration. Note Clusters require additional memory, CPUs, and storage in order to run a migration on top of normal workloads. Actual resource requirements depend on the number of Kubernetes resources being migrated in a single migration plan. You must test migrations in a non-production environment in order to estimate the resource requirements. ❏ The memory and CPU usage of the nodes are healthy. ❏ The etcd disk performance of the clusters has been checked with fio . | [
"oc get pv",
"oc get pods --all-namespaces | egrep -v 'Running | Completed'",
"oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'",
"oc get csr -A | grep pending -i"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migrating_from_version_3_to_4/premigration-checks |
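Two additional read-only checks that complement the lists above are shown here. This is a hedged sketch rather than part of the official checklist, and it assumes the oc client is already logged in to the cluster being verified:

oc get nodes -o wide     # confirm that every node reports a Ready status
oc adm top nodes         # spot-check the memory and CPU usage of the nodes

Both commands only read cluster state, so they can be run safely on the source and the target cluster.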
Integrating | Integrating Red Hat Advanced Cluster Security for Kubernetes 4.6 Integrating Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/integrating/index |
function::user_string_n | function::user_string_n Name function::user_string_n - Retrieves string of given length from user space Synopsis Arguments addr the user space address to retrieve the string from n the maximum length of the string (if not null terminated) Description Returns the C string, up to the given maximum length, read from the given user space address. Reports an error in the rare cases when userspace data is not accessible at the given address.
"user_string_n:string(addr:long,n:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-user-string-n |
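A short usage sketch is given below; it is not part of the reference entry above. The context variables buf_uaddr, count, and fd are assumed to be provided by the syscall.write tapset alias, and the 32-byte limit is an arbitrary choice:

probe syscall.write {
  if (count > 0)
    printf("%s wrote to fd %d: %s\n", execname(), fd, user_string_n(buf_uaddr, 32))
}

Run it with something like stap -v write-trace.stp -c 'echo hello', where write-trace.stp is a hypothetical file name holding the probe.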
Chapter 25. Automating group membership using IdM CLI | Chapter 25. Automating group membership using IdM CLI Using automatic group membership allows you to assign users and hosts to groups automatically based on their attributes. For example, you can: Divide employees' user entries into groups based on the employees' manager, location, or any other attribute. Divide hosts based on their class, location, or any other attribute. Add all users or all hosts to a single global group. This chapter covers the following topics: Benefits of automatic group membership Automember rules Adding an automember rule using IdM CLI Adding a condition to an automember rule using IdM CLI Viewing existing automember rules using IdM CLI Deleting an automember rule using IdM CLI Removing a condition from an automember rule using IdM CLI Applying automember rules to existing entries using IdM CLI Configuring a default automember group using IdM CLI 25.1. Benefits of automatic group membership Using automatic membership for users allows you to: Reduce the overhead of manually managing group memberships You no longer have to assign every user and host to groups manually. Improve consistency in user and host management Users and hosts are assigned to groups based on strictly defined and automatically evaluated criteria. Simplify the management of group-based settings Various settings are defined for groups and then applied to individual group members, for example sudo rules, automount, or access control. Adding users and hosts to groups automatically makes managing these settings easier. 25.2. Automember rules When configuring automatic group membership, the administrator defines automember rules. An automember rule applies to a specific user or host target group. It cannot apply to more than one group at a time. After creating a rule, the administrator adds conditions to it. These specify which users or hosts get included or excluded from the target group: Inclusive conditions When a user or host entry meets an inclusive condition, it will be included in the target group. Exclusive conditions When a user or host entry meets an exclusive condition, it will not be included in the target group. The conditions are specified as regular expressions in the Perl-compatible regular expressions (PCRE) format. For more information about PCRE, see the pcresyntax(3) man page on your system. Note IdM evaluates exclusive conditions before inclusive conditions. In case of a conflict, exclusive conditions take precedence over inclusive conditions. An automember rule applies to every entry created in the future. These entries will be automatically added to the specified target group. If an entry meets the conditions specified in multiple automember rules, it will be added to all the corresponding groups. Existing entries are not affected by the new rule. If you want to change existing entries, see Applying automember rules to existing entries using IdM CLI . 25.3. Adding an automember rule using IdM CLI Follow this procedure to add an automember rule using the IdM CLI. For information about automember rules, see Automember rules . After adding an automember rule, you can add conditions to it using the procedure described in Adding a condition to an automember rule . Note Existing entries are not affected by the new rule. If you want to change existing entries, see Applying automember rules to existing entries using IdM CLI . Prerequisites You must be logged in as the administrator. 
For details, see Using kinit to log in to IdM manually . The target group of the new rule must exist in IdM. Procedure Enter the ipa automember-add command to add an automember rule. When prompted, specify: Automember rule . This is the target group name. Grouping Type . This specifies whether the rule targets a user group or a host group. To target a user group, enter group . To target a host group, enter hostgroup . For example, to add an automember rule for a user group named user_group : Verification You can display existing automember rules and conditions in IdM using Viewing existing automember rules using IdM CLI . 25.4. Adding a condition to an automember rule using IdM CLI After configuring automember rules, you can then add a condition to that automember rule using the IdM CLI. For information about automember rules, see Automember rules . Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . The target rule must exist in IdM. For details, see Adding an automember rule using IdM CLI . Procedure Define one or more inclusive or exclusive conditions using the ipa automember-add-condition command. When prompted, specify: Automember rule . This is the target rule name. See Automember rules for details. Attribute Key . This specifies the entry attribute to which the filter will apply. For example, uid for users. Grouping Type . This specifies whether the rule targets a user group or a host group. To target a user group, enter group . To target a host group, enter hostgroup . Inclusive regex and Exclusive regex . These specify one or more conditions as regular expressions. If you only want to specify one condition, press Enter when prompted for the other. For example, the following condition targets all users with any value (.*) in their user login attribute ( uid ). As another example, you can use an automembership rule to target all Windows users synchronized from Active Directory (AD). To achieve this, create a condition that targets all users with ntUser in their objectClass attribute, which is shared by all AD users: Verification You can display existing automember rules and conditions in IdM using Viewing existing automember rules using IdM CLI . 25.5. Viewing existing automember rules using IdM CLI Follow this procedure to view existing automember rules using the IdM CLI. Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure Enter the ipa automember-find command. When prompted, specify the Grouping type : To target a user group, enter group . To target a host group, enter hostgroup . For example: 25.6. Deleting an automember rule using IdM CLI Follow this procedure to delete an automember rule using the IdM CLI. Deleting an automember rule also deletes all conditions associated with the rule. To remove only specific conditions from a rule, see Removing a condition from an automember rule using IdM CLI . Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure Enter the ipa automember-del command. When prompted, specify: Automember rule . This is the rule you want to delete. Grouping Type . This specifies whether the rule you want to delete is for a user group or a host group. Enter group or hostgroup . 25.7. Removing a condition from an automember rule using IdM CLI Follow this procedure to remove a specific condition from an automember rule.
Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure Enter the ipa automember-remove-condition command. When prompted, specify: Automember rule . This is the name of the rule from which you want to remove a condition. Attribute Key . This is the target entry attribute. For example, uid for users. Grouping Type . This specifies whether the condition you want to delete is for a user group or a host group. Enter group or hostgroup . Inclusive regex and Exclusive regex . These specify the conditions you want to remove. If you only want to specify one condition, press Enter when prompted for the other. For example: 25.8. Applying automember rules to existing entries using IdM CLI Automember rules apply automatically to user and host entries created after the rules were added. They are not applied retroactively to entries that existed before the rules were added. To apply automember rules to previously added entries, you have to manually rebuild automatic membership. Rebuilding automatic membership re-evaluates all existing automember rules and applies them either to all user or host entries, or to specific entries. Note Rebuilding automatic membership does not remove user or host entries from groups, even if the entries no longer match the group's inclusive conditions. To remove them manually, see Removing a member from a user group using IdM CLI or Removing IdM host group members using the CLI . Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . Procedure To rebuild automatic membership, enter the ipa automember-rebuild command. Use the following options to specify the entries to target: To rebuild automatic membership for all users, use the --type=group option: To rebuild automatic membership for all hosts, use the --type=hostgroup option. To rebuild automatic membership for a specified user or users, use the --users= target_user option: To rebuild automatic membership for a specified host or hosts, use the --hosts= client.idm.example.com option. 25.9. Configuring a default automember group using IdM CLI When you configure a default automember group, new user or host entries that do not match any automember rule are automatically added to this default group. Prerequisites You must be logged in as the administrator. For details, see Using kinit to log in to IdM manually . The target group you want to set as default exists in IdM. Procedure Enter the ipa automember-default-group-set command to configure a default automember group. When prompted, specify: Default (fallback) Group , which specifies the target group name. Grouping Type , which specifies whether the target is a user group or a host group. To target a user group, enter group . To target a host group, enter hostgroup . For example: Note To remove the current default automember group, enter the ipa automember-default-group-remove command. Verification To verify that the group is set correctly, enter the ipa automember-default-group-show command. The command displays the current default automember group. For example: | [
"ipa automember-add Automember Rule: user_group Grouping Type: group -------------------------------- Added automember rule \"user_group\" -------------------------------- Automember Rule: user_group",
"ipa automember-add-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ---------------------------------- Added condition(s) to \"user_group\" ---------------------------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-add-condition Automember Rule: ad_users Attribute Key: objectclass Grouping Type: group [Inclusive Regex]: ntUser [Exclusive Regex]: ------------------------------------- Added condition(s) to \"ad_users\" ------------------------------------- Automember Rule: ad_users Inclusive Regex: objectclass=ntUser ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-find Grouping Type: group --------------- 1 rules matched --------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of entries returned 1 ----------------------------",
"ipa automember-remove-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ----------------------------------- Removed condition(s) from \"user_group\" ----------------------------------- Automember Rule: user_group ------------------------------ Number of conditions removed 1 ------------------------------",
"ipa automember-rebuild --type=group -------------------------------------------------------- Automember rebuild task finished. Processed (9) entries. --------------------------------------------------------",
"ipa automember-rebuild --users=target_user1 --users=target_user2 -------------------------------------------------------- Automember rebuild task finished. Processed (2) entries. --------------------------------------------------------",
"ipa automember-default-group-set Default (fallback) Group: default_user_group Grouping Type: group --------------------------------------------------- Set default (fallback) group for automember \"default_user_group\" --------------------------------------------------- Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com",
"ipa automember-default-group-show Grouping Type: group Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_idm_users_groups_hosts_and_access_control_rules/automating-group-membership-using-idm-cli_managing-users-groups-hosts |
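The interactive prompts used throughout this chapter can also be supplied as command-line options, which is convenient for scripting. The following sketch is not taken from the examples above; the option names are assumptions based on a current ipa client, so confirm them with ipa automember-add-condition --help before relying on them:

ipa automember-add --type=group user_group
ipa automember-add-condition --type=group --key=uid --inclusive-regex='.*' user_group
ipa automember-rebuild --type=group

The group name user_group matches the example rule used earlier in this chapter.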
5.17. biosdevname | 5.17. biosdevname 5.17.1. RHBA-2013:0138 - biosdevname bug fix update Updated biosdevname packages that fix one bug are now available for Red Hat Enterprise Linux 6. The biosdevname packages contain an optional convention for naming network interfaces; it assigns names to network interfaces based on their physical location. Biosdevname is disabled by default, except for a limited set of Dell PowerEdge, C Series, and Precision Workstation systems. Bug Fix BZ# 865446 Previously, biosdevname did not handle PCI cards with multiple ports properly. Consequently, only the network interface of the first port of these cards was renamed according to the biosdevname naming scheme. This bug has been fixed and network interfaces of all ports of these cards are now renamed as expected. Users of biosdevname are advised to upgrade to these update packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/biosdevname |
Chapter 8. Hardware Enablement | Chapter 8. Hardware Enablement OSA-Express5s cards support in qethqoat Support for OSA-Express5s cards was added to the qethqoat tool, part of the s390utils package, in Red Hat Enterprise Linux 7.1 as a Technology Preview. This enhancement update provides full support of the extended serviceability of network and card setups for OSA-Express5s cards. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.2_release_notes/hardware_enablement |
6.4. Creating and Modifying a Cluster | 6.4. Creating and Modifying a Cluster This section describes how to create, modify, and delete a skeleton cluster configuration with the ccs command without fencing, failover domains, and HA services. Subsequent sections describe how to configure those parts of the configuration. To create a skeleton cluster configuration file, first create and name the cluster and then add the nodes to the cluster, as in the following procedure: Create a cluster configuration file on one of the nodes in the cluster by executing the ccs command using the -h parameter to specify the node on which to create the file and the createcluster option to specify a name for the cluster: For example, the following command creates a configuration file on node-01.example.com named mycluster : The cluster name cannot exceed 15 characters. If a cluster.conf file already exists on the host that you specify, use the -i option when executing this command to replace that existing file. If you want to create a cluster configuration file on your local system you can specify the -f option instead of the -h option. For information on creating the file locally, see Section 6.1.1, "Creating the Cluster Configuration File on a Local System" . To configure the nodes that the cluster contains, execute the following command for each node in the cluster. A node name can be up to 255 bytes in length. For example, the following three commands add the nodes node-01.example.com , node-02.example.com , and node-03.example.com to the configuration file on node-01.example.com : To view a list of the nodes that have been configured for a cluster, execute the following command: Example 6.1, " cluster.conf File After Adding Three Nodes" shows a cluster.conf configuration file after you have created the cluster mycluster that contains the nodes node-01.example.com , node-02.example.com , and node-03.example.com . Example 6.1. cluster.conf File After Adding Three Nodes Note When you add a node to a cluster that uses UDPU transport, you must restart all nodes in the cluster for the change to take effect. When you add a node to the cluster, you can specify the number of votes the node contributes to determine whether there is a quorum. To set the number of votes for a cluster node, use the following command: When you add a node, the ccs assigns the node a unique integer that is used as the node identifier. If you want to specify the node identifier manually when creating a node, use the following command: To remove a node from a cluster, execute the following command: When you have finished configuring all of the components of your cluster, you will need to sync the cluster configuration file to all of the nodes, as described in Section 6.15, "Propagating the Configuration File to the Cluster Nodes" . | [
"ccs -h host --createcluster clustername",
"ccs -h node-01.example.com --createcluster mycluster",
"ccs -h host --addnode node",
"ccs -h node-01.example.com --addnode node-01.example.com ccs -h node-01.example.com --addnode node-02.example.com ccs -h node-01.example.com --addnode node-03.example.com",
"ccs -h host --lsnodes",
"<cluster name=\"mycluster\" config_version=\"2\"> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> </fence> </clusternode> </clusternodes> <fencedevices> </fencedevices> <rm> </rm> </cluster>",
"ccs -h host --addnode host --votes votes",
"ccs -h host --addnode host --nodeid nodeid",
"ccs -h host --rmnode node"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-creating-cluster-ccs-CA |
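The final propagation step referenced at the end of this section can be done with the same ccs command. The following one-liner is a hedged sketch rather than a quote from Section 6.15; node-01.example.com is the same example host used above:

ccs -h node-01.example.com --sync --activate

The --sync --activate options send the current cluster configuration file to all nodes defined in it and activate the configuration, so run this only after fencing, failover domains, and HA services have been configured.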
Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster | Chapter 4. Scaling storage of bare metal OpenShift Data Foundation cluster To scale the storage capacity of your configured Red Hat OpenShift Data Foundation worker nodes on your bare metal cluster, you can increase the capacity by adding three disks at a time. Three disks are needed since OpenShift Data Foundation uses a replica count of 3 to maintain the high availability. So the amount of storage consumed is three times the usable space. Note Usable space might vary when encryption is enabled or replica 2 pools are being used. 4.1. Scaling up a cluster created using local storage devices To scale up an OpenShift Data Foundation cluster which was created using local storage devices, you need to add a new disk to the storage node. The new disks size must be of the same size as the disks used during the deployment because OpenShift Data Foundation does not support heterogeneous disks/OSDs. For deployments having three failure domains, you can scale up the storage by adding disks in the multiples of three, with the same number of disks coming from nodes in each of the failure domains. For example, if we scale by adding six disks, two disks are taken from nodes in each of the three failure domains. If the number of disks is not in multiples of three, it will only consume the disk to the maximum in the multiple of three while the remaining disks remain unused. For deployments having less than three failure domains, there is a flexibility to add any number of disks. Make sure to verify that flexible scaling is enabled. For information, refer to the Knowledgebase article Verify if flexible scaling is enabled . Note Flexible scaling features get enabled at the time of deployment and cannot be enabled or disabled later on. Prerequisites Administrative privilege to the OpenShift Container Platform Console. A running OpenShift Data Foundation Storage Cluster. Make sure that the disks to be used for scaling are attached to the storage node Make sure that LocalVolumeDiscovery and LocalVolumeSet objects are created. Procedure To add capacity, you can either use a storage class that you provisioned during the deployment or any other storage class that matches the filter. In the OpenShift Web Console, click Operators Installed Operators . Click OpenShift Data Foundation Operator. Click the Storage Systems tab. Click the Action menu (...) to the visible list to extend the options menu. Select Add Capacity from the options menu. Select the Storage Class for which you added disks or the new storage class depending on your requirement. Available Capacity displayed is based on the local disks available in storage class. Click Add . To check the status, navigate to Storage Data Foundation and verify that the Storage System in the Status card has a green tick. Verification steps Verify the Raw Capacity card. In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Block and File tab, check the Raw Capacity card. Note that the capacity increases based on your selections. Note The raw capacity does not take replication into account and shows the full capacity. Verify that the new OSDs and their corresponding new Persistent Volume Claims (PVCs) are created. To view the state of the newly created OSDs: Click Workloads Pods from the OpenShift Web Console. 
Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. To view the state of the PVCs: Click Storage Persistent Volume Claims from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Optional: If cluster-wide encryption is enabled on the cluster, verify that the new OSD devices are encrypted. Identify the nodes where the new OSD pods are running. <OSD-pod-name> Is the name of the OSD pod. For example: Example output: For each of the nodes identified in the previous step, do the following: Create a debug pod and open a chroot environment for the selected host(s). <node-name> Is the name of the node. Check for the crypt keyword beside the ocs-deviceset names, as illustrated in the first sketch at the end of this chapter. Important Cluster reduction is supported only with the Red Hat Support Team's assistance. 4.2. Scaling out storage capacity on a bare metal cluster OpenShift Data Foundation is highly scalable. It can be scaled out by adding new nodes with the required storage and enough hardware resources in terms of CPU and RAM. There is no limit on the number of nodes that can be added. However, from a technical support perspective, 2000 nodes is the limit for OpenShift Data Foundation. Scaling out storage capacity can be broken down into two steps: adding a new node and scaling up the storage capacity. Note OpenShift Data Foundation does not support heterogeneous OSD/Disk sizes. 4.2.1. Adding a node You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or there are not enough resources to add new OSDs on the existing nodes. It is always recommended to add nodes in multiples of three, each of them in a different failure domain. While it is recommended to add nodes in multiples of three, you still have the flexibility to add one node at a time in a flexible scaling deployment. Refer to the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the OpenShift Data Foundation deployment. 4.2.1.1. Adding a node to an installer-provisioned infrastructure Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Navigate to Compute Machine Sets . On the machine set where you want to add nodes, select Edit Machine Count . Add the number of nodes, and click Save . Click Compute Nodes and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node. For the new node, click Action menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. In case of bare metal installer-provisioned infrastructure deployment, you must expand the cluster first. For instructions, see Expanding the cluster .
Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , and confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* 4.2.1.2. Adding a node using a local storage device You can add nodes to increase the storage capacity when existing worker nodes are already running at their maximum supported OSDs or when there are not enough resources to add new OSDs on the existing nodes. Add nodes in multiples of 3, each of them in a different failure domain. Though it is recommended to add nodes in multiples of 3, you have the flexibility to add one node at a time in a flexible scaling deployment. See the Knowledgebase article Verify if flexible scaling is enabled . Note OpenShift Data Foundation does not support heterogeneous disk sizes and types. The new nodes to be added should have disks of the same type and size as those used during the initial OpenShift Data Foundation deployment. Prerequisites You have administrative privilege to the OpenShift Container Platform Console. You have a running OpenShift Data Foundation Storage Cluster. Procedure Depending on the type of infrastructure, perform the following steps: Get a new machine with the required infrastructure. See Platform requirements . Create a new OpenShift Container Platform worker node using the new machine. Check for certificate signing requests (CSRs) that are in Pending state. Approve all the required CSRs for the new node. <Certificate_Name> Is the name of the CSR. Click Compute Nodes , and confirm that the new node is in Ready state. Apply the OpenShift Data Foundation label to the new node using any one of the following: From User interface For the new node, click Action Menu (...) Edit Labels . Add cluster.ocs.openshift.io/openshift-storage , and click Save . From Command line interface Apply the OpenShift Data Foundation label to the new node. <new_node_name> Is the name of the new node. Click Operators Installed Operators from the OpenShift Web Console. From the Project drop-down list, make sure to select the project where the Local Storage Operator is installed. Click Local Storage . Click the Local Volume Discovery tab. Beside the LocalVolumeDiscovery , click Action menu (...) Edit Local Volume Discovery . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Click the Local Volume Sets tab. Beside the LocalVolumeSet , click Action menu (...) Edit Local Volume Set . In the YAML, add the hostname of the new node in the values field under the node selector. Click Save . Note It is recommended to add 3 nodes, one each in different zones. You must add 3 nodes and perform this procedure for all of them. Verification steps Execute the following command in the terminal and verify that the new node is present in the output: On the OpenShift web console, click Workloads Pods , and confirm that at least the following pods on the new node are in Running state: csi-cephfsplugin-* csi-rbdplugin-* A consolidated sketch of these command-line steps appears at the end of this chapter. 4.2.2. Scaling up storage capacity To scale up storage capacity, see Scaling up storage by adding capacity . | [
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/ <OSD-pod-name>",
"oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm",
"NODE compute-1",
"oc debug node/ <node-name>",
"chroot /host",
"lsblk",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1",
"oc get csr",
"oc adm certificate approve <Certificate_Name>",
"oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=\"\"",
"oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= |cut -d' ' -f1"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/scaling_storage/scaling_storage_of_bare_metal_openshift_data_foundation_cluster |
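The following is a minimal command-line sketch of the encrypted-OSD check described in section 4.1 above, assembled only from the commands listed for this chapter; the OSD pod name and the node name compute-1 come from the example output and will differ in your cluster.

# Find the node that hosts a newly created OSD pod.
oc get -n openshift-storage -o=custom-columns=NODE:.spec.nodeName pod/rook-ceph-osd-0-544db49d7f-qrgqm

# Open a debug shell on the reported node and inspect its block devices.
oc debug node/compute-1
chroot /host
lsblk
# In the lsblk output, look for entries whose TYPE column shows crypt beside the
# ocs-deviceset names; their presence confirms the new OSD devices are encrypted.

Repeat the check for every node returned by the first command.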
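For the node-addition procedures in section 4.2.1, a hedged sketch of the command-line path is shown below. <Certificate_Name> and <new_node_name> are placeholders, and the grep Pending filter and the final oc get pods check are assumed convenience steps that are not part of the documented procedure.

# Approve any pending CSRs raised by the new worker node.
oc get csr | grep Pending
oc adm certificate approve <Certificate_Name>

# Once the node reports Ready, apply the OpenShift Data Foundation label.
oc get nodes
oc label node <new_node_name> cluster.ocs.openshift.io/openshift-storage=""

# Verify the label and confirm that the CSI pods are running on the new node.
oc get nodes --show-labels | grep cluster.ocs.openshift.io/openshift-storage= | cut -d' ' -f1
oc get pods -n openshift-storage -o wide | grep <new_node_name>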
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/release_notes_for_the_red_hat_build_of_cryostat_3.0/making-open-source-more-inclusive |
5.200. mod_wsgi | 5.200.1. RHBA-2012:1358 - mod_wsgi bug fix and enhancement update Updated mod_wsgi packages that fix one bug and add one enhancement are now available for Red Hat Enterprise Linux 6. The mod_wsgi packages provide an Apache httpd module, which implements a WSGI-compliant interface for hosting Python-based web applications. Bug Fix BZ# 670577 Prior to this update, a misleading warning message from the mod_wsgi utilities was logged during startup of the Apache httpd daemon. This update removes this message from the mod_wsgi module. Enhancement BZ# 719409 With this update, access to the SSL connection state is now available in WSGI scripts using the methods "mod_ssl.is_https" and "mod_ssl.var_lookup". All users of mod_wsgi are advised to upgrade to these updated packages, which fix this bug and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/mod_wsgi
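A brief, hedged illustration of applying this advisory on a Red Hat Enterprise Linux 6 host follows; the yum, service, and httpd invocations are assumed administrator steps and are not part of the advisory text itself.

# Install the updated packages and restart Apache httpd.
yum update mod_wsgi
service httpd restart

# Confirm that the WSGI module is loaded; wsgi_module is expected in the list.
httpd -M 2>/dev/null | grep -i wsgi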
Index | Index A Active Directory schema differences between Identity Management, User Schema Differences between Identity Management and Active Directory attributes setting multi-valued, From the Command Line B bind DNS and LDAP, About DNS in IdM C certificates automatically renewed, Renewing CA Certificates Issued by External CAs , Renewing CA Certificates Issued by the IdM CA CA expiration, Renewing CA Certificates Issued by External CAs , Renewing CA Certificates Issued by the IdM CA updating CA, Renewing CA Certificates Issued by External CAs , Renewing CA Certificates Issued by the IdM CA chkconfig, Starting and Stopping the IdM Domain client troubleshooting installation, Client Installations uninstalling, Uninstalling an IdM Client D DHCP, Adding Host Entries from the Command Line DNS adding zone records, Adding Records to DNS Zones adding zones, Adding Forward DNS Zones bind-dyndb-ldap and Directory Server, About DNS in IdM disabling zones, Enabling and Disabling Zones dynamic updates, Enabling Dynamic DNS Updates hosts with DHCP, Adding Host Entries from the Command Line PTR synchronization requirements, Synchronizing Forward and Reverse Zone Entries DNS zone records, Adding Records to DNS Zones deleting, Deleting Records from DNS Zones format for adding, About the Commands to Add DNS Records IPv4 example, Examples of Adding DNS Resource Records IPv6 example, Examples of Adding DNS Resource Records PTR example, Examples of Adding DNS Resource Records SRV example, Examples of Adding DNS Resource Records types of records, Adding Records to DNS Zones G glue entries, Solving Orphan Entry Conflicts H hosts creating with DHCP, Adding Host Entries from the Command Line disabling, Disabling and Re-enabling Host Entries I installing clients disabling OpenSSH, About ipa-client-install and OpenSSH K Kerberos, About Kerberos separate credentials cache, Caching User Kerberos Tickets SSSD password cache, Caching Kerberos Passwords ticket policies, Setting Kerberos Ticket Policies global, Setting Global Ticket Policies user-level, Setting User-Level Ticket Policies L log rotation policies, IdM Domain Services and Log Rotation logging in SELinux problems, SELinux Login Problems separate credentials cache, Caching User Kerberos Tickets logrotate, IdM Domain Services and Log Rotation N naming conflicts in replication, Solving Naming Conflicts P password expiration, Managing Password Expiration Limits password policies expiration, Managing Password Expiration Limits policies log rotation, IdM Domain Services and Log Rotation port forwarding for the UI, Using the UI with Proxy Servers proxy servers for the UI, Using the UI with Proxy Servers PTR synchronization requirements, Synchronizing Forward and Reverse Zone Entries R reboot, Starting and Stopping the IdM Domain replicas number in replication, About IdM Servers and Replicas replication size limits, About IdM Servers and Replicas S schema differences between Identity Management and Active Directory, User Schema Differences between Identity Management and Active Directory cn, Values for cn Attributes initials, Constraints on the initials Attribute sn, Requiring the surname (sn) Attribute street and streetAddress, Values for street and streetAddress SELinux login problems, SELinux Login Problems servers number in replication, About IdM Servers and Replicas services disabling, Disabling and Re-enabling Service Entries SSH disabling at client install, About ipa-client-install and OpenSSH SSSD and Kerberos passwords, Caching Kerberos Passwords 
disabling cache, Caching Kerberos Passwords starting with chkconfig, Starting and Stopping the IdM Domain T ticket policies, Setting Kerberos Ticket Policies troubleshooting client installation, Client Installations Kerberos, unknown server error, The client can't resolve reverse hostnames when using an external DNS. resolving hostnames on client, The client can't resolve reverse hostnames when using an external DNS. SELinux, SELinux Login Problems U uninstalling clients, Uninstalling an IdM Client users multi-valued attributes, From the Command Line password expiration, Managing Password Expiration Limits separate credentials cache, Caching User Kerberos Tickets W web UI port forwarding, Using the UI with Proxy Servers proxy servers, Using the UI with Proxy Servers Z zone records, Adding Records to DNS Zones deleting, Deleting Records from DNS Zones format for adding, About the Commands to Add DNS Records IPv4 example, Examples of Adding DNS Resource Records IPv6 example, Examples of Adding DNS Resource Records PTR example, Examples of Adding DNS Resource Records SRV example, Examples of Adding DNS Resource Records types, Adding Records to DNS Zones | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/ix01 |
Using Red Hat Discovery | Using Red Hat Discovery Subscription Central 1-latest Understanding Red Hat Discovery Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/using_red_hat_discovery/index |
function::ulong_arg | function::ulong_arg Name function::ulong_arg - Return function argument as unsigned long Synopsis Arguments n index of argument to return Description Return the value of argument n as an unsigned long. On architectures where a long is 32 bits, the value is zero-extended to 64 bits. | [
"ulong_arg:long(n:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ulong-arg |
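A short, hypothetical usage sketch follows; the probe point, the argument index, and the format string are illustrative assumptions rather than part of the tapset reference, and on some architectures dwarfless probes may also need an asmlinkage() call before reading arguments.

# Print the third argument (the byte count) of vfs_write as an unsigned long,
# using a register-based (dwarfless) kprobe.
stap -e 'probe kprobe.function("vfs_write") { printf("count=%u\n", ulong_arg(3)) }'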
Appendix C. Using AMQ Broker with the examples | Appendix C. Using AMQ Broker with the examples The AMQ Spring Boot Starter examples require a running message broker with a queue named example . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. $ <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. $ example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named example . $ <broker-instance-dir> /bin/artemis queue create --name example --address example --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. An optional command-line smoke test is sketched at the end of this appendix. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. $ <broker-instance-dir> /bin/artemis stop Revised on 2020-10-08 11:29:45 UTC | [
"<broker-instance-dir> /bin/artemis run",
"example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live",
"<broker-instance-dir> /bin/artemis queue create --name example --address example --auto-create-address --anycast",
"<broker-instance-dir> /bin/artemis stop"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_spring_boot_starter/using_the_broker_with_the_examples |
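As an optional smoke test beyond the steps above, the artemis CLI also ships producer and consumer tools; the broker URL below assumes the default acceptor on localhost:61616, and the exact options may vary between AMQ Broker versions.

# Send and then receive ten test messages on the example queue.
$ <broker-instance-dir>/bin/artemis producer --url tcp://localhost:61616 --destination queue://example --message-count 10
$ <broker-instance-dir>/bin/artemis consumer --url tcp://localhost:61616 --destination queue://example --message-count 10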