title | content | commands | url
---|---|---|---|
Chapter 17. Accessing odf-console with the ovs-multitenant plugin by manually enabling global pod networking | Chapter 17. Accessing odf-console with the ovs-multitenant plugin by manually enabling global pod networking In OpenShift Container Platform, when the ovs-multitenant plugin is used for software-defined networking (SDN), pods from different projects cannot send packets to or receive packets from pods and services of a different project. By default, pods cannot communicate between namespaces or projects because a project's pod networking is not global. To access odf-console, the OpenShift console pod in the openshift-console namespace needs to connect with the OpenShift Data Foundation odf-console in the openshift-storage namespace. This is possible only when you manually enable global pod networking. Issue When the `ovs-multitenant` plugin is used in OpenShift Container Platform, the odf-console plugin fails with the following message: Resolution Make the pod networking for the OpenShift Data Foundation project global: | [
"GET request for \"odf-console\" plugin failed: Get \"https://odf-console-service.openshift-storage.svc.cluster.local:9001/locales/en/plugin__odf-console.json\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)",
"oc adm pod-network make-projects-global openshift-storage"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/troubleshooting_openshift_data_foundation/accessing-odf-console-with-ovs-multitenant-plugin-by-manually-enabling-global-pod-networking_rhodf |
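A quick way to confirm that the resolution took effect is to inspect the project's NetNamespace object; with the ovs-multitenant plugin, a NETID of 0 indicates global pod networking. The following is a minimal sketch, assuming you are logged in with cluster-admin privileges:

```bash
# Make the openshift-storage pod network global (the Resolution command above),
# then verify that its NetNamespace now carries NETID 0.
oc adm pod-network make-projects-global openshift-storage
oc get netnamespace openshift-storage   # NETID 0 means the project's pod network is global
```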
Chapter 163. Infinispan Component | Chapter 163. Infinispan Component Available as of Camel version 2.13 This component allows you to interact with the Infinispan distributed data grid / cache. Infinispan is an extremely scalable, highly available key/value data store and data grid platform written in Java. From Camel 2.17 onwards Infinispan requires Java 8. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-infinispan</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 163.1. URI format infinispan://cacheName?[options] 163.2. URI Options The producer allows sending messages to a local Infinispan cache configured in the registry, or to a remote cache using the HotRod protocol. The consumer allows listening for events from a local Infinispan cache accessible from the registry. The Infinispan component supports 3 options, which are listed below. Name Description Default Type configuration (common) The default configuration shared among endpoints. InfinispanConfiguration cacheContainer (common) The default cache container. BasicCacheContainer resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Infinispan endpoint is configured using URI syntax: with the following path and query parameters: 163.2.1. Path Parameters (1 parameter): Name Description Default Type cacheName Required The cache to use String 163.2.2. Query Parameters (18 parameters): Name Description Default Type hosts (common) Specifies the host of the cache on the Infinispan instance String queryBuilder (common) Specifies the query builder. InfinispanQueryBuilder bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, are processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean clusteredListener (consumer) If true, the listener will be installed for the entire cluster false boolean command (consumer) Deprecated The operation to perform. PUT String customListener (consumer) Returns the custom listener in use, if provided InfinispanCustom Listener eventTypes (consumer) Specifies the set of event types to register by the consumer. Multiple event types can be separated by commas. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED String sync (consumer) If true, the consumer will receive notifications synchronously true boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. 
ExchangePattern operation (producer) The operation to perform. PUT InfinispanOperation cacheContainer (advanced) Specifies the cache Container to connect BasicCacheContainer cacheContainerConfiguration (advanced) The CacheContainer configuration. Uses if the cacheContainer is not defined. Must be the following types: org.infinispan.client.hotrod.configuration.Configuration - for remote cache interaction configuration; org.infinispan.configuration.cache.Configuration - for embedded cache interaction configuration; Object configurationProperties (advanced) Implementation specific properties for the CacheManager Map configurationUri (advanced) An implementation specific URI for the CacheManager String flags (advanced) A comma separated list of Flag to be applied by default on each cache invocation, not applicable to remote caches. String resultHeader (advanced) Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader Object synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 163.3. Spring Boot Auto-Configuration The component supports 21 options, which are listed below. Name Description Default Type camel.component.infinispan.cache-container The default cache container. The option is a org.infinispan.commons.api.BasicCacheContainer type. String camel.component.infinispan.configuration.cache-container Specifies the cache Container to connect BasicCacheContainer camel.component.infinispan.configuration.cache-container-configuration The CacheContainer configuration. Uses if the cacheContainer is not defined. Must be the following types: org.infinispan.client.hotrod.configuration.Configuration - for remote cache interaction configuration; org.infinispan.configuration.cache.Configuration - for embedded cache interaction configuration; Object camel.component.infinispan.configuration.clustered-listener If true, the listener will be installed for the entire cluster false Boolean camel.component.infinispan.configuration.configuration-properties Implementation specific properties for the CacheManager Map camel.component.infinispan.configuration.configuration-uri An implementation specific URI for the CacheManager String camel.component.infinispan.configuration.custom-listener Returns the custom listener in use, if provided InfinispanCustom Listener camel.component.infinispan.configuration.event-types Specifies the set of event types to register by the consumer. Multiple event can be separated by comma. The possible event types are: CACHE_ENTRY_ACTIVATED, CACHE_ENTRY_PASSIVATED, CACHE_ENTRY_VISITED, CACHE_ENTRY_LOADED, CACHE_ENTRY_EVICTED, CACHE_ENTRY_CREATED, CACHE_ENTRY_REMOVED, CACHE_ENTRY_MODIFIED, TRANSACTION_COMPLETED, TRANSACTION_REGISTERED, CACHE_ENTRY_INVALIDATED, DATA_REHASHED, TOPOLOGY_CHANGED, PARTITION_STATUS_CHANGED Set camel.component.infinispan.configuration.flags A comma separated list of Flag to be applied by default on each cache invocation, not applicable to remote caches. 
Flag[] camel.component.infinispan.configuration.hosts Specifies the host of the cache on Infinispan instance String camel.component.infinispan.configuration.operation The operation to perform. InfinispanOperation camel.component.infinispan.configuration.query-builder Specifies the query builder. InfinispanQueryBuilder camel.component.infinispan.configuration.result-header Store the operation result in a header instead of the message body. By default, resultHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If resultHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. This value can be overridden by an in message header named: CamelInfinispanOperationResultHeader Object camel.component.infinispan.configuration.sync If true, the consumer will receive notifications synchronously true Boolean camel.component.infinispan.customizer.embedded-cache-manager.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.infinispan.customizer.embedded-cache-manager.override Configure if the cache manager eventually set on the component should be overridden by the customizer. false Boolean camel.component.infinispan.customizer.remote-cache-manager.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.infinispan.customizer.remote-cache-manager.override Configure if the cache manager eventually set on the component should be overridden by the customizer. false Boolean camel.component.infinispan.enabled Enable infinispan component true Boolean camel.component.infinispan.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.infinispan.configuration.command The operation to perform. PUT String 163.4. Message Headers Name Default Value Type Context Description CamelInfinispanCacheName null String Shared The cache participating in the operation or event. CamelInfinispanOperation PUT InfinispanOperation Producer The operation to perform. CamelInfinispanMap null Map Producer A Map to use in case of CamelInfinispanOperationPutAll operation CamelInfinispanKey null Object Shared The key to perform the operation to or the key generating the event. CamelInfinispanValue null Object Producer The value to use for the operation. CamelInfinispanEventType null String Consumer The type of the received event. Possible values defined here org.infinispan.notifications.cachelistener.event.Event.Type CamelInfinispanIsPre null Boolean Consumer Infinispan fires two events for each operation: one before and one after the operation. CamelInfinispanLifespanTime null long Producer The Lifespan time of a value inside the cache. Negative values are interpreted as infinity. CamelInfinispanTimeUnit null String Producer The Time Unit of an entry Lifespan Time. CamelInfinispanMaxIdleTime null long Producer The maximum amount of time an entry is allowed to be idle for before it is considered as expired. CamelInfinispanMaxIdleTimeUnit null String Producer The Time Unit of an entry Max Idle Time. 
CamelInfinispanQueryBuilder null InfinispanQueryBuilder Producer From Camel 2.17: The QueryBuilder to use for the QUERY command; if not present, the command defaults to the one from the InfinispanConfiguration CamelInfinispanIgnoreReturnValues null Boolean Producer From Camel 2.17: If this header is set, the return value of a cache operation that returns something is ignored by the client application CamelInfinispanOperationResultHeader null String Producer From Camel 2.20: Store the operation result in a header instead of the message body 163.5. Examples Retrieve a specific key from the default cache using a custom cache container: from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant("123") .to("infinispan?cacheContainer=#cacheContainer"); Put a specific key into a named cache: from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) .setHeader(InfinispanConstants.KEY).constant("123") .to("infinispan:myCacheName"); Put a value with lifespan: from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) .setHeader(InfinispanConstants.KEY).constant("123") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT).constant(TimeUnit.MILLISECONDS.toString()) .to("infinispan:myCacheName"); Retrieve a specific key from the remote cache using a cache container configuration with additional parameters (host, port and protocol version): org.infinispan.client.hotrod.configuration.Configuration cacheContainerConfiguration = new org.infinispan.client.hotrod.configuration.ConfigurationBuilder() .addServer() .host("localhost") .port(9999) .version(org.infinispan.client.hotrod.ProtocolVersion.PROTOCOL_VERSION_25) .build(); ... from("direct:start") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant("123") .to("infinispan?cacheContainerConfiguration=#cacheContainerConfiguration"); 163.6. Using the Infinispan based idempotent repository In this section we will use the Infinispan based idempotent repository. First, we need to create a cacheManager and then configure our org.apache.camel.component.infinispan.processor.idempotent.InfinispanIdempotentRepository: <!-- set up the cache manager --> <bean id="cacheManager" class="org.infinispan.manager.DefaultCacheManager" init-method="start" destroy-method="stop"/> <!-- set up the repository --> <bean id="infinispanRepo" class="org.apache.camel.component.infinispan.processor.idempotent.InfinispanIdempotentRepository" factory-method="infinispanIdempotentRepository"> <argument ref="cacheManager"/> <argument value="idempotent"/> </bean> Then we can create our Infinispan idempotent repository in the spring XML file as well: <camelContext xmlns="http://camel.apache.org/schema/spring"> <route id="JpaMessageIdRepositoryTest"> <from uri="direct:start" /> <idempotentConsumer messageIdRepositoryRef="infinispanRepo"> <header>messageId</header> <to uri="mock:result" /> </idempotentConsumer> </route> </camelContext> 163.7. Using the Infinispan based route policy 163.8. See Also Configuring Camel Component Endpoint Getting Started | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-infinispan</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"infinispan://cacheName?[options]",
"infinispan:cacheName",
"from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant(\"123\") .to(\"infinispan?cacheContainer=#cacheContainer\");",
"from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.PUT) .setHeader(InfinispanConstants.KEY).constant(\"123\") .to(\"infinispan:myCacheName\");",
"from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant(\"123\") .setHeader(InfinispanConstants.LIFESPAN_TIME).constant(100L) .setHeader(InfinispanConstants.LIFESPAN_TIME_UNIT.constant(TimeUnit.MILLISECONDS.toString()) .to(\"infinispan:myCacheName\");",
"org.infinispan.client.hotrod.configuration.Configuration cacheContainerConfiguration = new org.infinispan.client.hotrod.configuration.ConfigurationBuilder() .addServer() .host(\"localhost\") .port(9999) .version(org.infinispan.client.hotrod.ProtocolVersion.PROTOCOL_VERSION_25) .build(); from(\"direct:start\") .setHeader(InfinispanConstants.OPERATION).constant(InfinispanOperation.GET) .setHeader(InfinispanConstants.KEY).constant(\"123\") .to(\"infinispan?cacheContainerConfiguration=#cacheContainerConfiguration\");",
"org.apache.camel.component.infinispan.processor.idempotent.InfinispanIdempotentRepository:",
"<!-- set up the cache manager --> <bean id=\"cacheManager\" class=\"org.infinispan.manager.DefaultCacheManager\" init-method=\"start\" destroy-method=\"stop\"/> <!-- set up the repository --> <bean id=\"infinispanRepo\" class=\"org.apache.camel.component.infinispan.processor.idempotent.InfinispanIdempotentRepository\" factory-method=\"infinispanIdempotentRepository\"> <argument ref=\"cacheManager\"/> <argument value=\"idempotent\"/> </bean>",
"<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"JpaMessageIdRepositoryTest\"> <from uri=\"direct:start\" /> <idempotentConsumer messageIdRepositoryRef=\"infinispanStore\"> <header>messageId</header> <to uri=\"mock:result\" /> </idempotentConsumer> </route> </camelContext>"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/infinispan-component |
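After adding the Maven dependency shown above, it can be useful to confirm that camel-infinispan, and the Infinispan client libraries it pulls in transitively, resolved against your project before wiring up routes. This is a minimal sketch; the group and artifact IDs come from the dependency snippet above, and the commands assume a standard Maven project layout:

```bash
# Show the resolved camel-infinispan dependency for the current project.
mvn dependency:tree -Dincludes=org.apache.camel:camel-infinispan

# Optionally check which Infinispan (HotRod) client artifacts were pulled in transitively.
mvn dependency:tree -Dincludes=org.infinispan:*
```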
Appendix B. Cluster Creation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 | Appendix B. Cluster Creation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7 Configuring a Red Hat High Availability Cluster in Red Hat Enterprise Linux 7 with Pacemaker requires a different set of configuration tools with a different administrative interface than configuring a cluster in Red Hat Enterprise Linux 6 with rgmanager . Section B.1, "Cluster Creation with rgmanager and with Pacemaker" summarizes the configuration differences between the various cluster components. Red Hat Enterprise Linux 6.5 and later releases support cluster configuration with Pacemaker, using the pcs configuration tool. Section B.2, "Pacemaker Installation in Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7" summarizes the Pacemaker installation differences between Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7. B.1. Cluster Creation with rgmanager and with Pacemaker Table B.1, "Comparison of Cluster Configuration with rgmanager and with Pacemaker" provides a comparative summary of how you configure the components of a cluster with rgmanager in Red Hat Enterprise Linux 6 and with Pacemaker in Red Hat Enterprise Linux 7. Table B.1. Comparison of Cluster Configuration with rgmanager and with Pacemaker Configuration Component rgmanager Pacemaker Cluster configuration file The cluster configuration file on each node is the cluster.conf file, which can be edited directly. Otherwise, use the luci or ccs interface to define the cluster configuration. The cluster and Pacemaker configuration files are corosync.conf and cib.xml . Do not edit the cib.xml file directly; use the pcs or pcsd interface instead. Network setup Configure IP addresses and SSH before configuring the cluster. Configure IP addresses and SSH before configuring the cluster. Cluster Configuration Tools luci , ccs command, manual editing of cluster.conf file. pcs or pcsd . Installation Install rgmanager (which pulls in all dependencies, including ricci , luci , and the resource and fencing agents). If needed, install lvm2-cluster and gfs2-utils . Install pcs , and the fencing agents you require. If needed, install lvm2-cluster and gfs2-utils . Starting cluster services Start and enable cluster services with the following procedure: Start rgmanager , cman , and, if needed, clvmd and gfs2 . Start ricci , and start luci if using the luci interface. Run chkconfig on for the needed services so that they start at boot time. Alternatively, you can enter ccs --start to start and enable the cluster services. Start and enable cluster services with the following procedure: On every node, execute systemctl start pcsd.service , then systemctl enable pcsd.service to enable pcsd to start at boot time. On one node in the cluster, enter pcs cluster start --all to start corosync and pacemaker . Controlling access to configuration tools For luci , the root user or a user with luci permissions can access luci . All access requires the ricci password for the node. The pcsd GUI requires that you authenticate as user hacluster , which is the common system user. The root user can set the password for hacluster . Cluster creation Name the cluster and define which nodes to include in the cluster with luci or ccs , or directly edit the cluster.conf file. Name the cluster and include nodes with the pcs cluster setup command or with the pcsd Web UI. You can add nodes to an existing cluster with the pcs cluster node add command or with the pcsd Web UI. 
Propagating cluster configuration to all nodes When configuring a cluster with luci , propagation is automatic. With ccs , use the --sync option. You can also use the cman_tool version -r command. Propagation of the cluster and Pacemaker configuration files, corosync.conf and cib.xml , is automatic on cluster setup or when adding a node or resource. Global cluster properties The following features are supported with rgmanager in Red Hat Enterprise Linux 6: * You can configure the system so that the system chooses which multicast address to use for IP multicasting in the cluster network. * If IP multicasting is not available, you can use the UDP Unicast transport mechanism. * You can configure a cluster to use the RRP protocol. Pacemaker in Red Hat Enterprise Linux 7 supports the following features for a cluster: * You can set no-quorum-policy for the cluster to specify what the system should do when the cluster does not have quorum. * For additional cluster properties you can set, see Table 12.1, "Cluster Properties" . Logging You can set global and daemon-specific logging configuration. See the file /etc/sysconfig/pacemaker for information on how to configure logging manually. Validating the cluster Cluster validation is automatic with luci and with ccs , using the cluster schema. The cluster is automatically validated on startup. The cluster is automatically validated on startup, or you can validate the cluster with pcs cluster verify . Quorum in two-node clusters With a two-node cluster, you can configure how the system determines quorum: * Configure a quorum disk * Use ccs or edit the cluster.conf file to set two_node=1 and expected_votes=1 to allow a single node to maintain quorum. pcs automatically adds the necessary options for a two-node cluster to corosync . Cluster status On luci , the current status of the cluster is visible in the various components of the interface, which can be refreshed. You can use the --getconf option of the ccs command to see the current configuration file. You can use the clustat command to display cluster status. You can display the current cluster status with the pcs status command. Resources You add resources of defined types and configure resource-specific properties with luci or the ccs command, or by editing the cluster.conf configuration file. You add resources of defined types and configure resource-specific properties with the pcs resource create command or with the pcsd Web UI. For general information on configuring cluster resources with Pacemaker see Chapter 6, Configuring Cluster Resources . Resource behavior, grouping, and start/stop order Define cluster services to configure how resources interact. With Pacemaker, you use resource groups as a shorthand method of defining a set of resources that need to be located together and started and stopped sequentially. In addition, you define how resources behave and interact in the following ways: * You set some aspects of resource behavior as resource options. * You use location constraints to determine which nodes a resource can run on. * You use order constraints to determine the order in which resources run. * You use colocation constraints to determine that the location of one resource depends on the location of another resource. For more complete information on these topics, see Chapter 6, Configuring Cluster Resources and Chapter 7, Resource Constraints . 
Resource administration: Moving, starting, stopping resources With luci , you can manage clusters, individual cluster nodes, and cluster services. With the ccs command, you can manage clusters. You can use the clusvcadm command to manage cluster services. You can temporarily disable a node so that it cannot host resources with the pcs cluster standby command, which causes the resources to migrate. You can stop a resource with the pcs resource disable command. Removing a cluster configuration completely With luci , you can select all nodes in a cluster for deletion to delete a cluster entirely. You can also remove the cluster.conf file from each node in the cluster. You can remove a cluster configuration with the pcs cluster destroy command. Resources active on multiple nodes, resources active on multiple nodes in multiple modes No equivalent. With Pacemaker, you can clone resources so that they can run on multiple nodes, and you can define cloned resources as master and slave resources so that they can run in multiple modes. For information on cloned resources and master/slave resources, see Chapter 9, Advanced Configuration . Fencing -- single fence device per node Create fencing devices globally or locally and add them to nodes. You can define post-fail delay and post-join delay values for the cluster as a whole. Create a fencing device for each node with the pcs stonith create command or with the pcsd Web UI. For devices that can fence multiple nodes, you need to define them only once rather than separately for each node. You can also define pcmk_host_map to configure fencing devices for all nodes with a single command; for information on pcmk_host_map see Table 5.1, "General Properties of Fencing Devices" . You can define the stonith-timeout value for the cluster as a whole. Multiple (backup) fencing devices per node Define backup devices with luci or the ccs command, or by editing the cluster.conf file directly. Configure fencing levels. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ap-ha-rhel6-rhel7-HAAR |
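The Pacemaker column in the comparison above maps to a short command sequence on Red Hat Enterprise Linux 7. The sketch below pulls those commands together for a hypothetical two-node cluster; the node names and cluster name are placeholders, and it assumes pcs and the required fence agents are already installed on both nodes:

```bash
# On every node: start and enable pcsd, then set a password for the hacluster user.
systemctl start pcsd.service
systemctl enable pcsd.service
passwd hacluster

# On one node: authenticate the nodes (prompts for the hacluster password),
# create the cluster, and start it on all nodes.
pcs cluster auth node1.example.com node2.example.com -u hacluster
pcs cluster setup --name mycluster node1.example.com node2.example.com
pcs cluster start --all

# Check the result.
pcs status
```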
4.8. Configuring Satellite errata viewing for a virtual machine | 4.8. Configuring Satellite errata viewing for a virtual machine In the Administration Portal, you can configure a virtual machine to display the available errata. The virtual machine needs to be associated with a Red Hat Satellite server to show available errata. Red Hat Virtualization 4.4 supports viewing errata with Red Hat Satellite 6.6. Prerequisites The Satellite server must be added as an external provider. The Manager and any virtual machines on which you want to view errata must all be registered in the Satellite server by their respective FQDNs. This ensures that external content host IDs do not need to be maintained in Red Hat Virtualization. Important Virtual machines added using an IP address cannot report errata. The host that the virtual machine runs on also needs to be configured to receive errata information from Satellite. The virtual machine must have the ovirt-guest-agent package installed. This package enables the virtual machine to report its host name to the Red Hat Virtualization Manager, which enables the Red Hat Satellite server to identify the virtual machine as a content host and report the applicable errata. The virtual machine must be registered to the Red Hat Satellite server as a content host. Use Red Hat Satellite remote execution to manage packages on hosts. Note The Katello agent is deprecated and will be removed in a future Satellite version. Migrate your processes to use the remote execution feature to update clients remotely. Procedure Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the Foreman/Satellite tab. Select the required Satellite server from the Provider drop-down list. Click OK . Additional resources Setting up Satellite errata viewing for a host in the Administration Guide Installing the Guest Agents, Tools, and Drivers on Linux in the Virtual Machine Management Guide for Red Hat Enterprise Linux virtual machines. Installing the Guest Agents, Tools, and Drivers on Windows in the Virtual Machine Management Guide for Windows virtual machines. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/Configuring_Satellite_Errata |
Chapter 2. Starting Fuse on JBoss EAP server | Chapter 2. Starting Fuse on JBoss EAP server Fuse on JBoss EAP supports both standalone mode and domain mode. This chapter explains how to start the server in either standalone mode or domain mode. 2.1. Starting JBoss EAP in standalone mode The commands in this section explain how to start Fuse on JBoss EAP as a standalone server. Prerequisites JBoss EAP 7.4.16 is installed. Procedure For Red Hat Enterprise Linux , run the following command: EAP_HOME/bin/standalone.sh For Microsoft Windows Server , run the following command: EAP_HOME\bin\standalone.bat 2.2. Starting JBoss EAP in domain mode The commands in this section explain how to start Fuse on JBoss EAP as a domain server. Prerequisites JBoss EAP 7.4.16 is installed. Procedure For Red Hat Enterprise Linux , run the following command: EAP_HOME/bin/domain.sh For Microsoft Windows Server , run the following command: EAP_HOME\bin\domain.bat Additional resources To obtain a list of parameters that you can pass to the start-up scripts, enter the -h parameter. For more information about starting and stopping JBoss Enterprise Application Platform using alternative and more advanced methods, see the Red Hat JBoss Enterprise Application Platform Configuration Guide . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/installing_on_jboss_eap/start-eap-server |
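As a hedged sketch of the "Additional resources" note above, the start-up scripts accept further parameters; the binding address and alternate server configuration shown here are illustrative values only, not requirements of Fuse:

```bash
# List the parameters accepted by the standalone start-up script.
EAP_HOME/bin/standalone.sh -h

# Example only: bind the public interface to all addresses and use an
# alternate server configuration file shipped with JBoss EAP.
EAP_HOME/bin/standalone.sh -b 0.0.0.0 --server-config=standalone-full.xml
```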
Chapter 6. Kafka breaking changes | Chapter 6. Kafka breaking changes This section describes any changes to Kafka that required a corresponding change to Streams for Apache Kafka to continue to work. 6.1. Using Kafka's example file connectors Kafka no longer includes the example file connectors FileStreamSourceConnector and FileStreamSinkConnector in its CLASSPATH and plugin.path by default. Streams for Apache Kafka has been updated so that you can still use these example connectors. The examples now have to be added to the plugin path like any connector. Streams for Apache Kafka provides an example connector configuration file with the configuration required to deploy the file connectors as KafkaConnector resources: examples/connect/source-connector.yaml See Deploying example KafkaConnector resources and Extending Kafka Connect with connector plugins . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/release_notes_for_streams_for_apache_kafka_2.7_on_openshift/kafka-change-str |
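For reference, once the file connector plugins have been added to your Kafka Connect image as described above, deploying the provided example configuration is a single kubectl (or oc) call. This is a sketch; the namespace name is an assumption and should match wherever your Kafka Connect cluster runs:

```bash
# Apply the example file connector configuration shipped with Streams for Apache Kafka.
kubectl apply -f examples/connect/source-connector.yaml -n my-kafka-namespace

# Confirm that the Cluster Operator picked up the KafkaConnector resource.
kubectl get kafkaconnectors -n my-kafka-namespace
```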
Chapter 3. Accessing ActiveMQ using Skupper | Chapter 3. Accessing ActiveMQ using Skupper Use public cloud resources to process data from a private message broker This example is part of a suite of examples showing the different ways you can use Skupper to connect services across cloud providers, data centers, and edge sites. Overview This example is a simple messaging application that shows how you can use Skupper to access an ActiveMQ broker at a remote site without exposing it to the public internet. It contains two services: An ActiveMQ broker running in a private data center. The broker has a queue named "notifications". An AMQP client running in the public cloud. It sends 10 messages to "notifications" and then receives them back. For the broker, this example uses the Apache ActiveMQ Artemis image from ArtemisCloud.io . The client is a simple Quarkus application. The example uses two Kubernetes namespaces, "private" and "public", to represent the private data center and public cloud. Prerequisites The kubectl command-line tool, version 1.15 or later ( installation guide ) Access to at least one Kubernetes cluster, from any provider you choose Procedure Clone the repo for this example. Install the Skupper command-line tool Set up your namespaces Deploy the message broker Create your sites Link your sites Expose the message broker Run the client Clone the repo for this example. Navigate to the appropriate GitHub repository from https://skupper.io/examples/index.html and clone the repository. Install the Skupper command-line tool This example uses the Skupper command-line tool to deploy Skupper. You need to install the skupper command only once for each development environment. See the Installation for details about installing the CLI. For configured systems, use the following command: Set up your namespaces Skupper is designed for use with multiple Kubernetes namespaces, usually on different clusters. The skupper and kubectl commands use your kubeconfig and current context to select the namespace where they operate. Your kubeconfig is stored in a file in your home directory. The skupper and kubectl commands use the KUBECONFIG environment variable to locate it. A single kubeconfig supports only one active context per user. Since you will be using multiple contexts at once in this exercise, you need to create distinct kubeconfigs. For each namespace, open a new terminal window. In each terminal, set the KUBECONFIG environment variable to a different path and log in to your cluster. Then create the namespace you wish to use and set the namespace on your current context. Note The login procedure varies by provider. See the documentation for yours: Amazon Elastic Kubernetes Service (EKS) Azure Kubernetes Service (AKS) Google Kubernetes Engine (GKE) IBM Kubernetes Service OpenShift Public: Private: Deploy the message broker In Private, use the kubectl apply command to install the broker. Private: Sample output: Create your sites A Skupper site is a location where components of your application are running. Sites are linked together to form a network for your application. In Kubernetes, a site is associated with a namespace. For each namespace, use skupper init to create a site. This deploys the Skupper router and controller. Then use skupper status to see the outcome. Public: Sample output: Private: Sample output: As you move through the steps below, you can use skupper status at any time to check your progress. Link your sites A Skupper link is a channel for communication between two sites. 
Links serve as a transport for application connections and requests. Creating a link requires use of two skupper commands in conjunction, skupper token create and skupper link create . The skupper token create command generates a secret token that signifies permission to create a link. The token also carries the link details. Then, in a remote site, The skupper link create command uses the token to create a link to the site that generated it. Note The link token is truly a secret. Anyone who has the token can link to your site. Make sure that only those you trust have access to it. First, use skupper token create in site Public to generate the token. Then, use skupper link create in site Private to link the sites. Public: Sample output: Private: Sample output: If your terminal sessions are on different machines, you may need to use scp or a similar tool to transfer the token securely. By default, tokens expire after a single use or 15 minutes after creation. Expose the message broker In Private, use skupper expose to expose the broker on the Skupper network. Then, in Public, use kubectl get service/broker to check that the service appears after a moment. Private: Sample output: Public: Sample output: Run the client In Public, use kubectl run to run the client. Public: Sample output: | [
"sudo dnf install skupper-cli",
"export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public",
"export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private",
"apply -f server",
"kubectl apply -f server deployment.apps/broker created",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. It has no exposed services.",
"skupper token create ~/secret.token",
"skupper token create ~/secret.token Token written to ~/secret.token",
"skupper link create ~/secret.token",
"skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.",
"skupper expose deployment/broker --port 5672",
"skupper expose deployment/broker --port 5672 deployment broker exposed as broker",
"get service/broker",
"kubectl get service/broker NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE broker ClusterIP 10.100.58.95 <none> 5672/TCP 2s",
"run client --attach --rm --restart Never --image quay.io/skupper/activemq-example-client --env SERVER=broker",
"kubectl run client --attach --rm --restart Never --image quay.io/skupper/activemq-example-client --env SERVER=broker ____ __ _____ ___ __ ____ ____ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / / -/ /_/ / /_/ / __ |/ , / ,< / // /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/_/ 2022-05-27 11:19:07,149 INFO [io.sma.rea.mes.amqp] (main) SRMSG16201: AMQP broker configured to broker:5672 for channel incoming-messages 2022-05-27 11:19:07,170 INFO [io.sma.rea.mes.amqp] (main) SRMSG16201: AMQP broker configured to broker:5672 for channel outgoing-messages 2022-05-27 11:19:07,198 INFO [io.sma.rea.mes.amqp] (main) SRMSG16212: Establishing connection with AMQP broker 2022-05-27 11:19:07,212 INFO [io.sma.rea.mes.amqp] (main) SRMSG16212: Establishing connection with AMQP broker 2022-05-27 11:19:07,215 INFO [io.quarkus] (main) client 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.9.2.Final) started in 0.397s. 2022-05-27 11:19:07,215 INFO [io.quarkus] (main) Profile prod activated. 2022-05-27 11:19:07,215 INFO [io.quarkus] (main) Installed features: [cdi, smallrye-context-propagation, smallrye-reactive-messaging, smallrye-reactive-messaging-amqp, vertx] Sent message 1 Sent message 2 Sent message 3 Sent message 4 Sent message 5 Sent message 6 Sent message 7 Sent message 8 Sent message 9 Sent message 10 2022-05-27 11:19:07,434 INFO [io.sma.rea.mes.amqp] (vert.x-eventloop-thread-0) SRMSG16213: Connection with AMQP broker established 2022-05-27 11:19:07,442 INFO [io.sma.rea.mes.amqp] (vert.x-eventloop-thread-0) SRMSG16213: Connection with AMQP broker established 2022-05-27 11:19:07,468 INFO [io.sma.rea.mes.amqp] (vert.x-eventloop-thread-0) SRMSG16203: AMQP Receiver listening address notifications Received message 1 Received message 2 Received message 3 Received message 4 Received message 5 Received message 6 Received message 7 Received message 8 Received message 9 Received message 10 Result: OK"
] | https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/examples/accessing_activemq_using_skupper |
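If the client cannot reach the broker, a useful first check, not shown in the procedure above, is the state of the link and of the exposed service from the Private site. This is a sketch; exact subcommand output varies by Skupper version:

```bash
# Check that the link created from the token is active.
skupper link status

# Confirm that the exposed broker service is listed on the Skupper network.
skupper service status
```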
Chapter 4. Publishing an automation execution environment | Chapter 4. Publishing an automation execution environment 4.1. Customizing an existing automation execution environment image Ansible Controller includes the following default execution environments: Minimal - Includes the latest Ansible-core 2.15 release along with Ansible Runner, but does not include collections or other content EE Supported - Minimal, plus all Red Hat-supported collections and dependencies While these environments cover many automation use cases, you can add additional items to customize these containers for your specific needs. The following procedure adds the kubernetes.core collection to the ee-minimal default image: Procedure Log in to registry.redhat.io via Podman: $ podman login -u="[username]" -p="[token/hash]" registry.redhat.io Ensure that you can pull the required automation execution environment base image: podman pull registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest Configure your Ansible Builder files to specify the required base image and any additional content to add to the new execution environment image. For example, to add the Kubernetes Core Collection from Galaxy to the image, use the Galaxy entry: collections: - kubernetes.core For more information about definition files and their content, see the definition file breakdown section . In the execution environment definition file, specify the original ee-minimal container's URL and tag in the base_image field under images . In doing so, your final execution-environment.yml file will look like the following: Example 4.1. A customized execution-environment.yml file version: 3 images: base_image: 'registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9:latest' dependencies: galaxy: collections: - kubernetes.core Note Since this example uses the community version of kubernetes.core and not a certified collection from automation hub, we do not need to create an ansible.cfg file or reference that in our definition file. Build the new execution environment image by using the following command: $ ansible-builder build -t [username]/new-ee where [username] specifies your username, and new-ee specifies the name of your new container image. Note If you do not use -t with build , an image called ansible-execution-env is created and loaded into the local container registry. Use the podman images command to confirm that your new container image is in that list: Example 4.2. Output of a podman images command with the image new-ee REPOSITORY TAG IMAGE ID CREATED SIZE localhost/new-ee latest f5509587efbb 3 minutes ago 769 MB Verify that the collection is installed: $ podman run [username]/new-ee ansible-doc -l kubernetes.core Tag the image for use in your automation hub: $ podman tag [username]/new-ee [automation-hub-IP-address]/[username]/new-ee Log in to your automation hub using Podman: Note You must have admin or appropriate container repository permissions for automation hub to push a container. For more information, see Manage containers in private automation hub . $ podman login -u="[username]" -p="[token/hash]" [automation-hub-IP-address] Push your image to the container registry in automation hub: $ podman push [automation-hub-IP-address]/[username]/new-ee Pull your new image into your automation controller instance: Go to automation controller. From the navigation panel, select Automation Execution Infrastructure Execution Environments . Click Add . 
Enter the appropriate information, then click Save to pull in the new image. Note If your instance of automation hub is password or token protected, ensure that you have the appropriate container registry credential set up. 4.2. Additional resources For more details on customizing execution environments based on common scenarios, see the following topics in the Ansible Builder Documentation : Copying arbitrary files to an execution environment Building execution environments with environment variables Building execution environments with environment variables and ansible.cfg | [
"podman login -u=\"[username]\" -p=\"[token/hash]\" registry.redhat.io",
"pull registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest",
"collections: - kubernetes.core",
"version: 3 images: base_image: 'registry.redhat.io/ansible-automation-platform-25/ee-minimal-rhel9:latest' dependencies: galaxy: collections: - kubernetes.core",
"ansible-builder build -t [username] / new-ee",
"REPOSITORY TAG IMAGE ID CREATED SIZE localhost/new-ee latest f5509587efbb 3 minutes ago 769 MB",
"podman run [username]/new-ee ansible-doc -l kubernetes.core",
"podman tag [username]/new-ee [automation-hub-IP-address]/[username]/new-ee",
"podman login -u=\"[username]\" -p=\"[token/hash]\" [automation-hub-IP-address]",
"podman push [automation-hub-IP-address]/[username]/new-ee"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/creating_and_using_execution_environments/assembly-publishing-exec-env |
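Before configuring the execution environment in automation controller, you can sanity-check the pushed image by pulling it back from private automation hub and repeating the collection check. This is a sketch that reuses the placeholder registry address and image names from the procedure above:

```bash
# Pull the image back from private automation hub (placeholder address and names).
podman pull [automation-hub-IP-address]/[username]/new-ee

# Re-run the collection check against the pulled image.
podman run [automation-hub-IP-address]/[username]/new-ee ansible-doc -l kubernetes.core
```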
Chapter 5. Preparing for data loss with IdM backups | Chapter 5. Preparing for data loss with IdM backups IdM provides the ipa-backup utility to backup IdM data, and the ipa-restore utility to restore servers and data from those backups. Note Red Hat recommends running backups as often as necessary on a hidden replica with all server roles installed, especially the Certificate Authority (CA) role if the environment uses the integrated IdM CA. See Installing an IdM hidden replica . 5.1. IdM backup types With the ipa-backup utility, you can create two types of backups: Full-server backup Contains all server configuration files related to IdM, and LDAP data in LDAP Data Interchange Format (LDIF) files IdM services must be offline . Suitable for rebuilding an IdM deployment from scratch. Data-only backup Contains LDAP data in LDIF files and the replication changelog IdM services can be online or offline . Suitable for restoring IdM data to a state in the past 5.2. Naming conventions for IdM backup files By default, IdM stores backups as .tar archives in subdirectories of the /var/lib/ipa/backup/ directory. The archives and subdirectories follow these naming conventions: Full-server backup An archive named ipa-full.tar in a directory named ipa-full- <YEAR-MM-DD-HH-MM-SS> , with the time specified in GMT time. Data-only backup An archive named ipa-data.tar in a directory named ipa-data- <YEAR-MM-DD-HH-MM-SS> , with the time specified in GMT time. Note Uninstalling an IdM server does not automatically remove any backup files. 5.3. Considerations when creating a backup The important behaviors and limitations of the ipa-backup command include the following: By default, the ipa-backup utility runs in offline mode, which stops all IdM services. The utility automatically restarts IdM services after the backup is finished. A full-server backup must always run with IdM services offline, but a data-only backup can be performed with services online. By default, the ipa-backup utility creates backups on the file system containing the /var/lib/ipa/backup/ directory. Red Hat recommends creating backups regularly on a file system separate from the production filesystem used by IdM, and archiving the backups to a fixed medium, such as tape or optical storage. Consider performing backups on hidden replicas . IdM services can be shut down on hidden replicas without affecting IdM clients. Starting with RHEL 8.3.0, the ipa-backup utility checks if all of the services used in your IdM cluster, such as a Certificate Authority (CA), Domain Name System (DNS), and Key Recovery Agent (KRA), are installed on the server where you are running the backup. If the server does not have all these services installed, the ipa-backup utility exits with a warning, because backups taken on that host would not be sufficient for a full cluster restoration. For example, if your IdM deployment uses an integrated Certificate Authority (CA), a backup run on a non-CA replica will not capture CA data. Red Hat recommends verifying that the replica where you perform an ipa-backup has all of the IdM services used in the cluster installed. You can bypass the IdM server role check with the ipa-backup --disable-role-check command, but the resulting backup will not contain all the data necessary to restore IdM fully. 5.4. Creating an IdM backup Create a full-server and data-only backup in offline and online modes using the ipa-backup command. Prerequisites You must have root privileges to run the ipa-backup utility. 
Procedure To create a full-server backup in offline mode, use the ipa-backup utility without additional options. To create an offline data-only backup, specify the --data option. To create a full-server backup that includes IdM log files, use the --logs option. To create a data-only backup while IdM services are running, specify both --data and --online options. Note If the backup fails due to insufficient space in the /tmp directory, use the TMPDIR environment variable to change the destination for temporary files created by the backup process: Verification Ensure the backup directory contains an archive with the backup. Additional resources ipa-backup command fails to finish (Red Hat Knowledgebase) 5.5. Creating a GPG2-encrypted IdM backup You can create encrypted backups using GNU Privacy Guard (GPG) encryption. The following procedure creates an IdM backup and encrypts it using a GPG2 key. Prerequisites You have created a GPG2 key. See Creating a GPG2 key . Procedure Create a GPG-encrypted backup by specifying the --gpg option. Verification Ensure that the backup directory contains an encrypted archive with a .gpg file extension. Additional resources Creating a backup . 5.6. Creating a GPG2 key The following procedure describes how to generate a GPG2 key to use with encryption utilities. Prerequisites You need root privileges. Procedure Install and configure the pinentry utility. Create a key-input file used for generating a GPG keypair with your preferred details. For example: Optional: By default, GPG2 stores its keyring in the ~/.gnupg file. To use a custom keyring location, set the GNUPGHOME environment variable to a directory that is only accessible by root. Generate a new GPG2 key based on the contents of the key-input file. Enter a passphrase to protect the GPG2 key. You use this passphrase to access the private key for decryption. Confirm the correct passphrase by entering it again. Verify that the new GPG2 key was created successfully. Verification List the GPG keys on the server. Additional resources GNU Privacy Guard | [
"ll /var/lib/ipa/backup/ ipa-full -2021-01-29-12-11-46 total 3056 -rw-r--r--. 1 root root 158 Jan 29 12:11 header -rw-r--r--. 1 root root 3121511 Jan 29 12:11 ipa-full.tar",
"ll /var/lib/ipa/backup/ ipa-data -2021-01-29-12-14-23 total 1072 -rw-r--r--. 1 root root 158 Jan 29 12:14 header -rw-r--r--. 1 root root 1090388 Jan 29 12:14 ipa-data.tar",
"ipa-backup Preparing backup on server.example.com Stopping IPA services Backing up ipaca in EXAMPLE-COM to LDIF Backing up userRoot in EXAMPLE-COM to LDIF Backing up EXAMPLE-COM Backing up files Starting IPA service Backed up to /var/lib/ipa/backup/ipa-full-2020-01-14-11-26-06 The ipa-backup command was successful",
"ipa-backup --data",
"ipa-backup --logs",
"ipa-backup --data --online",
"TMPDIR=/new/location ipa-backup",
"ls /var/lib/ipa/backup/ipa-full-2020-01-14-11-26-06 header ipa-full.tar",
"ipa-backup --gpg Preparing backup on server.example.com Stopping IPA services Backing up ipaca in EXAMPLE-COM to LDIF Backing up userRoot in EXAMPLE-COM to LDIF Backing up EXAMPLE-COM Backing up files Starting IPA service Encrypting /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00/ipa-full.tar Backed up to /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00 The ipa-backup command was successful",
"ls /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00 header ipa-full.tar.gpg",
"yum install pinentry mkdir ~/.gnupg -m 700 echo \"pinentry-program /usr/bin/pinentry-curses\" >> ~/.gnupg/gpg-agent.conf",
"cat >key-input <<EOF %echo Generating a standard key Key-Type: RSA Key-Length: 2048 Name-Real: GPG User Name-Comment: first key Name-Email: [email protected] Expire-Date: 0 %commit %echo Finished creating standard key EOF",
"export GNUPGHOME= /root/backup mkdir -p USDGNUPGHOME -m 700",
"gpg2 --batch --gen-key key-input",
"┌──────────────────────────────────────────────────────┐ │ Please enter the passphrase to │ │ protect your new key │ │ │ │ Passphrase: <passphrase> │ │ │ │ <OK> <Cancel> │ └──────────────────────────────────────────────────────┘",
"┌──────────────────────────────────────────────────────┐ │ Please re-enter this passphrase │ │ │ │ Passphrase: <passphrase> │ │ │ │ <OK> <Cancel> │ └──────────────────────────────────────────────────────┘",
"gpg: keybox '/root/backup/pubring.kbx' created gpg: Generating a standard key gpg: /root/backup/trustdb.gpg: trustdb created gpg: key BF28FFA302EF4557 marked as ultimately trusted gpg: directory '/root/backup/openpgp-revocs.d' created gpg: revocation certificate stored as '/root/backup/openpgp-revocs.d/8F6FCF10C80359D5A05AED67BF28FFA302EF4557.rev' gpg: Finished creating standard key",
"gpg2 --list-secret-keys gpg: checking the trustdb gpg: marginals needed: 3 completes needed: 1 trust model: pgp gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u / root /backup/pubring.kbx ------------------------ sec rsa2048 2020-01-13 [SCEA] 8F6FCF10C80359D5A05AED67BF28FFA302EF4557 uid [ultimate] GPG User (first key) <[email protected]>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/preparing_for_disaster_recovery_with_identity_management/preparing-for-data-loss-with-idm-backups_preparing-for-disaster-recovery |
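Decryption is covered by the restore procedure, but a minimal sketch of reading a GPG-encrypted backup archive back looks like the following; it assumes the same custom GNUPGHOME keyring used when the key was created, and the backup path matches the example output above:

```bash
# Decrypt the encrypted backup archive with the GPG2 key created earlier.
export GNUPGHOME=/root/backup
gpg2 --output ipa-full.tar \
     --decrypt /var/lib/ipa/backup/ipa-full-2020-01-13-14-38-00/ipa-full.tar.gpg
```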
Chapter 9. Container Images Based on Red Hat Software Collections 3.2 | Chapter 9. Container Images Based on Red Hat Software Collections 3.2 Component Description Supported architectures Application Images rhscl/nodejs-10-rhel7 Node.js 10 platform for building and running applications (EOL) x86_64, s390x, ppc64le rhscl/php-72-rhel7 PHP 7.2 platform for building and running applications (EOL) x86_64, s390x, ppc64le Daemon Images rhscl/nginx-114-rhel7 nginx 1.14 server and a reverse proxy server (EOL) x86_64, s390x, ppc64le Database Images rhscl/mysql-80-rhel7 MySQL 8.0 SQL database server x86_64, s390x, ppc64le Legend: x86_64 - AMD64 and Intel 64 architectures s390x - 64-bit IBM Z ppc64le - IBM POWER, little endian All images are based on components from Red Hat Software Collections. The images are available for Red Hat Enterprise Linux 7 through the Red Hat Container Registry. For detailed information about components provided by Red Hat Software Collections 3.2, see the Red Hat Software Collections 3.2 Release Notes . For more information about the Red Hat Developer Toolset 8.0 components, see the Red Hat Developer Toolset 8 User Guide . For information regarding container images based on Red Hat Software Collections 2, see Using Red Hat Software Collections 2 Container Images . EOL images are no longer supported. | null | https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/using_red_hat_software_collections_container_images/rhscl_3.2_images |
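To use one of the images in the table, pull it from the Red Hat Container Registry. The example below uses the MySQL 8.0 image and assumes you have authenticated to registry.redhat.io (the images were also published on registry.access.redhat.com):

```bash
# Pull the MySQL 8.0 container image from the Red Hat Container Registry.
podman pull registry.redhat.io/rhscl/mysql-80-rhel7

# Confirm the image is available locally.
podman images | grep mysql-80-rhel7
```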
7.6. RHEA-2013:1642 - new packages: redhat-support-lib-python and redhat-support-tool | 7.6. RHEA-2013:1642 - new packages: redhat-support-lib-python and redhat-support-tool New redhat-support-lib-python and redhat-support-tool packages are now available for Red Hat Enterprise Linux 6. The redhat-support-lib-python package provides a Python library that developers can use to easily write software solutions that leverage Red Hat Access subscription services. The redhat-support-tool utility facilitates console-based access to Red Hat's subscriber services and gives Red Hat subscribers more venues for accessing the content and services available to them as Red Hat customers. Further, it enables our customers to integrate and automate their helpdesk services with our subscription services. The capabilities of this package include: * Red Hat Access Knowledge Base article and solution viewing from the console (formatted as man pages). * Viewing, creating, modifying, and commenting on customer support cases from the console. * Attachment uploading directly to a customer support case or to ftp://dropbox.redhat.com/ from the console. * Full proxy support (that is, FTP and HTTP proxies). * Easy listing and downloading of attachments in customer support cases from the console. * Red Hat Access Knowledge Base searching on query terms, log messages, and other parameters, and viewing search results in a selectable list. * Easy uploading of log files, text files, and other sources to the Red Hat Access automatic problem determination engine for diagnosis. * Various other support-related commands. Detailed usage information for the tool can be found in the Red Hat Customer Portal at https://access.redhat.com/site/articles/445443 This enhancement update adds the redhat-support-lib-python and redhat-support-tool packages to Red Hat Enterprise Linux 6. (BZ# 987159 , BZ# 869395 , BZ# 880776 , BZ# 987171 , BZ# 987169 , BZ# 987163 ) All users who require redhat-support-lib-python and redhat-support-tool are advised to install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/rhea-2013-1642 |
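A hedged sketch of getting started with the tool on Red Hat Enterprise Linux 6 follows; the search string is only an example, and interactive use will prompt for your Red Hat Customer Portal credentials:

```bash
# Install the tool (pulls in redhat-support-lib-python as a dependency).
yum install redhat-support-tool

# Launch the interactive shell, or run a single Knowledge Base query directly.
redhat-support-tool
redhat-support-tool search "kernel panic"
```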
7.217. rpmdevtools | 7.217. rpmdevtools 7.217.1. RHBA-2012:1313 - rpmdevtools bug fix update Updated rpmdevtools packages that fix one bug are now available for Red Hat Enterprise Linux 6. The rpmdevtools packages contain scripts and (X)Emacs support files to aid in development of RPM packages. Bug Fix BZ#730770 Prior to this update, the sample spec files referred to a deprecated BuildRoot tag. The tag was ignored if it was defined. This update removes the BuildRoot tags from all sample spec files. All users of rpmdevtools are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/rpmdevtools |
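To show the fixed behavior in practice, a short sketch using rpmdev-newspec from these packages; the package name is arbitrary and the grep check simply confirms that no BuildRoot tag appears in the generated skeleton.
# Generate a new spec file skeleton in the current directory
rpmdev-newspec mypackage
# The deprecated BuildRoot tag should no longer be present
grep -i '^BuildRoot' mypackage.spec || echo "no BuildRoot tag found"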
Chapter 14. Uninstalling a cluster on Azure | Chapter 14. Uninstalling a cluster on Azure You can remove a cluster that you deployed to Microsoft Azure. 14.1. Removing a cluster that uses installer-provisioned infrastructure You can remove a cluster that uses installer-provisioned infrastructure from your cloud. Note After uninstallation, check your cloud provider for any resources not removed properly, especially with User Provisioned Infrastructure (UPI) clusters. There might be resources that the installer did not create or that the installer is unable to access. Prerequisites You have a copy of the installation program that you used to deploy the cluster. You have the files that the installation program generated when you created your cluster. Procedure From the directory that contains the installation program on the computer that you used to install the cluster, run the following command: $ ./openshift-install destroy cluster \ --dir <installation_directory> --log-level info 1 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different details, specify warn , debug , or error instead of info . Note You must specify the directory that contains the cluster definition files for your cluster. The installation program requires the metadata.json file in this directory to delete the cluster. Optional: Delete the <installation_directory> directory and the OpenShift Container Platform installation program. 14.2. Deleting Microsoft Azure resources with the Cloud Credential Operator utility After uninstalling an OpenShift Container Platform cluster that uses short-term credentials managed outside the cluster, you can use the CCO utility ( ccoctl ) to remove the Microsoft Azure (Azure) resources that ccoctl created during installation. Prerequisites Extract and prepare the ccoctl binary. Uninstall an OpenShift Container Platform cluster on Azure that uses short-term credentials. Procedure Delete the Azure resources that ccoctl created by running the following command: $ ccoctl azure delete \ --name=<name> \ 1 --region=<azure_region> \ 2 --subscription-id=<azure_subscription_id> \ 3 --delete-oidc-resource-group 1 <name> matches the name that was originally used to create and tag the cloud resources. 2 <azure_region> is the Azure region in which to delete cloud resources. 3 <azure_subscription_id> is the Azure subscription ID for which to delete cloud resources. Verification To verify that the resources are deleted, query Azure. For more information, refer to Azure documentation. | [
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2",
"ccoctl azure delete --name=<name> \\ 1 --region=<azure_region> \\ 2 --subscription-id=<azure_subscription_id> \\ 3 --delete-oidc-resource-group"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_azure/uninstalling-cluster-azure |
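As a hedged complement to the verification step above, the Azure CLI can be used to look for leftover resources; the resource group name shown is a placeholder, and which groups to inspect depends on your cluster's infrastructure ID.
# List remaining resource groups in the subscription
az group list --output table
# Inspect a specific group before removing anything manually
az resource list --resource-group <resource_group_name> --output table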
8.136. openswan | 8.136. openswan 8.136.1. RHBA-2013:1718 - openswan bug fix and enhancement update Updated openswan packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Openswan is a free implementation of Internet Protocol Security (IPsec) and Internet Key Exchange (IKE). IPsec uses strong cryptography to provide both authentication and encryption services. These services allow you to build secure tunnels through untrusted networks. Bug Fixes BZ# 771612 Previously, the "ipsec barf" command called the grep utility on the /var/log/lastlog file, which caused the system to use a significant amount of memory. After this update, "ipsec barf" uses the "lastlog -u user" command, which prevents the utility from using too much memory. BZ# 831669 According to the RFC 5996 standard, reserved fields must be ignored on receipt, irrespective of their value. Previously, however, the contents of the reserved fields were not being ignored on receipt for some payloads. Consequently, Openswan reported an error message and Internet Key Exchange (IKE) negotiation failed. With this update, Openswan has been modified to ignore the reserved fields and IKE negotiation succeeds regardless of the reserved field value. BZ# 831676 When a connection was configured in transport mode, Openswan did not pass information about traffic selectors to the NETKEY/XFRM IPsec kernel stack during the setup of security associations (SAs). Consequently, the information was not available in the output of the "ip xfrm state" command. With this update, Openswan correctly passes the traffic selectors information to the kernel when SAs are set up in transport mode. BZ# 846797 When a tunnel utilizing Dead Peer Detection (DPD) was established between two IPsec hosts, for example host1 and host2, and host2 went offline while host1 continued to transmit data, host1 continually queued multiple phase 2 requests after the DPD action. When host2 came back online, the stack of pending phase 2 requests was established, leaving a new IPsec Security Association (SA) and a large group of extra SAs that consumed system resources and eventually expired. This update ensures that Openswan has just a single pending phase 2 request during the time that host2 is down, and when host2 comes back up, only a single new IPsec SA is established, thus preventing this bug. BZ# 848132 When a tunnel was established between two IPsec hosts, for example host1 and host2, using the "dpdaction=restart" option, if host2 went offline and Dead Peer Detection (DPD) was activated, the new phase1 replacement started retransmitting, but was subject to a limited number of retries, even if the "keyingtries=%forever" option (which is the default) was set. If host2 did not reconnect in time, the phase1 replacement expired and then the tunnel did not rekey until the old phase1 Security Association (SA) expired (in about 10 minutes by default). This meant that using the "dpdaction=restart" option only allowed a short window for the peer to reconnect. With this update, the phase1 replacement continues to try to rekey, thus avoiding the retransmission limit and timeout. BZ# 868986 Previously, certificates specified by names in "rightid" connection options containing a comma were ignored and these connections were not authenticated due to an ID mismatch. With this update, Openswan now supports escaped commas inside the OID field in the "rightid" option.
BZ# 881914 Previously, when certificates signed with the SHA2 digest algorithm were used for peer authentication, connection setup failed with the following error: This bug has been fixed and Openswan now recognizes these certificates and sets up a connection correctly. BZ# 954249 The openswan package for Internet Protocol Security (IPsec) contains two diagnostic commands, "ipsec barf" and "ipsec look", that can cause the iptables kernel modules for NAT and IP connection tracking to be loaded. On very busy systems, loading such kernel modules can result in severely degraded performance or lead to a crash when the kernel runs out of resources. With this update, the diagnostic commands do not cause loading of the NAT and IP connection tracking modules. This update does not affect systems that already use IP connection tracking or NAT, as the iptables and ip6tables services will already have loaded these kernel modules. BZ# 958969 Previously, when the IPsec daemon (pluto) attempted to verify the signature of a Certificate Revocation List (CRL), if the signature value began with a zero byte and had another zero as padding, the mpz() functions stripped out all leading zeros. This resulted in the Network Security Services (NSS) data input being one byte short and consequently failing verification when NSS compared its length to the modulus length. This update removes the conversions into arbitrary-precision arithmetic (bignum) objects and handles the leading zero by moving the pointer one position forward and reducing the length of the signature by 1. As a result, verification of CRLs now works as expected even with leading zeros in the signature. BZ# 960171 Previously, the order of the load_crls() and load_authcerts_from_nss() functions in the plutomain.c file was incorrect. As a consequence, when the IPsec daemon (pluto) attempted to load the Certificate Revocation Lists (CRLs) from the /etc/ipsec.d/crls/ directory during startup, loading failed because pluto checked for a loaded Certification Authority (CA) when there was none available. This update swaps the order of the aforementioned functions in the plutomain.c file, and now pluto no longer fails during startup and loads the CRLs successfully. BZ# 965014 Previously, the Openswan Internet Key Exchange version 2 (IKEv2) implementation did not set the "reserved" field to zero. As a consequence, Openswan did not pass the TAHI IKEv2 test. After this update, Openswan now sets the "reserved" field to zero and successfully passes the TAHI IKEv2 test. BZ# 975550 Previously, when an MD5 hash was used in the Internet Key Exchange version 2 (IKEv2) algorithm in Openswan to connect to another IPsec implementation, for example strongswan, occasionally the installed kernel security policy entry had a different "enc" or "auth" value than the corresponding values on the other side. As a consequence, a connection could not be established even though the Security Association (SA) was established correctly. After this update, these values are set correctly in Openswan and a connection can be established successfully. BZ# 985596 Previously, when in FIPS mode, Openswan did not allow the use of SHA2 algorithms. This update enables the use of SHA2 algorithms in FIPS mode. BZ# 994240 Initial support for passing traffic selectors to an XFRM IPsec stack for transport mode was incomplete and did not include the necessary workarounds for NAT-traversal support. As a consequence, Openswan could not establish an L2TP connection with devices which use NAT-Traversal.
After this update, the direction of the IPsec Security Association (SA) is now passed to the netlink_setup_sa() function so that the client IP is substituted with the host IP and the selector works for NAT transport mode. BZ# 1002633 After this update, Openswan now uses dracut-fips to determine whether it should run in FIPS mode. Enhancements BZ# 916743 This update introduces a feature to control transmission delay timing for IPsec connections. BZ# 880004 With this update, Openswan now supports Internet Key Exchange (IKE) fragmentation. Openswan can now successfully connect to devices which support IKE fragmentation. BZ# 908476 Support for the Internet Key Exchange version 1 (IKEv1) INITIAL-CONTACT IPsec message, as defined in Section 4.6.3.3 of the RFC 2407 specification, has been added to Openswan. This addresses an interoperability bug where a peer does not replace an existing IPsec Security Association (SA) with a newly negotiated one unless a Notification Payload message is present. BZ# 957400 The kernel module aesni_intel is now loaded by Openswan on startup. This update significantly improves the performance of Openswan on machines that support Advanced Encryption Standard New Instructions (AES-NI). BZ# 959568 The default behavior of Openswan is to send NAT-Traversal keepalive packets. Previously, sending of keepalive packets could only be disabled globally. After this update, the user can disable NAT-Traversal keepalive packet sending per connection. Users of openswan are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
"digest algorithm not supported"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/openswan |
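To make the DPD and keepalive options discussed in these fixes concrete, a hedged /etc/ipsec.conf connection sketch follows. The addresses are placeholders, and the per-connection nat-keepalive setting reflects the enhancement in BZ# 959568 as an assumption; check ipsec.conf(5) on your installed version for the exact option names.
# Append a sample tunnel definition to /etc/ipsec.conf (values are placeholders)
cat >> /etc/ipsec.conf <<'EOF'
conn example-tunnel
    left=192.0.2.1
    right=198.51.100.1
    authby=secret
    dpddelay=30
    dpdtimeout=120
    dpdaction=restart
    keyingtries=%forever
    nat-keepalive=no
    auto=start
EOF
# Reload the configuration and bring the tunnel up
service ipsec restart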
Chapter 11. Config map reference for the Cluster Monitoring Operator | Chapter 11. Config map reference for the Cluster Monitoring Operator 11.1. Cluster Monitoring Operator configuration reference Parts of Red Hat OpenShift Service on AWS cluster monitoring are configurable. The API is accessible by setting parameters defined in various config maps. To configure monitoring components that monitor user-defined projects, edit the ConfigMap object named user-workload-monitoring-config in the openshift-user-workload-monitoring namespace. These configurations are defined by UserWorkloadConfiguration . The configuration file is always defined under the config.yaml key in the config map data. Important Not all configuration parameters for the monitoring stack are exposed. Only the parameters and fields listed in this reference are supported for configuration. For more information about supported configurations, see Maintenance and support for monitoring . Configuring cluster monitoring is optional. If a configuration does not exist or is empty, default values are used. If the configuration has invalid YAML data, or if it contains unsupported or duplicated fields that bypassed early validation, the Cluster Monitoring Operator stops reconciling the resources and reports the Degraded=True status in the status conditions of the Operator. 11.2. AdditionalAlertmanagerConfig 11.2.1. Description The AdditionalAlertmanagerConfig resource defines settings for how a component communicates with additional Alertmanager instances. 11.2.2. Required apiVersion Appears in: PrometheusK8sConfig , PrometheusRestrictedConfig , ThanosRulerConfig Property Type Description apiVersion string Defines the API version of Alertmanager. Possible values are v1 or v2 . The default is v2 . bearerToken *v1.SecretKeySelector Defines the secret key reference containing the bearer token to use when authenticating to Alertmanager. pathPrefix string Defines the path prefix to add in front of the push endpoint path. scheme string Defines the URL scheme to use when communicating with Alertmanager instances. Possible values are http or https . The default value is http . staticConfigs []string A list of statically configured Alertmanager endpoints in the form of <hosts>:<port> . timeout *string Defines the timeout value used when sending alerts. tlsConfig TLSConfig Defines the TLS settings to use for Alertmanager connections. 11.3. AlertmanagerMainConfig 11.3.1. Description The AlertmanagerMainConfig resource defines settings for the Alertmanager component in the openshift-monitoring namespace. Appears in: ClusterMonitoringConfiguration Property Type Description enabled *bool A Boolean flag that enables or disables the main Alertmanager instance in the openshift-monitoring namespace. The default value is true . enableUserAlertmanagerConfig bool A Boolean flag that enables or disables user-defined namespaces to be selected for AlertmanagerConfig lookups. This setting only applies if the user workload monitoring instance of Alertmanager is not enabled. The default value is false . logLevel string Defines the log level setting for Alertmanager. The possible values are: error , warn , info , debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the Pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Alertmanager container. secrets []string Defines a list of secrets to be mounted into Alertmanager. 
The secrets must reside within the same namespace as the Alertmanager object. They are added as volumes named secret-<secret-name> and mounted at /etc/alertmanager/secrets/<secret-name> in the alertmanager container of the Alertmanager pods. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Alertmanager. Use this setting to configure the persistent volume claim, including storage class, volume size, and name. 11.4. AlertmanagerUserWorkloadConfig 11.4.1. Description The AlertmanagerUserWorkloadConfig resource defines the settings for the Alertmanager instance used for user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description enabled bool A Boolean flag that enables or disables a dedicated instance of Alertmanager for user-defined alerts in the openshift-user-workload-monitoring namespace. The default value is false . enableAlertmanagerConfig bool A Boolean flag to enable or disable user-defined namespaces to be selected for AlertmanagerConfig lookup. The default value is false . logLevel string Defines the log level setting for Alertmanager for user workload monitoring. The possible values are error , warn , info , and debug . The default value is info . resources *v1.ResourceRequirements Defines resource requests and limits for the Alertmanager container. secrets []string Defines a list of secrets to be mounted into Alertmanager. The secrets must be located within the same namespace as the Alertmanager object. They are added as volumes named secret-<secret-name> and mounted at /etc/alertmanager/secrets/<secret-name> in the alertmanager container of the Alertmanager pods. nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Alertmanager. Use this setting to configure the persistent volume claim, including storage class, volume size and name. 11.5. ClusterMonitoringConfiguration 11.5.1. Description The ClusterMonitoringConfiguration resource defines settings that customize the default platform monitoring stack through the cluster-monitoring-config config map in the openshift-monitoring namespace. Property Type Description alertmanagerMain * AlertmanagerMainConfig AlertmanagerMainConfig defines settings for the Alertmanager component in the openshift-monitoring namespace. enableUserWorkload *bool UserWorkloadEnabled is a Boolean flag that enables monitoring for user-defined projects. userWorkload * UserWorkloadConfig UserWorkload defines settings for the monitoring of user-defined projects. kubeStateMetrics * KubeStateMetricsConfig KubeStateMetricsConfig defines settings for the kube-state-metrics agent. metricsServer * MetricsServerConfig MetricsServer defines settings for the Metrics Server component. prometheusK8s * PrometheusK8sConfig PrometheusK8sConfig defines settings for the Prometheus component. prometheusOperator * PrometheusOperatorConfig PrometheusOperatorConfig defines settings for the Prometheus Operator component. 
prometheusOperatorAdmissionWebhook * PrometheusOperatorAdmissionWebhookConfig PrometheusOperatorAdmissionWebhookConfig defines settings for the admission webhook component of Prometheus Operator. openshiftStateMetrics * OpenShiftStateMetricsConfig OpenShiftMetricsConfig defines settings for the openshift-state-metrics agent. telemeterClient * TelemeterClientConfig TelemeterClientConfig defines settings for the Telemeter Client component. thanosQuerier * ThanosQuerierConfig ThanosQuerierConfig defines settings for the Thanos Querier component. nodeExporter NodeExporterConfig NodeExporterConfig defines settings for the node-exporter agent. monitoringPlugin * MonitoringPluginConfig MonitoringPluginConfig defines settings for the monitoring console-plugin component. 11.6. KubeStateMetricsConfig 11.6.1. Description The KubeStateMetricsConfig resource defines settings for the kube-state-metrics agent. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the KubeStateMetrics container. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. 11.7. MetricsServerConfig 11.7.1. Description The MetricsServerConfig resource defines settings for the Metrics Server component. Appears in: ClusterMonitoringConfiguration Property Type Description audit *Audit Defines the audit configuration used by the Metrics Server instance. Possible profile values are Metadata , Request , RequestResponse , and None . The default value is Metadata . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. tolerations []v1.Toleration Defines tolerations for the pods. resources *v1.ResourceRequirements Defines resource requests and limits for the Metrics Server container. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. 11.8. MonitoringPluginConfig 11.8.1. Description The MonitoringPluginConfig resource defines settings for the web console plugin component in the openshift-monitoring namespace. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the console-plugin container. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. 11.9. NodeExporterCollectorBuddyInfoConfig 11.9.1. Description The NodeExporterCollectorBuddyInfoConfig resource works as an on/off switch for the buddyinfo collector of the node-exporter agent. By default, the buddyinfo collector is disabled. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the buddyinfo collector. 11.10. NodeExporterCollectorConfig 11.10.1. Description The NodeExporterCollectorConfig resource defines settings for individual collectors of the node-exporter agent. Appears in: NodeExporterConfig Property Type Description cpufreq NodeExporterCollectorCpufreqConfig Defines the configuration of the cpufreq collector, which collects CPU frequency statistics. Disabled by default. 
tcpstat NodeExporterCollectorTcpStatConfig Defines the configuration of the tcpstat collector, which collects TCP connection statistics. Disabled by default. netdev NodeExporterCollectorNetDevConfig Defines the configuration of the netdev collector, which collects network devices statistics. Enabled by default. netclass NodeExporterCollectorNetClassConfig Defines the configuration of the netclass collector, which collects information about network devices. Enabled by default. buddyinfo NodeExporterCollectorBuddyInfoConfig Defines the configuration of the buddyinfo collector, which collects statistics about memory fragmentation from the node_buddyinfo_blocks metric. This metric collects data from /proc/buddyinfo . Disabled by default. mountstats NodeExporterCollectorMountStatsConfig Defines the configuration of the mountstats collector, which collects statistics about NFS volume I/O activities. Disabled by default. ksmd NodeExporterCollectorKSMDConfig Defines the configuration of the ksmd collector, which collects statistics from the kernel same-page merger daemon. Disabled by default. processes NodeExporterCollectorProcessesConfig Defines the configuration of the processes collector, which collects statistics from processes and threads running in the system. Disabled by default. systemd NodeExporterCollectorSystemdConfig Defines the configuration of the systemd collector, which collects statistics on the systemd daemon and its managed services. Disabled by default. 11.11. NodeExporterCollectorCpufreqConfig 11.11.1. Description Use the NodeExporterCollectorCpufreqConfig resource to enable or disable the cpufreq collector of the node-exporter agent. By default, the cpufreq collector is disabled. Under certain circumstances, enabling the cpufreq collector increases CPU usage on machines with many cores. If you enable this collector and have machines with many cores, monitor your systems closely for excessive CPU usage. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the cpufreq collector. 11.12. NodeExporterCollectorKSMDConfig 11.12.1. Description Use the NodeExporterCollectorKSMDConfig resource to enable or disable the ksmd collector of the node-exporter agent. By default, the ksmd collector is disabled. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the ksmd collector. 11.13. NodeExporterCollectorMountStatsConfig 11.13.1. Description Use the NodeExporterCollectorMountStatsConfig resource to enable or disable the mountstats collector of the node-exporter agent. By default, the mountstats collector is disabled. If you enable the collector, the following metrics become available: node_mountstats_nfs_read_bytes_total , node_mountstats_nfs_write_bytes_total , and node_mountstats_nfs_operations_requests_total . Be aware that these metrics can have a high cardinality. If you enable this collector, closely monitor any increases in memory usage for the prometheus-k8s pods. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the mountstats collector. 11.14. NodeExporterCollectorNetClassConfig 11.14.1. Description Use the NodeExporterCollectorNetClassConfig resource to enable or disable the netclass collector of the node-exporter agent. By default, the netclass collector is enabled. 
If you disable this collector, these metrics become unavailable: node_network_info , node_network_address_assign_type , node_network_carrier , node_network_carrier_changes_total , node_network_carrier_up_changes_total , node_network_carrier_down_changes_total , node_network_device_id , node_network_dormant , node_network_flags , node_network_iface_id , node_network_iface_link , node_network_iface_link_mode , node_network_mtu_bytes , node_network_name_assign_type , node_network_net_dev_group , node_network_speed_bytes , node_network_transmit_queue_length , and node_network_protocol_type . Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the netclass collector. useNetlink bool A Boolean flag that activates the netlink implementation of the netclass collector. The default value is true , which activates the netlink mode. This implementation improves the performance of the netclass collector. 11.15. NodeExporterCollectorNetDevConfig 11.15.1. Description Use the NodeExporterCollectorNetDevConfig resource to enable or disable the netdev collector of the node-exporter agent. By default, the netdev collector is enabled. If disabled, these metrics become unavailable: node_network_receive_bytes_total , node_network_receive_compressed_total , node_network_receive_drop_total , node_network_receive_errs_total , node_network_receive_fifo_total , node_network_receive_frame_total , node_network_receive_multicast_total , node_network_receive_nohandler_total , node_network_receive_packets_total , node_network_transmit_bytes_total , node_network_transmit_carrier_total , node_network_transmit_colls_total , node_network_transmit_compressed_total , node_network_transmit_drop_total , node_network_transmit_errs_total , node_network_transmit_fifo_total , and node_network_transmit_packets_total . Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the netdev collector. 11.16. NodeExporterCollectorProcessesConfig 11.16.1. Description Use the NodeExporterCollectorProcessesConfig resource to enable or disable the processes collector of the node-exporter agent. If the collector is enabled, the following metrics become available: node_processes_max_processes , node_processes_pids , node_processes_state , node_processes_threads , node_processes_threads_state . The metric node_processes_state and node_processes_threads_state can have up to five series each, depending on the state of the processes and threads. The possible states of a process or a thread are: D (UNINTERRUPTABLE_SLEEP), R (RUNNING & RUNNABLE), S (INTERRUPTABLE_SLEEP), T (STOPPED), or Z (ZOMBIE). By default, the processes collector is disabled. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the processes collector. 11.17. NodeExporterCollectorSystemdConfig 11.17.1. Description Use the NodeExporterCollectorSystemdConfig resource to enable or disable the systemd collector of the node-exporter agent. By default, the systemd collector is disabled. If enabled, the following metrics become available: node_systemd_system_running , node_systemd_units , node_systemd_version . If the unit uses a socket, it also generates the following metrics: node_systemd_socket_accepted_connections_total , node_systemd_socket_current_connections , node_systemd_socket_refused_connections_total . 
You can use the units parameter to select the systemd units to be included by the systemd collector. The selected units are used to generate the node_systemd_unit_state metric, which shows the state of each systemd unit. However, this metric's cardinality might be high (at least five series per unit per node). If you enable this collector with a long list of selected units, closely monitor the prometheus-k8s deployment for excessive memory usage. Note that the node_systemd_timer_last_trigger_seconds metric is only shown if you have configured the value of the units parameter as logrotate.timer . Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the systemd collector. units []string A list of regular expression (regex) patterns that match systemd units to be included by the systemd collector. By default, the list is empty, so the collector exposes no metrics for systemd units. 11.18. NodeExporterCollectorTcpStatConfig 11.18.1. Description The NodeExporterCollectorTcpStatConfig resource works as an on/off switch for the tcpstat collector of the node-exporter agent. By default, the tcpstat collector is disabled. Appears in: NodeExporterCollectorConfig Property Type Description enabled bool A Boolean flag that enables or disables the tcpstat collector. 11.19. NodeExporterConfig 11.19.1. Description The NodeExporterConfig resource defines settings for the node-exporter agent. Appears in: ClusterMonitoringConfiguration Property Type Description collectors NodeExporterCollectorConfig Defines which collectors are enabled and their additional configuration parameters. maxProcs uint32 The target number of CPUs on which the node-exporter's process will run. The default value is 0 , which means that node-exporter runs on all CPUs. If a kernel deadlock occurs or if performance degrades when reading from sysfs concurrently, you can change this value to 1 , which limits node-exporter to running on one CPU. For nodes with a high CPU count, you can set the limit to a low number, which saves resources by preventing Go routines from being scheduled to run on all CPUs. However, I/O performance degrades if the maxProcs value is set too low and there are many metrics to collect. ignoredNetworkDevices *[]string A list of network devices, defined as regular expressions, that you want to exclude from the relevant collector configuration such as netdev and netclass . If no list is specified, the Cluster Monitoring Operator uses a predefined list of devices to be excluded to minimize the impact on memory usage. If the list is empty, no devices are excluded. If you modify this setting, monitor the prometheus-k8s deployment closely for excessive memory usage. resources *v1.ResourceRequirements Defines resource requests and limits for the NodeExporter container. 11.20. OpenShiftStateMetricsConfig 11.20.1. Description The OpenShiftStateMetricsConfig resource defines settings for the openshift-state-metrics agent. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the OpenShiftStateMetrics container. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. 11.21. PrometheusK8sConfig 11.21.1. 
Description The PrometheusK8sConfig resource defines settings for the Prometheus component. Appears in: ClusterMonitoringConfiguration Property Type Description additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. enforcedBodySizeLimit string Enforces a body size limit for Prometheus scraped metrics. If a scraped target's body response is larger than the limit, the scrape will fail. The following values are valid: an empty value to specify no limit, a numeric value in Prometheus size format (such as 64MB ), or the string automatic , which indicates that the limit will be automatically calculated based on cluster capacity. The default value is empty, which indicates no limit. externalLabels map[string]string Defines labels to be added to any time series or alerts when communicating with external systems such as federation, remote storage, and Alertmanager. By default, no labels are added. logLevel string Defines the log level setting for Prometheus. The possible values are: error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. queryLogFile string Specifies the file to which PromQL queries are logged. This setting can be either a filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus , or a full path to a location where an emptyDir volume will be mounted and the queries saved. Writing to /dev/stderr , /dev/stdout or /dev/null is supported, but writing to any other /dev/ path is not supported. Relative paths are also not supported. By default, PromQL queries are not logged. remoteWrite [] RemoteWriteSpec Defines the remote write configuration, including URL, authentication, and relabeling settings. resources *v1.ResourceRequirements Defines resource requests and limits for the Prometheus container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s= seconds,m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 15d . retentionSize string Defines the maximum amount of disk space used by data blocks plus the write-ahead log (WAL). Supported values are B , KB , KiB , MB , MiB , GB , GiB , TB , TiB , PB , PiB , EB , and EiB . By default, no limit is defined. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. collectionProfile CollectionProfile Defines the metrics collection profile that Prometheus uses to collect metrics from the platform components. Supported values are full or minimal . In the full profile (default), Prometheus collects all metrics that are exposed by the platform components. In the minimal profile, Prometheus only collects metrics necessary for the default platform alerts, recording rules, telemetry, and console dashboards. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Prometheus. Use this setting to configure the persistent volume claim, including storage class, volume size and name. 11.22. PrometheusOperatorConfig 11.22.1. Description The PrometheusOperatorConfig resource defines settings for the Prometheus Operator component. 
Appears in: ClusterMonitoringConfiguration , UserWorkloadConfiguration Property Type Description logLevel string Defines the log level settings for Prometheus Operator. The possible values are error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the PrometheusOperator container. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. 11.23. PrometheusOperatorAdmissionWebhookConfig 11.23.1. Description The PrometheusOperatorAdmissionWebhookConfig resource defines settings for the admission webhook workload for Prometheus Operator. Appears in: ClusterMonitoringConfiguration Property Type Description resources *v1.ResourceRequirements Defines resource requests and limits for the prometheus-operator-admission-webhook container. topologySpreadConstraints []v1.TopologySpreadConstraint Defines a pod's topology spread constraints. 11.24. PrometheusRestrictedConfig 11.24.1. Description The PrometheusRestrictedConfig resource defines the settings for the Prometheus component that monitors user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description scrapeInterval string Configures the default interval between consecutive scrapes in case the ServiceMonitor or PodMonitor resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The value can be expressed in: seconds (for example 30s ), minutes (for example 1m ) or a mix of minutes and seconds (for example 1m30s ). The default value is 30s . evaluationInterval string Configures the default interval between rule evaluations in case the PrometheusRule resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The value can be expressed in: seconds (for example 30s ), minutes (for example 1m ) or a mix of minutes and seconds (for example 1m30s ). It only applies to PrometheusRule resources with the openshift.io/prometheus-rule-evaluation-scope=\"leaf-prometheus\" label. The default value is 30s . additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures additional Alertmanager instances that receive alerts from the Prometheus component. By default, no additional Alertmanager instances are configured. enforcedLabelLimit *uint64 Specifies a per-scrape limit on the number of labels accepted for a sample. If the number of labels exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedLabelNameLengthLimit *uint64 Specifies a per-scrape limit on the length of a label name for a sample. If the length of a label name exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedLabelValueLengthLimit *uint64 Specifies a per-scrape limit on the length of a label value for a sample. If the length of a label value exceeds this limit after metric relabeling, the entire scrape is treated as failed. The default value is 0 , which means that no limit is set. enforcedSampleLimit *uint64 Specifies a global limit on the number of scraped samples that will be accepted. 
This setting overrides the SampleLimit value set in any user-defined ServiceMonitor or PodMonitor object if the value is greater than enforcedTargetLimit . Administrators can use this setting to keep the overall number of samples under control. The default value is 0 , which means that no limit is set. enforcedTargetLimit *uint64 Specifies a global limit on the number of scraped targets. This setting overrides the TargetLimit value set in any user-defined ServiceMonitor or PodMonitor object if the value is greater than enforcedSampleLimit . Administrators can use this setting to keep the overall number of targets under control. The default value is 0 . externalLabels map[string]string Defines labels to be added to any time series or alerts when communicating with external systems such as federation, remote storage, and Alertmanager. By default, no labels are added. logLevel string Defines the log level setting for Prometheus. The possible values are error , warn , info , and debug . The default setting is info . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. queryLogFile string Specifies the file to which PromQL queries are logged. This setting can be either a filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus , or a full path to a location where an emptyDir volume will be mounted and the queries saved. Writing to /dev/stderr , /dev/stdout or /dev/null is supported, but writing to any other /dev/ path is not supported. Relative paths are also not supported. By default, PromQL queries are not logged. remoteWrite [] RemoteWriteSpec Defines the remote write configuration, including URL, authentication, and relabeling settings. resources *v1.ResourceRequirements Defines resource requests and limits for the Prometheus container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s= seconds,m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 24h . retentionSize string Defines the maximum amount of disk space used by data blocks plus the write-ahead log (WAL). Supported values are B , KB , KiB , MB , MiB , GB , GiB , TB , TiB , PB , PiB , EB , and EiB . The default value is nil . tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Prometheus. Use this setting to configure the storage class and size of a volume. 11.25. RemoteWriteSpec 11.25.1. Description The RemoteWriteSpec resource defines the settings for remote write storage. 11.25.2. Required url Appears in: PrometheusK8sConfig , PrometheusRestrictedConfig Property Type Description authorization *monv1.SafeAuthorization Defines the authorization settings for remote write storage. basicAuth *monv1.BasicAuth Defines Basic authentication settings for the remote write endpoint URL. bearerTokenFile string Defines the file that contains the bearer token for the remote write endpoint. However, because you cannot mount secrets in a pod, in practice you can only reference the token of the service account. headers map[string]string Specifies the custom HTTP headers to be sent along with each remote write request. Headers set by Prometheus cannot be overwritten. 
metadataConfig *monv1.MetadataConfig Defines settings for sending series metadata to remote write storage. name string Defines the name of the remote write queue. This name is used in metrics and logging to differentiate queues. If specified, this name must be unique. oauth2 *monv1.OAuth2 Defines OAuth2 authentication settings for the remote write endpoint. proxyUrl string Defines an optional proxy URL. If the cluster-wide proxy is enabled, it replaces the proxyUrl setting. The cluster-wide proxy supports both HTTP and HTTPS proxies, with HTTPS taking precedence. queueConfig *monv1.QueueConfig Allows tuning configuration for remote write queue parameters. remoteTimeout string Defines the timeout value for requests to the remote write endpoint. sendExemplars *bool Enables sending exemplars via remote write. When enabled, this setting configures Prometheus to store a maximum of 100,000 exemplars in memory. This setting only applies to user-defined monitoring and is not applicable to core platform monitoring. sigv4 *monv1.Sigv4 Defines AWS Signature Version 4 authentication settings. tlsConfig *monv1.SafeTLSConfig Defines TLS authentication settings for the remote write endpoint. url string Defines the URL of the remote write endpoint to which samples will be sent. writeRelabelConfigs []monv1.RelabelConfig Defines the list of remote write relabel configurations. 11.26. TLSConfig 11.26.1. Description The TLSConfig resource configures the settings for TLS connections. 11.26.2. Required insecureSkipVerify Appears in: AdditionalAlertmanagerConfig Property Type Description ca *v1.SecretKeySelector Defines the secret key reference containing the Certificate Authority (CA) to use for the remote host. cert *v1.SecretKeySelector Defines the secret key reference containing the public certificate to use for the remote host. key *v1.SecretKeySelector Defines the secret key reference containing the private key to use for the remote host. serverName string Used to verify the hostname on the returned certificate. insecureSkipVerify bool When set to true , disables the verification of the remote host's certificate and name. 11.27. TelemeterClientConfig 11.27.1. Description TelemeterClientConfig defines settings for the Telemeter Client component. 11.27.2. Required nodeSelector tolerations Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the TelemeterClient container. tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. 11.28. ThanosQuerierConfig 11.28.1. Description The ThanosQuerierConfig resource defines settings for the Thanos Querier component. Appears in: ClusterMonitoringConfiguration Property Type Description enableRequestLogging bool A Boolean flag that enables or disables request logging. The default value is false . logLevel string Defines the log level setting for Thanos Querier. The possible values are error , warn , info , and debug . The default value is info . enableCORS bool A Boolean flag that enables setting CORS headers. The headers allow access from any origin. The default value is false . nodeSelector map[string]string Defines the nodes on which the pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Thanos Querier container. 
tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. 11.29. ThanosRulerConfig 11.29.1. Description The ThanosRulerConfig resource defines configuration for the Thanos Ruler instance for user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description additionalAlertmanagerConfigs [] AdditionalAlertmanagerConfig Configures how the Thanos Ruler component communicates with additional Alertmanager instances. The default value is nil . evaluationInterval string Configures the default interval between Prometheus rule evaluations in case the PrometheusRule resource does not specify any value. The interval must be set between 5 seconds and 5 minutes. The value can be expressed in: seconds (for example 30s ), minutes (for example 1m ) or a mix of minutes and seconds (for example 1m30s ). It applies to PrometheusRule resources without the openshift.io/prometheus-rule-evaluation-scope=\"leaf-prometheus\" label. The default value is 15s . logLevel string Defines the log level setting for Thanos Ruler. The possible values are error , warn , info , and debug . The default value is info . nodeSelector map[string]string Defines the nodes on which the Pods are scheduled. resources *v1.ResourceRequirements Defines resource requests and limits for the Alertmanager container. retention string Defines the duration for which Prometheus retains data. This definition must be specified using the following regular expression pattern: [0-9]+(ms|s|m|h|d|w|y) (ms = milliseconds, s= seconds,m = minutes, h = hours, d = days, w = weeks, y = years). The default value is 15d . tolerations []v1.Toleration Defines tolerations for the pods. topologySpreadConstraints []v1.TopologySpreadConstraint Defines the pod's topology spread constraints. volumeClaimTemplate *monv1.EmbeddedPersistentVolumeClaim Defines persistent storage for Thanos Ruler. Use this setting to configure the storage class and size of a volume. 11.30. UserWorkloadConfig 11.30.1. Description The UserWorkloadConfig resource defines settings for the monitoring of user-defined projects. Appears in: ClusterMonitoringConfiguration Property Type Description rulesWithoutLabelEnforcementAllowed *bool A Boolean flag that enables or disables the ability to deploy user-defined PrometheusRules objects for which the namespace label is not enforced to the namespace of the object. Such objects should be created in a namespace configured under the namespacesWithoutLabelEnforcement property of the UserWorkloadConfiguration resource. The default value is true . 11.31. UserWorkloadConfiguration 11.31.1. Description The UserWorkloadConfiguration resource defines the settings responsible for user-defined projects in the user-workload-monitoring-config config map in the openshift-user-workload-monitoring namespace. You can only enable UserWorkloadConfiguration after you have set enableUserWorkload to true in the cluster-monitoring-config config map under the openshift-monitoring namespace. Property Type Description alertmanager * AlertmanagerUserWorkloadConfig Defines the settings for the Alertmanager component in user workload monitoring. prometheus * PrometheusRestrictedConfig Defines the settings for the Prometheus component in user workload monitoring. prometheusOperator * PrometheusOperatorConfig Defines the settings for the Prometheus Operator component in user workload monitoring. 
thanosRuler * ThanosRulerConfig Defines the settings for the Thanos Ruler component in user workload monitoring. namespacesWithoutLabelEnforcement []string Defines the list of namespaces for which Prometheus and Thanos Ruler in user-defined monitoring do not enforce the namespace label value in PrometheusRule objects. The namespacesWithoutLabelEnforcement property allows users to define recording and alerting rules that can query across multiple projects (not limited to user-defined projects) instead of deploying identical PrometheusRule objects in each user project. To make the resulting alerts and metrics visible to project users, the query expressions should return a namespace label with a non-empty value. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/monitoring/config-map-reference-for-the-cluster-monitoring-operator |
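A minimal sketch of how the options in this reference are supplied in practice, using the user-workload-monitoring-config config map described above; the retention and log level values are illustrative only and assume that enableUserWorkload has already been set to true in the cluster-monitoring-config config map.
# Apply a small user workload monitoring configuration
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      logLevel: info
      retention: 24h
    alertmanager:
      enabled: true
      enableAlertmanagerConfig: true
EOF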
Chapter 2. Installation | Chapter 2. Installation You can install Red Hat Single Sign-On by downloading a ZIP file and unzipping it, or by using an RPM. This chapter reviews system requirements as well as the directory structure. 2.1. System Requirements These are the requirements to run the Red Hat Single Sign-On authentication server: Can run on any operating system that runs Java Java 8 JDK zip or gzip and tar At least 512M of RAM At least 1G of disk space A shared external database like PostgreSQL, MySQL, Oracle, etc. Red Hat Single Sign-On requires an external shared database if you want to run in a cluster. Please see the database configuration section of this guide for more information. Network multicast support on your machine if you want to run in a cluster. Red Hat Single Sign-On can be clustered without multicast, but this requires additional configuration changes. Please see the clustering section of this guide for more information. On Linux, it is recommended to use /dev/urandom as a source of random data to prevent Red Hat Single Sign-On from hanging due to a lack of available entropy, unless /dev/random usage is mandated by your security policy. To achieve that on Oracle JDK 8 and OpenJDK 8, set the java.security.egd system property on startup to file:/dev/urandom . 2.2. Installing RH-SSO from a ZIP File The Red Hat Single Sign-On server download ZIP file contains the scripts and binaries to run the Red Hat Single Sign-On server. You install the 7.4.0.GA server first, then the 7.4.10.GA server patch. Procedure Go to the Red Hat customer portal . Download the Red Hat Single Sign-On 7.4.0.GA server. Unpack the ZIP file using the appropriate unzip utility, such as unzip or Expand-Archive. Return to the Red Hat customer portal . Click the Patches tab. Download the Red Hat Single Sign-On 7.4.10.GA server patch. Place the downloaded file in a directory of your choice. Go to the bin directory of JBoss EAP. Start the JBoss EAP command line interface. Linux/Unix $ jboss-cli.sh Windows > jboss-cli.bat Apply the patch. $ patch apply <path-to-zip>/rh-sso-7.4.10-patch.zip Additional resources For more details on applying patches, see Patching a ZIP/Installer Installation . 2.3. Installing RH-SSO from an RPM Note With Red Hat Enterprise Linux 7 and 8, the term channel was replaced with the term repository. In these instructions only the term repository is used. You must subscribe to both the JBoss EAP 7.3 and RH-SSO 7.4 repositories before you can install RH-SSO from an RPM. Note You cannot continue to receive upgrades to EAP RPMs while no longer receiving updates for RH-SSO. 2.3.1. Subscribing to the JBoss EAP 7.3 Repository Prerequisites Ensure that your Red Hat Enterprise Linux system is registered to your account using Red Hat Subscription Manager. For more information see the Red Hat Subscription Management documentation . If you are already subscribed to another JBoss EAP repository, you must unsubscribe from that repository first. For Red Hat Enterprise Linux 6, 7: Using Red Hat Subscription Manager, subscribe to the JBoss EAP 7.3 repository using the following command. Replace <RHEL_VERSION> with either 6 or 7 depending on your Red Hat Enterprise Linux version.
subscription-manager repos --enable=jb-eap-7.3-for-rhel-<RHEL_VERSION>-server-rpms --enable=rhel-<RHEL_VERSION>-server-rpms For Red Hat Enterprise Linux 8: Using Red Hat Subscription Manager, subscribe to the JBoss EAP 7.3 repository using the following command: subscription-manager repos --enable=jb-eap-7.3-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms 2.3.2. Subscribing to the RH-SSO 7.4 Repository and Installing RH-SSO 7.4 Prerequisites Ensure that your Red Hat Enterprise Linux system is registered to your account using Red Hat Subscription Manager. For more information see the Red Hat Subscription Management documentation . Ensure that you have already subscribed to the JBoss EAP 7.3 repository. For more information see Subscribing to the JBoss EAP 7.3 repository . To subscribe to the RH-SSO 7.4 repository and install RH-SSO 7.4, complete the following steps: For Red Hat Enterprise Linux 6, 7: Using Red Hat Subscription Manager, subscribe to the RH-SSO 7.4 repository using the following command. Replace <RHEL_VERSION> with either 6 or 7 depending on your Red Hat Enterprise Linux version. subscription-manager repos --enable=rh-sso-7.4-for-rhel-<RHEL-VERSION>-server-rpms For Red Hat Enterprise Linux 8: Using Red Hat Subscription Manager, subscribe to the RH-SSO 7.4 repository using the following command: subscription-manager repos --enable=rh-sso-7.4-for-rhel-8-x86_64-rpms For Red Hat Enterprise Linux 6, 7: Install RH-SSO from your subscribed RH-SSO 7.4 repository using the following command: For Red Hat Enterprise Linux 8: Install RH-SSO from your subscribed RH-SSO 7.4 repository using the following command: Your installation is complete. The default RH-SSO_HOME path for the RPM installation is /opt/rh/rh-sso7/root/usr/share/keycloak. Additional resources For details on installing the 7.4.10.GA patch for Red Hat Single Sign-On, see RPM patching . 2.4. Distribution Directory Structure This chapter walks you through the directory structure of the server distribution. Let's examine the purpose of some of the directories: bin/ This contains various scripts to either boot the server or perform some other management action on the server. domain/ This contains configuration files and working directory when running Red Hat Single Sign-On in domain mode . modules/ These are all the Java libraries used by the server. standalone/ This contains configuration files and working directory when running Red Hat Single Sign-On in standalone mode . standalone/deployments/ If you are writing extensions to Red Hat Single Sign-On, you can put your extensions here. See the Server Developer Guide for more information on this. themes/ This directory contains all the html, style sheets, JavaScript files, and images used to display any UI screen displayed by the server. Here you can modify an existing theme or create your own. See the Server Developer Guide for more information on this. | [
"jboss-cli.sh",
"> jboss-cli.bat",
"patch apply <path-to-zip>/rh-sso-7.4.10-patch.zip",
"subscription-manager repos --enable=jb-eap-7.3-for-rhel-<RHEL_VERSION>-server-rpms --enable=rhel-<RHEL_VERSION>-server-rpms",
"subscription-manager repos --enable=jb-eap-7.3-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"subscription-manager repos --enable=rh-sso-7.4-for-rhel-<RHEL-VERSION>-server-rpms",
"subscription-manager repos --enable=rh-sso-7.4-for-rhel-8-x86_64-rpms",
"groupinstall rh-sso7",
"dnf groupinstall rh-sso7"
] | https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/server_installation_and_configuration_guide/installation |
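The chapter above presents the ZIP installation, patching, and /dev/urandom steps separately; the following is a minimal, non-authoritative sketch that strings them together on Linux. The download location, the archive file name rh-sso-7.4.0.zip, the install directory, the extracted rh-sso-7.4 directory name, and the use of bin/standalone.conf as the JVM options file are assumptions for illustration; only the patch file name and the jboss-cli patch apply step come from the chapter itself.

# Assumed locations; adjust to where you downloaded the archives and want to install.
ZIPS=~/Downloads
INSTALL_DIR=/opt

# 1. Unpack the 7.4.0.GA server ZIP (archive and directory names are assumptions).
unzip -q "$ZIPS/rh-sso-7.4.0.zip" -d "$INSTALL_DIR"
RHSSO_HOME="$INSTALL_DIR/rh-sso-7.4"

# 2. Apply the 7.4.10.GA cumulative patch non-interactively through the management CLI.
"$RHSSO_HOME/bin/jboss-cli.sh" "patch apply $ZIPS/rh-sso-7.4.10-patch.zip"

# 3. Prefer /dev/urandom as the entropy source, per the system requirements note.
echo 'JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/urandom"' >> "$RHSSO_HOME/bin/standalone.conf"

# 4. Start the standalone server.
"$RHSSO_HOME/bin/standalone.sh"

For an RPM installation the same JVM option can be appended to the equivalent configuration file under the RPM's RH-SSO_HOME instead; the exact path depends on your packaging and is not shown here.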
Red Hat OpenShift Cluster Manager | Red Hat OpenShift Cluster Manager OpenShift Dedicated 4 Configuring OpenShift Dedicated clusters using OpenShift Cluster Manager Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/red_hat_openshift_cluster_manager/index |
Chapter 3. Logging into automation controller after installation | Chapter 3. Logging into automation controller after installation After you install automation controller, you must log in. Procedure With the login information provided after your installation completed, open a web browser and log in to the automation controller by navigating to its server URL at: https://<CONTROLLER_SERVER_NAME>/ Use the credentials specified during the installation process to log in: The default username is admin . The password for admin is the value specified. Click the More Actions icon ... next to the desired user. Click Edit . Edit the required details and click Save . | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/automation_controller_user_guide/controller-login
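As a quick way to confirm that the controller is reachable and that the admin credentials work before using the web UI, you can query its REST API from a shell. This is an illustrative sketch only: the /api/v2/ping/ and /api/v2/me/ endpoints, the -k flag for an installation that still uses a self-signed certificate, and the placeholder hostname and password are assumptions, not taken from this chapter.

# Placeholder values; substitute your controller hostname and the admin password set at install time.
CONTROLLER=https://<CONTROLLER_SERVER_NAME>
ADMIN_PASS='<password-specified-at-install>'

# Unauthenticated health check; expected to return JSON that includes the controller version.
curl -k "$CONTROLLER/api/v2/ping/"

# Authenticated check; confirms the same admin credentials used for the web login are accepted by the API.
curl -k -u "admin:$ADMIN_PASS" "$CONTROLLER/api/v2/me/"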
13.5. Hot Rod Headers | 13.5. Hot Rod Headers 13.5.1. Hot Rod Header Data Types All keys and values used for Hot Rod in Red Hat JBoss Data Grid are stored as byte arrays. Certain header values, such as those for REST and Memcached, are stored using the following data types instead: Table 13.1. Header Data Types Data Type Size Details vInt Between 1-5 bytes. Unsigned variable length integer values. vLong Between 1-9 bytes. Unsigned variable length long values. string - Strings are always represented using UTF-8 encoding. 13.5.2. Request Header When using Hot Rod to access Red Hat JBoss Data Grid, the contents of the request header consist of the following: Table 13.2. Request Header Fields Field Name Data Type/Size Details Magic 1 byte Indicates whether the header is a request header or response header. Message ID vLong Contains the message ID. Responses use this unique ID when responding to a request. This allows Hot Rod clients to implement the protocol in an asynchronous manner. Version 1 byte Contains the Hot Rod server version. Opcode 1 byte Contains the relevant operation code. In a request header, opcode can only contain the request operation codes. Cache Name Length vInt Stores the length of the cache name. If Cache Name Length is set to 0 and no value is supplied for Cache Name, the operation interacts with the default cache. Cache Name string Stores the name of the target cache for the specified operation. This name must match the name of a predefined cache in the cache configuration file. Flags vInt Contains a numeric value of variable length that represents flags passed to the system. Each bit represents a flag, except the most significant bit, which is used to determine whether more bytes must be read. Using a bit to represent each flag facilitates the representation of flag combinations in a condensed manner. Client Intelligence 1 byte Contains a value that indicates the client capabilities to the server. Topology ID vInt Contains the last known view ID in the client. Basic clients supply the value 0 for this field. Clients that support topology or hash information supply the value 0 until the server responds with the current view ID, which is subsequently used until a new view ID is returned by the server to replace the current view ID. Transaction Type 1 byte Contains a value that represents one of two known transaction types. Currently, the only supported value is 0 . Transaction ID byte-array Contains a byte array that uniquely identifies the transaction associated with the call. The transaction type determines the length of this byte array. If the value for Transaction Type was set to 0 , no Transaction ID is present. 13.5.3. Response Header When using Hot Rod to access Red Hat JBoss Data Grid, the contents of the response header consist of the following: Table 13.3. Response Header Fields Field Name Data Type Details Magic 1 byte Indicates whether the header is a request or response header. Message ID vLong Contains the message ID. This unique ID is used to pair the response with the original request. This allows Hot Rod clients to implement the protocol in an asynchronous manner. Opcode 1 byte Contains the relevant operation code. In a response header, opcode can only contain the response operation codes. Status 1 byte Contains a code that represents the status of the response. Topology Change Marker 1 byte Contains a marker byte that indicates whether the response is included in the topology change information. 13.5.4.
Topology Change Headers When using Hot Rod to access Red Hat JBoss Data Grid, response headers respond to changes in the cluster or view formation by looking for clients that can distinguish between different topologies or hash distributions. The Hot Rod server compares the current topology ID and the topology ID sent by the client and, if the two differ, it returns a new topology ID . 13.5.4.1. Topology Change Marker Values The following is a list of valid values for the Topology Change Marker field in a response header: Table 13.4. Topology Change Marker Field Values Value Details 0 No topology change information is added. 1 Topology change information is added. 13.5.4.2. Topology Change Headers for Topology-Aware Clients The response header sent to topology-aware clients when a topology change is returned by the server includes the following elements: Table 13.5. Topology Change Header Fields Response Header Fields Data Type/Size Details Response Header with Topology Change Marker - - Topology ID vInt - Num Servers in Topology vInt Contains the number of Hot Rod servers running in the cluster. This value can be a subset of the entire cluster if only some nodes are running Hot Rod servers. mX: Host/IP Length vInt Contains the length of the hostname or IP address of an individual cluster member. Variable length allows this element to include hostnames, IPv4 and IPv6 addresses. mX: Host/IP Address string Contains the hostname or IP address of an individual cluster member. The Hot Rod client uses this information to access the individual cluster member. mX: Port Unsigned Short. 2 bytes Contains the port used by Hot Rod clients to communicate with the cluster member. The three entries with the prefix mX , are repeated for each server in the topology. The first server in the topology's information fields will be prefixed with m1 and the numerical value is incremented by one for each additional server till the value of X equals the number of servers specified in the num servers in topology field. 13.5.4.3. Topology Change Headers for Hash Distribution-Aware Clients The response header sent to clients when a topology change is returned by the server includes the following elements: Table 13.6. Topology Change Header Fields Field Data Type/Size Details Response Header with Topology Change Marker - - Topology ID vInt - Number Key Owners Unsigned short. 2 bytes. Contains the number of globally configured copies for each distributed key. Contains the value 0 if distribution is not configured on the cache. Hash Function Version 1 byte Contains a pointer to the hash function in use. Contains the value 0 if distribution is not configured on the cache. Hash Space Size vInt Contains the modulus used by JBoss Data Grid for all module arithmetic related to hash code generation. Clients use this information to apply the correct hash calculations to the keys. Contains the value 0 if distribution is not configured on the cache. Number servers in topology vInt Contains the number of Hot Rod servers running in the cluster. This value can be a subset of the entire cluster if only some nodes are running Hot Rod servers. This value also represents the number of host to port pairings included in the header.
Number Virtual Nodes Owners vInt Contains the number of configured virtual nodes. Contains the value 0 if no virtual nodes are configured or if distribution is not configured on the cache. mX: Host/IP Length vInt Contains the length of the hostname or IP address of an individual cluster member. Variable length allows this element to include hostnames, IPv4 and IPv6 addresses. mX: Host/IP Address string Contains the hostname or IP address of an individual cluster member. The Hot Rod client uses this information to access the individual cluster member. mX: Port Unsigned short. 2 bytes. Contains the port used by Hot Rod clients to communicate with the cluster member. mX: Hashcode 4 bytes. The four entries with the prefix mX are repeated for each server in the topology. The first server in the topology's information fields will be prefixed with m1 and the numerical value is incremented by one for each additional server till the value of X equals the number of servers specified in the num servers in topology field. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-hot_rod_headers
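The vInt and vLong fields in the tables above are unsigned variable-length integers. They are commonly implemented with the protobuf-style varint scheme: 7 data bits per byte, least-significant group first, with the high bit of each byte acting as a "more bytes follow" flag, which is consistent with the 1-5 byte and 1-9 byte sizes listed in Table 13.1. The small shell function below is an illustrative sketch of that encoding under this assumption, not Hot Rod client code.

# Print the variable-length encoding of an unsigned integer as space-separated hex bytes.
encode_varint() {
  local value=$1 byte
  while (( value >= 0x80 )); do
    byte=$(( (value & 0x7F) | 0x80 ))   # low 7 bits, continuation bit set
    printf '%02x ' "$byte"
    (( value >>= 7 ))
  done
  printf '%02x\n' "$value"              # final byte, continuation bit clear
}

encode_varint 7      # -> 07       (fits in one byte)
encode_varint 300    # -> ac 02    (two bytes)
encode_varint 128    # -> 80 01

Decoding reverses the process: bytes are read until one arrives with the high bit clear, accumulating 7 bits of the value at a time.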
Chapter 4. Telco RAN DU reference design specifications | Chapter 4. Telco RAN DU reference design specifications The telco RAN DU reference design specifications (RDS) describes the configuration for clusters running on commodity hardware to host 5G workloads in the Radio Access Network (RAN). It captures the recommended, tested, and supported configurations to get reliable and repeatable performance for a cluster running the telco RAN DU profile. 4.1. Reference design specifications for telco RAN DU 5G deployments Red Hat and certified partners offer deep technical expertise and support for networking and operational capabilities required to run telco applications on OpenShift Container Platform 4.18 clusters. Red Hat's telco partners require a well-integrated, well-tested, and stable environment that can be replicated at scale for enterprise 5G solutions. The telco core and RAN DU reference design specifications (RDS) outline the recommended solution architecture based on a specific version of OpenShift Container Platform. Each RDS describes a tested and validated platform configuration for telco core and RAN DU use models. The RDS ensures an optimal experience when running your applications by defining the set of critical KPIs for telco 5G core and RAN DU. Following the RDS minimizes high severity escalations and improves application stability. 5G use cases are evolving and your workloads are continually changing. Red Hat is committed to iterating over the telco core and RAN DU RDS to support evolving requirements based on customer and partner feedback. The reference configuration includes the configuration of the far edge clusters and hub cluster components. The reference configurations in this document are deployed using a centrally managed hub cluster infrastructure as shown in the following image. Figure 4.1. Telco RAN DU deployment architecture 4.2. Reference design scope The telco core and telco RAN reference design specifications (RDS) capture the recommended, tested, and supported configurations to get reliable and repeatable performance for clusters running the telco core and telco RAN profiles. Each RDS includes the released features and supported configurations that are engineered and validated for clusters to run the individual profiles. The configurations provide a baseline OpenShift Container Platform installation that meets feature and KPI targets. Each RDS also describes expected variations for each individual configuration. Validation of each RDS includes many long duration and at-scale tests. Note The validated reference configurations are updated for each major Y-stream release of OpenShift Container Platform. Z-stream patch releases are periodically re-tested against the reference configurations. 4.3. Deviations from the reference design Deviating from the validated telco core and telco RAN DU reference design specifications (RDS) can have significant impact beyond the specific component or feature that you change. Deviations require analysis and engineering in the context of the complete solution. Important All deviations from the RDS should be analyzed and documented with clear action tracking information. Due diligence is expected from partners to understand how to bring deviations into line with the reference design. This might require partners to provide additional resources to engage with Red Hat to work towards enabling their use case to achieve a best in class outcome with the platform. 
This is critical for the supportability of the solution and ensuring alignment across Red Hat and with partners. Deviation from the RDS can have some or all of the following consequences: It can take longer to resolve issues. There is a risk of missing project service-level agreements (SLAs), project deadlines, end provider performance requirements, and so on. Unapproved deviations may require escalation at executive levels. Note Red Hat prioritizes the servicing of requests for deviations based on partner engagement priorities. 4.4. Engineering considerations for the RAN DU use model The RAN DU use model configures an OpenShift Container Platform cluster running on commodity hardware for hosting RAN distributed unit (DU) workloads. Model and system level considerations are described below. Specific limits, requirements and engineering considerations for individual components are detailed in later sections. Note For details of the RAN DU KPI test results, see the Telco RAN DU reference design specification KPI test results for OpenShift 4.18 . This information is only available to customers and partners. Workloads DU workloads are described in "Telco RAN DU application workloads". DU worker nodes are Intel 3rd Generation Xeon (IceLake) 2.20 GHz or better with host firmware tuned for maximum performance. Resources The maximum number of running pods in the system, inclusive of application workload and OpenShift Container Platform pods, is 120. Resource utilization OpenShift Container Platform resource utilization varies depending on many factors such as the following application workload characteristics: Pod count Type and frequency of probes Messaging rates on the primary or secondary CNI with kernel networking API access rate Logging rates Storage IOPS Resource utilization is measured for clusters configured as follows: The cluster is a single host with single-node OpenShift installed. The cluster runs the representative application workload described in "Reference application workload characteristics". The cluster is managed under the constraints detailed in "Hub cluster management characteristics". Components noted as "optional" in the use model configuration are not included. Note Configurations outside the scope of the RAN DU RDS that do not meet these criteria require additional analysis to determine the impact on resource utilization and ability to meet KPI targets. You might need to allocate additional cluster resources to meet these requirements. Reference application workload characteristics Uses 15 pods and 30 containers for the vRAN application including its management and control functions Uses an average of 2 ConfigMap and 4 Secret CRs per pod Uses a maximum of 10 exec probes with a frequency of not less than 10 seconds Incremental application load on the kube-apiserver is less than or equal to 10% of the cluster platform usage Note You can extract the CPU load from the platform metrics. For example: USD query=avg_over_time(pod:container_cpu_usage:sum{namespace="openshift-kube-apiserver"}[30m]) Application logs are not collected by the platform log collector Aggregate traffic on the primary CNI is less than 8 MBps Hub cluster management characteristics RHACM is the recommended cluster management solution and is configured to these limits: Use a maximum of 5 RHACM configuration policies with a compliant evaluation interval of not less than 10 minutes. Use a minimal number (up to 10) of managed cluster templates in cluster policies. Use hub-side templating.
Disable RHACM addons with the exception of the policyController and configure observability with the default configuration. The following table describes resource utilization under reference application load. Table 4.1. Resource utilization under reference application load Metric Limits Notes OpenShift platform CPU usage Less than 4000mc - 2 cores (4HT) Platform CPU is pinned to reserved cores, including both hyper-threads of each reserved core. The system is engineered to 3 CPUs (3000mc) at steady-state to allow for periodic system tasks and spikes. OpenShift Platform memory Less than 16G 4.5. Telco RAN DU application workloads Develop RAN DU applications that are subject to the following requirements and limitations. Description and limits Develop cloud-native network functions (CNFs) that conform to the latest version of Red Hat best practices for Kubernetes . Use SR-IOV for high performance networking. Use exec probes sparingly and only when no other suitable options are available. Do not use exec probes if a CNF uses CPU pinning. Use other probe implementations, for example, httpGet or tcpSocket . When you need to use exec probes, limit the exec probe frequency and quantity. The maximum number of exec probes must be kept below 10, and frequency must not be set to less than 10 seconds. Exec probes cause much higher CPU usage on management cores compared to other probe types because they require process forking. Note Startup probes require minimal resources during steady-state operation. The limitation on exec probes applies primarily to liveness and readiness probes. Note A test workload that conforms to the dimensions of the reference DU application workload described in this specification can be found at openshift-kni/du-test-workloads . 4.6. Telco RAN DU reference design components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure and deploy clusters to run RAN DU workloads. Figure 4.2. Telco RAN DU reference design components Note Ensure that additional components you include that are not specified in the telco RAN DU profile do not affect the CPU resources allocated to workload applications. Important Out of tree drivers are not supported. 5G RAN application components are not included in the RAN DU profile and must be engineered against resources (CPU) allocated to applications. 4.6.1. Host firmware tuning New in this release No reference design updates in this release Description Tune host firmware settings for optimal performance during initial cluster deployment. For more information, see "Recommended single-node OpenShift cluster configuration for vDU application workloads". Apply tuning settings in the host firmware during initial deployment. See "Managing host firmware settings with GitOps ZTP" for more information. The managed cluster host firmware settings are available on the hub cluster as individual BareMetalHost custom resources (CRs) that are created when you deploy the managed cluster with the ClusterInstance CR and GitOps ZTP. Note Create the ClusterInstance CR based on the provided reference example-sno.yaml CR. Limits and requirements You must enable Hyper-Threading in the host firmware settings Engineering considerations Tune all firmware settings for maximum performance. All settings are expected to be for maximum performance unless tuned for power savings. You can tune host firmware for power savings at the expense of performance as required. Enable secure boot. 
When secure boot is enabled, only signed kernel modules are loaded by the kernel. Out-of-tree drivers are not supported. Additional resources Managing host firmware settings with GitOps ZTP Configuring host firmware for low latency and high performance Provisioning real-time and low latency workloads 4.6.2. CPU partitioning and performance tuning New in this release No reference design updates in this release Description The RAN DU use model includes cluster performance tuning via PerformanceProfile CRs for low-latency performance. The PerformanceProfile CRs are reconciled by the Node Tuning Operator. The RAN DU use case requires the cluster to be tuned for low-latency performance. For more details about node tuning with the PerformanceProfile CR, see "Tuning nodes for low latency with the performance profile". Limits and requirements The Node Tuning Operator uses the PerformanceProfile CR to configure the cluster. You need to configure the following settings in the telco RAN DU profile PerformanceProfile CR: Set a reserved cpuset of 4 or more, equating to 4 hyper-threads (2 cores) for either of the following CPUs: Intel 3rd Generation Xeon (IceLake) 2.20 GHz or better CPUs with host firmware tuned for maximum performance AMD EPYC Zen 4 CPUs (Genoa, Bergamo, or newer) Note AMD EPYC Zen 4 CPUs (Genoa, Bergamo, or newer) are fully supported. Power consumption evaluations are ongoing. It is recommended to evaluate features, such as per-pod power management, to determine any potential impact on performance. Set the reserved cpuset to include both hyper-thread siblings for each included core. Unreserved cores are available as allocatable CPU for scheduling workloads. Ensure that hyper-thread siblings are not split across reserved and isolated cores. Ensure that reserved and isolated CPUs include all the threads for all cores in the CPU. Include Core 0 for each NUMA node in the reserved CPU set. Set the huge page size to 1G. Only pin OpenShift Container Platform pods which are by default configured as part of the management workload partition to reserved cores. Engineering considerations Meeting the full performance metrics requires use of the RT kernel. If required, you can use the non-RT kernel with corresponding impact to performance. The number of hugepages you configure depends on application workload requirements. Variation in this parameter is expected and allowed. Variation is expected in the configuration of reserved and isolated CPU sets based on selected hardware and additional components in use on the system. The variation must still meet the specified limits. Hardware without IRQ affinity support affects isolated CPUs. To ensure that pods with guaranteed whole CPU QoS have full use of allocated CPUs, all hardware in the server must support IRQ affinity. When workload partitioning is enabled by setting cpuPartitioningMode to AllNodes during deployment, you must allocate enough CPUs to support the operating system, interrupts, and OpenShift Container Platform pods in the PerformanceProfile CR. Additional resources Finding the effective IRQ affinity setting for a node 4.6.3. PTP Operator New in this release No reference design updates in this release Description Configure PTP in cluster nodes with PTPConfig CRs for the RAN DU use case with features like Grandmaster clock (T-GM) support via GPS, ordinary clock (OC), boundary clocks (T-BC), dual boundary clocks, high availability (HA), and optional fast event notification over HTTP. 
PTP ensures precise timing and reliability in the RAN environment. Limits and requirements Limited to two boundary clocks for nodes with dual NICs and HA Limited to two Westport channel NIC configurations for T-GM Engineering considerations RAN DU RDS configurations are provided for ordinary clocks, boundary clocks, grandmaster clocks, and highly available dual NIC boundary clocks. PTP fast event notifications use ConfigMap CRs to persist subscriber details. Hierarchical event subscription as described in the O-RAN specification is not supported for PTP events. Use the PTP fast events REST API v2. The PTP fast events REST API v1 is deprecated. The REST API v2 is O-RAN Release 3 compliant. 4.6.4. SR-IOV Operator New in this release No reference design updates in this release Description The SR-IOV Operator provisions and configures the SR-IOV CNI and device plugins. Both netdevice (kernel VFs) and vfio (DPDK) devices are supported and applicable to the RAN DU use models. Limits and requirements Use devices that are supported for OpenShift Container Platform. See "Supported devices". SR-IOV and IOMMU enablement in host firmware settings: The SR-IOV Network Operator automatically enables IOMMU on the kernel command line. SR-IOV VFs do not receive link state updates from the PF. If link down detection is required you must configure this at the protocol level. Engineering considerations SR-IOV interfaces with the vfio driver type are typically used to enable additional secondary networks for applications that require high throughput or low latency. Customer variation on the configuration and number of SriovNetwork and SriovNetworkNodePolicy custom resources (CRs) is expected. IOMMU kernel command line settings are applied with a MachineConfig CR at install time. This ensures that the SriovOperator CR does not cause a reboot of the node when adding them. SR-IOV support for draining nodes in parallel is not applicable in a single-node OpenShift cluster. You must include the SriovOperatorConfig CR in your deployment; the CR is not created automatically. This CR is included in the reference configuration policies which are applied during initial deployment. In scenarios where you pin or restrict workloads to specific nodes, the SR-IOV parallel node drain feature will not result in the rescheduling of pods. In these scenarios, the SR-IOV Operator disables the parallel node drain functionality. NICs which do not support firmware updates under secure boot or kernel lockdown must be pre-configured with sufficient virtual functions (VFs) to support the number of VFs needed by the application workload. For Mellanox NICs, the Mellanox vendor plugin must be disabled in the SR-IOV Network Operator. For more information, see "Configuring an SR-IOV network device". To change the MTU value of a virtual function after the pod has started, do not configure the MTU field in the SriovNetworkNodePolicy CR. Instead, configure the Network Manager or use a custom systemd script to set the MTU of the physical function to an appropriate value. For example: # ip link set dev <physical_function> mtu 9000 Additional resources Supported devices Configuring QinQ support for SR-IOV enabled workloads 4.6.5. Logging New in this release No reference design updates in this release Description Use logging to collect logs from the far edge node for remote analysis. The recommended log collector is Vector. 
Engineering considerations Handling logs beyond the infrastructure and audit logs, for example, from the application workload requires additional CPU and network bandwidth based on additional logging rate. As of OpenShift Container Platform 4.14, Vector is the reference log collector. Use of fluentd in the RAN use models is deprecated. Additional resources Logging 6.0 4.6.6. SRIOV-FEC Operator New in this release No reference design updates in this release Description SRIOV-FEC Operator is an optional 3rd party Certified Operator supporting FEC accelerator hardware. Limits and requirements Starting with FEC Operator v2.7.0: Secure boot is supported vfio drivers for PFs require the usage of a vfio-token that is injected into the pods. Applications in the pod can pass the VF token to DPDK by using EAL parameter --vfio-vf-token . Engineering considerations The SRIOV-FEC Operator uses CPU cores from the isolated CPU set. You can validate FEC readiness as part of the pre-checks for application deployment, for example, by extending the validation policy. Additional resources SRIOV-FEC Operator for Intel(R) vRAN Dedicated Accelerator manager container 4.6.7. Lifecycle Agent New in this release No reference design updates in this release Description The Lifecycle Agent provides local lifecycle management services for single-node OpenShift clusters. Limits and requirements The Lifecycle Agent is not applicable in multi-node clusters or single-node OpenShift clusters with an additional worker. The Lifecycle Agent requires a persistent volume that you create when installing the cluster. For descriptions of partition requirements, see "Configuring a shared container directory between ostree stateroots when using GitOps ZTP". Additional resources Understanding the image-based upgrade for single-node OpenShift clusters Configuring a shared container directory between ostree stateroots when using GitOps ZTP 4.6.8. Local Storage Operator New in this release No reference design updates in this release Description You can create persistent volumes that can be used as PVC resources by applications with the Local Storage Operator. The number and type of PV resources that you create depends on your requirements. Engineering considerations Create backing storage for PV CRs before creating the PV . This can be a partition, a local volume, LVM volume, or full disk. Refer to the device listing in LocalVolume CRs by the hardware path used to access each device to ensure correct allocation of disks and partitions, for example, /dev/disk/by-path/<id> . Logical names (for example, /dev/sda ) are not guaranteed to be consistent across node reboots. 4.6.9. Logical Volume Manager Storage New in this release No reference design updates in this release Description Logical Volume Manager (LVM) Storage is an optional component. It provides dynamic provisioning of both block and file storage by creating logical volumes from local devices that can be consumed as persistent volume claim (PVC) resources by applications. Volume expansion and snapshots are also possible. An example configuration is provided in the RDS with the StorageLVMCluster.yaml file. Limits and requirements In single-node OpenShift clusters, persistent storage must be provided by either LVM Storage or local storage, not both. Volume snapshots are excluded from the reference configuration. Engineering considerations LVM Storage can be used as the local storage implementation for the RAN DU use case. 
When LVM Storage is used as the storage solution, it replaces the Local Storage Operator, and the CPU required is assigned to the management partition as platform overhead. The reference configuration must include one of these storage solutions but not both. Ensure that sufficient disks or partitions are available for storage requirements. 4.6.10. Workload partitioning New in this release No reference design updates in this release Description Workload partitioning pins OpenShift Container Platform and Day 2 Operator pods that are part of the DU profile to the reserved CPU set and removes the reserved CPU from node accounting. This leaves all non-reserved CPU cores available for user workloads. Workload partitioning is enabled through a capability set in installation parameters: cpuPartitioningMode: AllNodes . The set of management partition cores is set with the reserved CPU set that you configure in the PerformanceProfile CR. Limits and requirements Namespace and Pod CRs must be annotated to allow the pod to be applied to the management partition Pods with CPU limits cannot be allocated to the partition. This is because mutation can change the pod QoS. For more information about the minimum number of CPUs that can be allocated to the management partition, see "Node Tuning Operator". Engineering considerations Workload partitioning pins all management pods to reserved cores. A sufficient number of cores must be allocated to the reserved set to account for operating system, management pods, and expected spikes in CPU use that occur when the workload starts, the node reboots, or other system events happen. Additional resources Workload partitioning 4.6.11. Cluster tuning New in this release No reference design updates in this release Description See "Cluster capabilities" for a full list of components that can be disabled by using the cluster capabilities feature. Limits and requirements Cluster capabilities are not available for installer-provisioned installation methods. Engineering considerations In clusters running OpenShift Container Platform 4.16 and later, the cluster does not automatically revert to cgroup v1 when a PerformanceProfile is applied. If workloads running on the cluster require cgroup v1, the cluster must be configured for cgroup v1. For more information, see "Enabling Linux control group version 1 (cgroup v1)". You should make this configuration as part of the initial cluster deployment. Note Support for cgroup v1 is planned for removal in OpenShift Container Platform 4.19. Clusters running cgroup v1 must transition to cgroup v2. The following table lists the required platform tuning configurations: Table 4.2. Cluster capabilities configurations Feature Description Remove optional cluster capabilities Reduce the OpenShift Container Platform footprint by disabling optional cluster Operators on single-node OpenShift clusters only. Remove all optional Operators except the Node Tuning Operator, Operator Lifecycle Manager, and the Ingress Operator. Configure cluster monitoring Configure the monitoring stack for reduced footprint by doing the following: Disable the local alertmanager and telemeter components. If you use RHACM observability, the CR must be augmented with appropriate additionalAlertManagerConfigs CRs to forward alerts to the hub cluster. Reduce the Prometheus retention period to 24h. Note The RHACM hub cluster aggregates managed cluster metrics.
Disable networking diagnostics Disable networking diagnostics for single-node OpenShift because they are not required. Configure a single OperatorHub catalog source Configure the cluster to use a single catalog source that contains only the Operators required for a RAN DU deployment. Each catalog source increases the CPU use on the cluster. Using a single CatalogSource fits within the platform CPU budget. Disable the Console Operator If the cluster was deployed with the console disabled, the Console CR ( ConsoleOperatorDisable.yaml ) is not needed. If the cluster was deployed with the console enabled, you must apply the Console CR. Additional resources Cluster capabilities 4.6.12. Machine configuration New in this release No reference design updates in this release Limits and requirements The CRI-O wipe disable MachineConfig CR assumes that images on disk are static other than during scheduled maintenance in defined maintenance windows. To ensure the images are static, do not set the pod imagePullPolicy field to Always . Table 4.3. Machine configuration options Feature Description Container Runtime Sets the container runtime to crun for all node roles. Kubelet config and container mount namespace hiding Reduces the frequency of kubelet housekeeping and eviction monitoring, which reduces CPU usage SCTP Optional configuration (enabled by default) Kdump Optional configuration (enabled by default) Enables kdump to capture debug information when a kernel panic occurs. The reference CRs that enable kdump have an increased memory reservation based on the set of drivers and kernel modules included in the reference configuration. CRI-O wipe disable Disables automatic wiping of the CRI-O image cache after unclean shutdown SR-IOV-related kernel arguments Include additional SR-IOV-related arguments in the kernel command line Set RCU Normal Systemd service that sets rcu_normal after the system finishes startup One-shot time sync Runs a one-time NTP system time synchronization job for control plane or worker nodes. Additional resources Recommended single-node OpenShift cluster configuration for vDU application workloads . 4.7. Telco RAN DU deployment components The following sections describe the various OpenShift Container Platform components and configurations that you use to configure the hub cluster with RHACM. 4.7.1. Red Hat Advanced Cluster Management New in this release No reference design updates in this release Description RHACM provides Multi Cluster Engine (MCE) installation and ongoing lifecycle management functionality for deployed clusters. You manage cluster configuration and upgrades declaratively by applying Policy custom resources (CRs) to clusters during maintenance windows. RHACM provides the following functionality: Zero touch provisioning (ZTP) of clusters using the MCE component in RHACM. Configuration, upgrades, and cluster status through the RHACM policy controller. During managed cluster installation, RHACM can apply labels to individual nodes as configured through the ClusterInstance CR. Limits and requirements A single hub cluster supports up to 3500 deployed single-node OpenShift clusters with 5 Policy CRs bound to each cluster. Engineering considerations Use RHACM policy hub-side templating to better scale cluster configuration. You can significantly reduce the number of policies by using a single group policy or small number of general group policies where the group and per-cluster values are substituted into templates. 
Cluster specific configuration: managed clusters typically have some number of configuration values that are specific to the individual cluster. These configurations should be managed using RHACM policy hub-side templating with values pulled from ConfigMap CRs based on the cluster name. To save CPU resources on managed clusters, policies that apply static configurations should be unbound from managed clusters after GitOps ZTP installation of the cluster. Additional resources Using GitOps ZTP to provision clusters at the network far edge Red Hat Advanced Cluster Management for Kubernetes 4.7.2. SiteConfig Operator New in this release No RDS updates in this release Description The SiteConfig Operator is a template-driven solution designed to provision clusters through various installation methods. It introduces the unified ClusterInstance API, which replaces the deprecated SiteConfig API. By leveraging the ClusterInstance API, the SiteConfig Operator improves cluster provisioning by providing the following: Better isolation of definitions from installation methods Unification of Git and non-Git workflows Consistent APIs across installation methods Enhanced scalability Increased flexibility with custom installation templates Valuable insights for troubleshooting deployment issues The SiteConfig Operator provides validated default installation templates to facilitate cluster deployment through both the Assisted Installer and Image-based Installer provisioning methods: Assisted Installer automates the deployment of OpenShift Container Platform clusters by leveraging predefined configurations and validated host setups. It ensures that the target infrastructure meets OpenShift Container Platform requirements. The Assisted Installer streamlines the installation process while minimizing time and complexity compared to manual setup. Image-based Installer expedites the deployment of single-node OpenShift clusters by utilizing preconfigured and validated OpenShift Container Platform seed images. Seed images are preinstalled on target hosts, enabling rapid reconfiguration and deployment. The Image-based Installer is particularly well-suited for remote or disconnected environments, because it simplifies the cluster creation process and significantly reduces deployment time. Limits and requirements A single hub cluster supports up to 3500 deployed single-node OpenShift clusters. 4.7.3. Topology Aware Lifecycle Manager New in this release No reference design updates in this release Description Topology Aware Lifecycle Manager is an Operator that runs only on the hub cluster for managing how changes like cluster upgrades, Operator upgrades, and cluster configuration are rolled out to the network. TALM supports the following features: Progressive rollout of policy updates to fleets of clusters in user configurable batches. Per-cluster actions add ztp-done labels or other user-configurable labels following configuration changes to managed clusters. Precaching of single-node OpenShift clusters images: TALM supports optional pre-caching of OpenShift, OLM Operator, and additional user images to single-node OpenShift clusters before initiating an upgrade. The precaching feature is not applicable when using the recommended image-based upgrade method for upgrading single-node OpenShift clusters. Specifying optional pre-caching configurations with PreCachingConfig CRs. Review the sample reference PreCachingConfig CR for more information. Excluding unused images with configurable filtering. 
Enabling before and after pre-caching storage space validations with configurable space-required parameters. Limits and requirements Supports concurrent cluster deployment in batches of 400 Pre-caching and backup are limited to single-node OpenShift clusters only Engineering considerations The PreCachingConfig CR is optional and does not need to be created if you only need to precache platform-related OpenShift and OLM Operator images. The PreCachingConfig CR must be applied before referencing it in the ClusterGroupUpgrade CR. Only policies with the ran.openshift.io/ztp-deploy-wave annotation are automatically applied by TALM during cluster installation. Any policy can be remediated by TALM under control of a user created ClusterGroupUpgrade CR. Additional resources Updating managed clusters with the Topology Aware Lifecycle Manager 4.7.4. GitOps Operator and GitOps ZTP New in this release No reference design updates in this release Description GitOps Operator and GitOps ZTP provide a GitOps-based infrastructure for managing cluster deployment and configuration. Cluster definitions and configurations are maintained as a declarative state in Git. You can apply ClusterInstance CRs to the hub cluster where the SiteConfig Operator renders them as installation CRs. In earlier releases, a GitOps ZTP plugin supported the generation of installation CRs from SiteConfig CRs. This plugin is now deprecated. A separate GitOps ZTP plugin is available to enable automatic wrapping of configuration CRs into policies based on the PolicyGenerator or PolicyGenTemplate CR. You can deploy and manage multiple versions of OpenShift Container Platform on managed clusters by using the baseline reference configuration CRs. You can use custom CRs alongside the baseline CRs. To maintain multiple per-version policies simultaneously, use Git to manage the versions of the source and policy CRs by using PolicyGenerator or PolicyGenTemplate CRs. Limits and requirements 300 ClusterInstance CRs per ArgoCD application. Multiple applications can be used to achieve the maximum number of clusters supported by a single hub cluster Content in the source-crs/ directory in Git overrides content provided in the ZTP plugin container, as Git takes precedence in the search path. The source-crs/ directory is specifically expected to be located in the same directory as the kustomization.yaml file, which includes PolicyGenerator or PolicyGenTemplate CRs as a generator. Alternative locations for the source-crs/ directory are not supported in this context. Engineering considerations For multi-node cluster upgrades, you can pause MachineConfigPool ( MCP ) CRs during maintenance windows by setting the paused field to true . You can increase the number of simultaneously updated nodes per MCP CR by configuring the maxUnavailable setting in the MCP CR. The MaxUnavailable field defines the percentage of nodes in the pool that can be simultaneously unavailable during a MachineConfig update. Set maxUnavailable to the maximum tolerable value. This reduces the number of reboots in a cluster during upgrades which results in shorter upgrade times. When you finally unpause the MCP CR, all the changed configurations are applied with a single reboot. During cluster installation, you can pause custom MCP CRs by setting the paused field to true and setting maxUnavailable to 100% to improve installation times. Keep reference CRs and custom CRs under different directories. 
Doing this allows you to patch and update the reference CRs by simple replacement of all directory contents without touching the custom CRs. When managing multiple versions, the following best practices are recommended: Keep all source CRs and policy creation CRs in Git repositories to ensure consistent generation of policies for each OpenShift Container Platform version based solely on the contents in Git. Keep reference source CRs in a separate directory from custom CRs. This facilitates easy update of reference CRs as required. To avoid confusion or unintentional overwrites when updating content, it is highly recommended to use unique and distinguishable names for custom CRs in the source-crs/ directory and extra manifests in Git. Extra installation manifests are referenced in the ClusterInstance CR through a ConfigMap CR. The ConfigMap CR should be stored alongside the ClusterInstance CR in Git, serving as the single source of truth for the cluster. If needed, you can use a ConfigMap generator to create the ConfigMap CR. Additional resources Preparing the GitOps ZTP site configuration repository for version independence Adding custom content to the GitOps ZTP pipeline 4.7.5. Agent-based Installer New in this release No reference design updates in this release Description The optional Agent-based Installer component provides installation capabilities without centralized infrastructure. The installation program creates an ISO image that you mount to the server. When the server boots it installs OpenShift Container Platform and supplied extra manifests. The Agent-based Installer allows you to install OpenShift Container Platform without a hub cluster. A container image registry is required for cluster installation. Limits and requirements You can supply a limited set of additional manifests at installation time. You must include MachineConfiguration CRs that are required by the RAN DU use case. Engineering considerations The Agent-based Installer provides a baseline OpenShift Container Platform installation. You install Day 2 Operators and the remainder of the RAN DU use case configurations after installation. Additional resources Installing an OpenShift Container Platform cluster with the Agent-based Installer 4.8. Telco RAN DU reference configuration CRs Use the following custom resources (CRs) to configure and deploy OpenShift Container Platform clusters with the telco RAN DU profile. Use the CRs to form the common baseline used in all the specific use models unless otherwise indicated. Note You can extract the complete set of RAN DU CRs from the ztp-site-generate container image. See Preparing the GitOps ZTP site configuration repository for more information. Additional resources Understanding the cluster-compare plugin 4.8.1. Cluster tuning reference CRs Table 4.4. Cluster tuning CRs Component Reference CR Description Optional Cluster capabilities example-sno.yaml Representative SiteConfig CR to install single-node OpenShift with the RAN DU profile No Console disable ConsoleOperatorDisable.yaml Disables the Console Operator. No Disconnected registry 09-openshift-marketplace-ns.yaml Defines a dedicated namespace for managing the OpenShift Operator Marketplace. No Disconnected registry DefaultCatsrc.yaml Configures the catalog source for the disconnected registry. No Disconnected registry DisableOLMPprof.yaml Disables performance profiling for OLM. No Disconnected registry DisconnectedICSP.yaml Configures disconnected registry image content source policy. 
No Disconnected registry OperatorHub.yaml Optional, for multi-node clusters only. Configures the OperatorHub in OpenShift, disabling all default Operator sources. Not required for single-node OpenShift installs with marketplace capability disabled. No Monitoring configuration ReduceMonitoringFootprint.yaml Reduces the monitoring footprint by disabling Alertmanager and Telemeter, and sets Prometheus retention to 24 hours No Network diagnostics disable DisableSnoNetworkDiag.yaml Configures the cluster network settings to disable built-in network troubleshooting and diagnostic features. No 4.8.2. Day 2 Operators reference CRs Table 4.5. Day 2 Operators CRs Component Reference CR Description Optional Cluster Logging Operator ClusterLogForwarder.yaml Configures log forwarding for the cluster. No Cluster Logging Operator ClusterLogNS.yaml Configures the namespace for cluster logging. No Cluster Logging Operator ClusterLogOperGroup.yaml Configures Operator group for cluster logging. No Cluster Logging Operator ClusterLogServiceAccount.yaml New in 4.18. Configures the cluster logging service account. No Cluster Logging Operator ClusterLogServiceAccountAuditBinding.yaml New in 4.18. Configures the cluster logging service account. No Cluster Logging Operator ClusterLogServiceAccountInfrastructureBinding.yaml New in 4.18. Configures the cluster logging service account. No Cluster Logging Operator ClusterLogSubscription.yaml Manages installation and updates for the Cluster Logging Operator. No Lifecycle Agent ImageBasedUpgrade.yaml Manage the image-based upgrade process in OpenShift. Yes Lifecycle Agent LcaSubscription.yaml Manages installation and updates for the LCA Operator. Yes Lifecycle Agent LcaSubscriptionNS.yaml Configures namespace for LCA subscription. Yes Lifecycle Agent LcaSubscriptionOperGroup.yaml Configures the Operator group for the LCA subscription. Yes Local Storage Operator StorageClass.yaml Defines a storage class with a Delete reclaim policy and no dynamic provisioning in the cluster. No Local Storage Operator StorageLV.yaml Configures local storage devices for the example-storage-class in the openshift-local-storage namespace, specifying device paths and filesystem type. No Local Storage Operator StorageNS.yaml Creates the namespace with annotations for workload management and the deployment wave for the Local Storage Operator. No Local Storage Operator StorageOperGroup.yaml Creates the Operator group for the Local Storage Operator. No Local Storage Operator StorageSubscription.yaml Creates the namespace for the Local Storage Operator with annotations for workload management and deployment wave. No LVM Operator LVMOperatorStatus.yaml Verifies the installation or upgrade of the LVM Storage Operator. Yes LVM Operator StorageLVMCluster.yaml Defines an LVM cluster configuration, with placeholders for storage device classes and volume group settings. Optional substitute for the Local Storage Operator. No LVM Operator StorageLVMSubscription.yaml Manages installation and updates of the LVMS Operator. Optional substitute for the Local Storage Operator. No LVM Operator StorageLVMSubscriptionNS.yaml Creates the namespace for the LVMS Operator with labels and annotations for cluster monitoring and workload management. Optional substitute for the Local Storage Operator. No LVM Operator StorageLVMSubscriptionOperGroup.yaml Defines the target namespace for the LVMS Operator. Optional substitute for the Local Storage Operator. 
No Node Tuning Operator PerformanceProfile.yaml Configures node performance settings in an OpenShift cluster, optimizing for low latency and real-time workloads. No Node Tuning Operator TunedPerformancePatch.yaml Applies performance tuning settings, including scheduler groups and service configurations for nodes in the specific namespace. No PTP fast event notifications PtpConfigBoundaryForEvent.yaml Configures PTP settings for PTP boundary clocks with additional options for event synchronization. Dependent on cluster role. No PTP fast event notifications PtpConfigForHAForEvent.yaml Configures PTP for highly available boundary clocks with additional PTP fast event settings. Dependent on cluster role. No PTP fast event notifications PtpConfigMasterForEvent.yaml Configures PTP for PTP grandmaster clocks with additional PTP fast event settings. Dependent on cluster role. No PTP fast event notifications PtpConfigSlaveForEvent.yaml Configures PTP for PTP ordinary clocks with additional PTP fast event settings. Dependent on cluster role. No PTP fast event notifications PtpOperatorConfigForEvent.yaml Overrides the default OperatorConfig. Configures the PTP Operator specifying node selection criteria for running PTP daemons in the openshift-ptp namespace. No PTP Operator PtpConfigBoundary.yaml Configures PTP settings for PTP boundary clocks. Dependent on cluster role. No PTP Operator PtpConfigDualCardGmWpc.yaml Configures PTP grandmaster clock settings for hosts that have dual NICs. Dependent on cluster role. No PTP Operator PtpConfigGmWpc.yaml Configures PTP grandmaster clock settings for hosts that have a single NIC. Dependent on cluster role. No PTP Operator PtpConfigSlave.yaml Configures PTP settings for a PTP ordinary clock. Dependent on cluster role. No PTP Operator PtpOperatorConfig.yaml Configures the PTP Operator settings, specifying node selection criteria for running PTP daemons in the openshift-ptp namespace. No PTP Operator PtpSubscription.yaml Manages installation and updates of the PTP Operator in the openshift-ptp namespace. No PTP Operator PtpSubscriptionNS.yaml Configures the namespace for the PTP Operator. No PTP Operator PtpSubscriptionOperGroup.yaml Configures the Operator group for the PTP Operator. No PTP Operator (high availability) PtpConfigBoundary.yaml Configures PTP settings for highly available PTP boundary clocks. No PTP Operator (high availability) PtpConfigForHA.yaml Configures PTP settings for highly available PTP boundary clocks. No SR-IOV FEC Operator AcceleratorsNS.yaml Configures namespace for the VRAN Acceleration Operator. Optional part of application workload. Yes SR-IOV FEC Operator AcceleratorsOperGroup.yaml Configures the Operator group for the VRAN Acceleration Operator. Optional part of application workload. Yes SR-IOV FEC Operator AcceleratorsSubscription.yaml Manages installation and updates for the VRAN Acceleration Operator. Optional part of application workload. Yes SR-IOV FEC Operator SriovFecClusterConfig.yaml Configures SR-IOV FPGA Ethernet Controller (FEC) settings for nodes, specifying drivers, VF amount, and node selection. Yes SR-IOV Operator SriovNetwork.yaml Defines an SR-IOV network configuration, with placeholders for various network settings. No SR-IOV Operator SriovNetworkNodePolicy.yaml Configures SR-IOV network settings for specific nodes, including device type, RDMA support, physical function names, and the number of virtual functions. 
No SR-IOV Operator SriovOperatorConfig.yaml Configures SR-IOV Network Operator settings, including node selection, injector, and webhook options. No SR-IOV Operator SriovOperatorConfigForSNO.yaml Configures the SR-IOV Network Operator settings for single-node OpenShift, including node selection, injector, webhook options, and disabling node drain, in the openshift-sriov-network-operator namespace. No SR-IOV Operator SriovSubscription.yaml Manages the installation and updates of the SR-IOV Network Operator. No SR-IOV Operator SriovSubscriptionNS.yaml Creates the namespace for the SR-IOV Network Operator with specific annotations for workload management and deployment waves. No SR-IOV Operator SriovSubscriptionOperGroup.yaml Defines the target namespace for the SR-IOV Network Operators, enabling their management and deployment within this namespace. No 4.8.3. Machine configuration reference CRs Table 4.6. Machine configuration CRs Component Reference CR Description Optional Container runtime (crun) enable-crun-master.yaml Configures the container runtime (crun) for control plane nodes. No Container runtime (crun) enable-crun-worker.yaml Configures the container runtime (crun) for worker nodes. No CRI-O wipe disable 99-crio-disable-wipe-master.yaml Disables automatic CRI-O cache wipe following a reboot on control plane nodes. No CRI-O wipe disable 99-crio-disable-wipe-worker.yaml Disables automatic CRI-O cache wipe following a reboot on worker nodes. No Kdump enable 06-kdump-master.yaml Configures kdump crash reporting on master nodes. No Kdump enable 06-kdump-worker.yaml Configures kdump crash reporting on worker nodes. No Kubelet configuration and container mount hiding 01-container-mount-ns-and-kubelet-conf-master.yaml Configures a mount namespace for sharing container-specific mounts between kubelet and CRI-O on control plane nodes. No Kubelet configuration and container mount hiding 01-container-mount-ns-and-kubelet-conf-worker.yaml Configures a mount namespace for sharing container-specific mounts between kubelet and CRI-O on worker nodes. No One-shot time sync 99-sync-time-once-master.yaml Synchronizes time once on master nodes. No One-shot time sync 99-sync-time-once-worker.yaml Synchronizes time once on worker nodes. No SCTP 03-sctp-machine-config-master.yaml Loads the SCTP kernel module on master nodes. Yes SCTP 03-sctp-machine-config-worker.yaml Loads the SCTP kernel module on worker nodes. Yes Set RCU normal 08-set-rcu-normal-master.yaml Disables rcu_expedited by setting rcu_normal after the control plane node has booted. No Set RCU normal 08-set-rcu-normal-worker.yaml Disables rcu_expedited by setting rcu_normal after the worker node has booted. No SRIOV-related kernel arguments 07-sriov-related-kernel-args-master.yaml Enables SR-IOV support on master nodes. No 4.9. Comparing a cluster with the telco RAN DU reference configuration After you deploy a telco RAN DU cluster, you can use the cluster-compare plugin to assess the cluster's compliance with the telco RAN DU reference design specifications (RDS). The cluster-compare plugin is an OpenShift CLI ( oc ) plugin. The plugin uses a telco RAN DU reference configuration to validate the cluster with the telco RAN DU custom resources (CRs). The plugin-specific reference configuration for telco RAN DU is packaged in a container image with the telco RAN DU CRs. For further information about the cluster-compare plugin, see "Understanding the cluster-compare plugin".
Prerequisites You have access to the cluster as a user with the cluster-admin role. You have credentials to access the registry.redhat.io container image registry. You installed the cluster-compare plugin. Procedure Login to the container image registry with your credentials by running the following command: USD podman login registry.redhat.io Extract the content from the ztp-site-generate-rhel8 container image by running the following commands:: USD podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.18 USD mkdir -p ./out USD podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.18 extract /home/ztp --tar | tar x -C ./out Compare the configuration for your cluster to the reference configuration by running the following command: USD oc cluster-compare -r out/reference/metadata.yaml Example output ... ********************************** Cluster CR: config.openshift.io/v1_OperatorHub_cluster 1 Reference File: required/other/operator-hub.yaml 2 Diff Output: diff -u -N /tmp/MERGED-2801470219/config-openshift-io-v1_operatorhub_cluster /tmp/LIVE-2569768241/config-openshift-io-v1_operatorhub_cluster --- /tmp/MERGED-2801470219/config-openshift-io-v1_operatorhub_cluster 2024-12-12 14:13:22.898756462 +0000 +++ /tmp/LIVE-2569768241/config-openshift-io-v1_operatorhub_cluster 2024-12-12 14:13:22.898756462 +0000 @@ -1,6 +1,6 @@ apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: + annotations: 3 + include.release.openshift.io/hypershift: "true" name: cluster -spec: - disableAllDefaultSources: true ********************************** Summary 4 CRs with diffs: 11/12 5 CRs in reference missing from the cluster: 40 6 optional-image-registry: image-registry: Missing CRs: 7 - optional/image-registry/ImageRegistryPV.yaml optional-ptp-config: ptp-config: One of the following is required: - optional/ptp-config/PtpConfigBoundary.yaml - optional/ptp-config/PtpConfigGmWpc.yaml - optional/ptp-config/PtpConfigDualCardGmWpc.yaml - optional/ptp-config/PtpConfigForHA.yaml - optional/ptp-config/PtpConfigMaster.yaml - optional/ptp-config/PtpConfigSlave.yaml - optional/ptp-config/PtpConfigSlaveForEvent.yaml - optional/ptp-config/PtpConfigForHAForEvent.yaml - optional/ptp-config/PtpConfigMasterForEvent.yaml - optional/ptp-config/PtpConfigBoundaryForEvent.yaml ptp-operator-config: One of the following is required: - optional/ptp-config/PtpOperatorConfig.yaml - optional/ptp-config/PtpOperatorConfigForEvent.yaml optional-storage: storage: Missing CRs: - optional/local-storage-operator/StorageLV.yaml ... No CRs are unmatched to reference CRs 8 Metadata Hash: 09650c31212be9a44b99315ec14d2e7715ee194a5d68fb6d24f65fd5ddbe3c3c 9 No patched CRs 10 1 1 The CR under comparison. The plugin displays each CR with a difference from the corresponding template. 2 The template matching with the CR for comparison. 3 The output in Linux diff format shows the difference between the template and the cluster CR. 4 After the plugin reports the line diffs for each CR, the summary of differences are reported. 5 The number of CRs in the comparison with differences from the corresponding templates. 6 The number of CRs represented in the reference configuration, but missing from the live cluster. 7 The list of CRs represented in the reference configuration, but missing from the live cluster. 8 The CRs that did not match to a corresponding template in the reference configuration. 9 The metadata hash identifies the reference configuration. 10 The list of patched CRs. 4.10. 
Telco RAN DU 4.18 validated software components The Red Hat telco RAN DU 4.18 solution has been validated using the following Red Hat software products for OpenShift Container Platform managed clusters. Table 4.7. Telco RAN DU managed cluster validated software components Component Software version Managed cluster version 4.18 Cluster Logging Operator 6.1 1 Local Storage Operator 4.18 OpenShift API for Data Protection (OADP) 1.4 PTP Operator 4.18 SR-IOV Operator 4.18 SRIOV-FEC Operator 2.10 Lifecycle Agent 4.18 [1] This table will be updated when the aligned Cluster Logging Operator version 6.2 is released. 4.11. Telco RAN DU 4.18 hub cluster validated software components The Red Hat telco RAN 4.18 solution has been validated using the following Red Hat software products for OpenShift Container Platform hub clusters. Table 4.8. Telco hub cluster validated software components Component Software version Hub cluster version 4.18 Red Hat Advanced Cluster Management (RHACM) 2.12 1 Red Hat OpenShift GitOps 1.14 GitOps ZTP site generate plugins 4.18 Topology Aware Lifecycle Manager (TALM) 4.18 [1] This table will be updated when the aligned RHACM version 2.13 is released. | [
"query=avg_over_time(pod:container_cpu_usage:sum{namespace=\"openshift-kube-apiserver\"}[30m])",
"ip link set dev <physical_function> mtu 9000",
"podman login registry.redhat.io",
"podman pull registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.18",
"mkdir -p ./out",
"podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.18 extract /home/ztp --tar | tar x -C ./out",
"oc cluster-compare -r out/reference/metadata.yaml",
"********************************** Cluster CR: config.openshift.io/v1_OperatorHub_cluster 1 Reference File: required/other/operator-hub.yaml 2 Diff Output: diff -u -N /tmp/MERGED-2801470219/config-openshift-io-v1_operatorhub_cluster /tmp/LIVE-2569768241/config-openshift-io-v1_operatorhub_cluster --- /tmp/MERGED-2801470219/config-openshift-io-v1_operatorhub_cluster 2024-12-12 14:13:22.898756462 +0000 +++ /tmp/LIVE-2569768241/config-openshift-io-v1_operatorhub_cluster 2024-12-12 14:13:22.898756462 +0000 @@ -1,6 +1,6 @@ apiVersion: config.openshift.io/v1 kind: OperatorHub metadata: + annotations: 3 + include.release.openshift.io/hypershift: \"true\" name: cluster -spec: - disableAllDefaultSources: true ********************************** Summary 4 CRs with diffs: 11/12 5 CRs in reference missing from the cluster: 40 6 optional-image-registry: image-registry: Missing CRs: 7 - optional/image-registry/ImageRegistryPV.yaml optional-ptp-config: ptp-config: One of the following is required: - optional/ptp-config/PtpConfigBoundary.yaml - optional/ptp-config/PtpConfigGmWpc.yaml - optional/ptp-config/PtpConfigDualCardGmWpc.yaml - optional/ptp-config/PtpConfigForHA.yaml - optional/ptp-config/PtpConfigMaster.yaml - optional/ptp-config/PtpConfigSlave.yaml - optional/ptp-config/PtpConfigSlaveForEvent.yaml - optional/ptp-config/PtpConfigForHAForEvent.yaml - optional/ptp-config/PtpConfigMasterForEvent.yaml - optional/ptp-config/PtpConfigBoundaryForEvent.yaml ptp-operator-config: One of the following is required: - optional/ptp-config/PtpOperatorConfig.yaml - optional/ptp-config/PtpOperatorConfigForEvent.yaml optional-storage: storage: Missing CRs: - optional/local-storage-operator/StorageLV.yaml No CRs are unmatched to reference CRs 8 Metadata Hash: 09650c31212be9a44b99315ec14d2e7715ee194a5d68fb6d24f65fd5ddbe3c3c 9 No patched CRs 10"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/telco-ran-du-ref-design-specs |
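Before running the comparison described above, it can be useful to inspect what the extract step actually produced; a minimal, optional check of the extracted reference content (the paths follow from the extract command shown in the procedure):
$ ls out/reference/                  # list the extracted reference CR templates
$ less out/reference/metadata.yaml   # review the metadata file passed to 'oc cluster-compare -r'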
19.8. Expert Options | 19.8. Expert Options This section provides information about expert options. KVM Virtualization -enable-kvm QEMU-KVM supports only KVM virtualization and it is used by default if available. If -enable-kvm is used and KVM is not available, qemu-kvm fails. However, if -enable-kvm is not used and KVM is not available, qemu-kvm runs in TCG mode, which is not supported. Disable Kernel Mode PIT Reinjection -no-kvm-pit-reinjection No Shutdown -no-shutdown No Reboot -no-reboot Serial Port, Monitor, QMP -serial <dev> -monitor <dev> -qmp <dev> Supported devices are: stdio - standard input/output null - null device file :<filename> - output to file. tcp :[<host>]:<port>[,server][,nowait][,nodelay] - TCP Net console. unix :<path>[,server][,nowait] - Unix domain socket. mon :<dev_string> - Any device above, used to multiplex monitor too. none - disable, valid only for -serial. chardev :<id> - character device created with -chardev. Monitor Redirect -mon <chardev_id>[,mode=[readline|control]][,default=[on|off]] Manual CPU Start -S RTC -rtc [base=utc|localtime|date][,clock=host|vm][,driftfix=none|slew] Watchdog -watchdog model Watchdog Reaction -watchdog-action <action> Guest Memory Backing -mem-prealloc -mem-path /dev/hugepages SMBIOS Entry -smbios type=0[,vendor=<str>][,<version=str>][,date=<str>][,release=%d.%d] -smbios type=1[,manufacturer=<str>][,product=<str>][,version=<str>][,serial=<str>][,uuid=<uuid>][,sku=<str>][,family=<str>] | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sec-qemu_kvm_whitelist_expert_options |
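The expert options listed above are normally assembled by libvirt rather than typed by hand, but a hand-built invocation can help when experimenting with them. The following is only a sketch: the memory size, disk image path, socket path, and watchdog model (i6300esb) are illustrative assumptions, not values taken from this guide.
$ /usr/libexec/qemu-kvm -enable-kvm -m 2048 -smp 2 \
    -drive file=/var/lib/libvirt/images/guest.img,if=virtio \
    -rtc base=utc,clock=host,driftfix=slew \
    -serial stdio \
    -monitor unix:/tmp/guest-mon.sock,server,nowait \
    -watchdog i6300esb -watchdog-action reset \
    -no-shutdown -no-reboot
In this sketch, -no-shutdown keeps the QEMU process running (with the guest stopped) after a guest-initiated shutdown, while -no-reboot makes QEMU exit instead of rebooting the guest.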
Chapter 5. Changing the update approval strategy | Chapter 5. Changing the update approval strategy To ensure that the storage system is updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators → Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/updating_openshift_data_foundation/changing-the-update-approval-strategy_rhodf
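The console procedure above is the documented method; the same setting can also be toggled from the command line by patching the Operator Lifecycle Manager Subscription. This is only a sketch: the subscription name varies between installations (odf-operator is an assumption here), so list the subscriptions first.
$ oc get subscription -n openshift-storage                        # find the OpenShift Data Foundation subscription name
$ oc patch subscription odf-operator -n openshift-storage \
    --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'   # use "Automatic" to revert
Setting spec.installPlanApproval to Manual has the same effect as selecting Manual in the web console: each new install plan then waits for approval before the upgrade proceeds.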
Chapter 7. Troubleshooting disaster recovery | Chapter 7. Troubleshooting disaster recovery 7.1. Troubleshooting Metro-DR 7.1.1. A statefulset application stuck after failover Problem While relocating to a preferred cluster, DRPlacementControl is stuck reporting PROGRESSION as "MovingToSecondary". Previously, before Kubernetes v1.23, the Kubernetes control plane never cleaned up the PVCs created for StatefulSets. This activity was left to the cluster administrator or a software operator managing the StatefulSets. Due to this, the PVCs of the StatefulSets were left untouched when their Pods were deleted. This prevents Ramen from relocating an application to its preferred cluster. Resolution If the workload uses StatefulSets, and relocation is stuck with PROGRESSION as "MovingToSecondary", then run: For each bounded PVC for that namespace that belongs to the StatefulSet, run Once all PVCs are deleted, Volume Replication Group (VRG) transitions to secondary, and then gets deleted. Run the following command After a few seconds to a few minutes, the PROGRESSION reports "Completed" and relocation is complete. Result The workload is relocated to the preferred cluster BZ reference: [ 2118270 ] 7.1.2. DR policies protect all applications in the same namespace Problem While only a single application is selected to be used by a DR policy, all applications in the same namespace will be protected. This results in PVCs, that match the DRPlacementControl spec.pvcSelector across multiple workloads or if the selector is missing across all workloads, replication management to potentially manage each PVC multiple times and cause data corruption or invalid operations based on individual DRPlacementControl actions. Resolution Label PVCs that belong to a workload uniquely, and use the selected label as the DRPlacementControl spec.pvcSelector to disambiguate which DRPlacementControl protects and manages which subset of PVCs within a namespace. It is not possible to specify the spec.pvcSelector field for the DRPlacementControl using the user interface, hence the DRPlacementControl for such applications must be deleted and created using the command line. BZ reference: [ 2128860 ] 7.1.3. During failback of an application stuck in Relocating state Problem This issue might occur after performing failover and failback of an application (all nodes or clusters are up). When performing failback, application is stuck in the Relocating state with a message of Waiting for PV restore to complete. Resolution Use S3 client or equivalent to clean up the duplicate PV objects from the s3 store. Keep only the one that has a timestamp closer to the failover or relocate time. BZ reference: [ 2120201 ] 7.1.4. Relocate or failback might be stuck in Initiating state Problem When a primary cluster is down and comes back online while the secondary goes down, relocate or failback might be stuck in the Initiating state. Resolution To avoid this situation, cut off all access from the old active hub to the managed clusters. Alternatively, you can scale down the ApplicationSet controller on the old active hub cluster either before moving workloads or when they are in the clean-up phase. On the old active hub, scale down the two deployments using the following commands: BZ reference: [ 2243804 ] 7.2. Troubleshooting Regional-DR 7.2.1. 
rbd-mirror daemon health is in warning state Problem There appears to be numerous cases where WARNING gets reported if mirror service ::get_mirror_service_status calls Ceph monitor to get service status for rbd-mirror . Following a network disconnection, rbd-mirror daemon health is in the warning state while the connectivity between both the managed clusters is fine. Resolution Run the following command in the toolbox and look for leader:false If you see the following in the output: leader: false It indicates that there is a daemon startup issue and the most likely root cause could be due to problems reliably connecting to the secondary cluster. Workaround: Move the rbd-mirror pod to a different node by simply deleting the pod and verify that it has been rescheduled on another node. leader: true or no output Contact Red Hat Support . BZ reference: [ 2118627 ] 7.2.2. volsync-rsync-src pod is in error state as it is unable to resolve the destination hostname Problem VolSync source pod is unable to resolve the hostname of the VolSync destination pod. The log of the VolSync Pod consistently shows an error message over an extended period of time similar to the following log snippet. Example output Resolution Restart submariner-lighthouse-agent on both nodes. 7.2.3. Cleanup and data sync for ApplicationSet workloads remain stuck after older primary managed cluster is recovered post failover Problem ApplicationSet based workload deployments to managed clusters are not garbage collected in cases when the hub cluster fails. It is recovered to a standby hub cluster, while the workload has been failed over to a surviving managed cluster. The cluster that the workload was failed over from, rejoins the new recovered standby hub. ApplicationSets that are DR protected, with a regional DRPolicy, hence starts firing the VolumeSynchronizationDelay alert. Further such DR protected workloads cannot be failed over to the peer cluster or relocated to the peer cluster as data is out of sync between the two clusters. Resolution The workaround requires that openshift-gitops operators can own the workload resources that are orphaned on the managed cluster that rejoined the hub post a failover of the workload was performed from the new recovered hub. To achieve this the following steps can be taken: Determine the Placement that is in use by the ArgoCD ApplicationSet resource on the hub cluster in the openshift-gitops namespace. Inspect the placement label value for the ApplicationSet in this field: spec.generators.clusterDecisionResource.labelSelector.matchLabels This would be the name of the Placement resource <placement-name> Ensure that there exists a PlacemenDecision for the ApplicationSet referenced Placement . This results in a single PlacementDecision that places the workload in the currently desired failover cluster. Create a new PlacementDecision for the ApplicationSet pointing to the cluster where it should be cleaned up. For example: Update the newly created PlacementDecision with a status subresource . Watch and ensure that the Application resource for the ApplicationSet has been placed on the desired cluster In the output, check if the SYNC STATUS shows as Synced and the HEALTH STATUS shows as Healthy . Delete the PlacementDecision that was created in step (3), such that ArgoCD can garbage collect the workload resources on the <managedcluster-name-to-clean-up> ApplicationSets that are DR protected, with a regional DRPolicy, stops firing the VolumeSynchronizationDelay alert. 
BZ reference: [ 2268594 ] 7.3. Troubleshooting 2-site stretch cluster with Arbiter 7.3.1. Recovering workload pods stuck in ContainerCreating state post zone recovery Problem After performing complete zone failure and recovery, the workload pods are sometimes stuck in ContainerCreating state with any of the following errors: MountDevice failed to create newCsiDriverClient: driver name openshift-storage.rbd.csi.ceph.com not found in the list of registered CSI drivers MountDevice failed for volume <volume_name> : rpc error: code = Aborted desc = an operation with the given Volume ID <volume_id> already exists MountVolume.SetUp failed for volume <volume_name> : rpc error: code = Internal desc = staging path <path> for volume <volume_id> is not a mountpoint Resolution If the workload pods are stuck with any of the above-mentioned errors, perform the following workarounds: For a ceph-fs workload stuck in ContainerCreating : Restart the nodes where the stuck pods are scheduled Delete these stuck pods Verify that the new pods are running For a ceph-rbd workload stuck in ContainerCreating that does not self-recover after some time: Restart the csi-rbd plugin pods on the nodes where the stuck pods are scheduled Verify that the new pods are running | [
"oc get pvc -n <namespace>",
"oc delete pvc <pvcname> -n namespace",
"oc get drpc -n <namespace> -o wide",
"oc scale deploy -n openshift-gitops-operator openshift-gitops-operator-controller-manager --replicas=0 oc scale statefulset -n openshift-gitops openshift-gitops-application-controller --replicas=0",
"rbd mirror pool status --verbose ocs-storagecluster-cephblockpool | grep 'leader:'",
"oc logs -n busybox-workloads-3-2 volsync-rsync-src-dd-io-pvc-1-p25rz",
"VolSync rsync container version: ACM-0.6.0-ce9a280 Syncing data to volsync-rsync-dst-dd-io-pvc-1.busybox-workloads-3-2.svc.clusterset.local:22 ssh: Could not resolve hostname volsync-rsync-dst-dd-io-pvc-1.busybox-workloads-3-2.svc.clusterset.local: Name or service not known",
"oc delete pod -l app=submariner-lighthouse-agent -n submariner-operator",
"oc get placementdecision -n openshift-gitops --selector cluster.open-cluster-management.io/placement=<placement-name>",
"apiVersion: cluster.open-cluster-management.io/v1beta1 kind: PlacementDecision metadata: labels: cluster.open-cluster-management.io/decision-group-index: \"1\" # Typically one higher than the same value in the esisting PlacementDecision determined at step (2) cluster.open-cluster-management.io/decision-group-name: \"\" cluster.open-cluster-management.io/placement: cephfs-appset-busybox10-placement name: <placemen-name>-decision-<n> # <n> should be one higher than the existing PlacementDecision as determined in step (2) namespace: openshift-gitops",
"decision-status.yaml: status: decisions: - clusterName: <managedcluster-name-to-clean-up> # This would be the cluster from where the workload was failed over, NOT the current workload cluster reason: FailoverCleanup",
"oc patch placementdecision -n openshift-gitops <placemen-name>-decision-<n> --patch-file=decision-status.yaml --subresource=status --type=merge",
"oc get application -n openshift-gitops <applicationset-name>-<managedcluster-name-to-clean-up>",
"oc delete placementdecision -n openshift-gitops <placemen-name>-decision-<n>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/troubleshooting_disaster_recovery |
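For the last workaround above (restarting the csi-rbd plugin pods on the nodes that host the stuck pods), deleting the pods lets their daemonset recreate them. A hedged sketch, assuming the default openshift-storage namespace and the app=csi-rbdplugin label used by OpenShift Data Foundation:
$ oc get pods -n openshift-storage -l app=csi-rbdplugin -o wide   # identify the plugin pod on the affected node
$ oc delete pod -n openshift-storage -l app=csi-rbdplugin \
    --field-selector spec.nodeName=<node_name>                    # the daemonset recreates the pod automatically
Verify with 'oc get pods -n openshift-storage -l app=csi-rbdplugin' that the replacement pod reaches the Running state before rechecking the workload pods.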
Chapter 2. The core Ceph components | Chapter 2. The core Ceph components A Red Hat Ceph Storage cluster can have a large number of Ceph nodes for limitless scalability, high availability and performance. Each node leverages non-proprietary hardware and intelligent Ceph daemons that communicate with each other to: Write and read data Compress data Ensure durability by replicating or erasure coding data Monitor and report on cluster health- also called 'heartbeating' Redistribute data dynamically- also called 'backfilling' Ensure data integrity; and, Recover from failures. To the Ceph client interface that reads and writes data, a Red Hat Ceph Storage cluster looks like a simple pool where it stores data. However, librados and the storage cluster perform many complex operations in a manner that is completely transparent to the client interface. Ceph clients and Ceph OSDs both use the CRUSH (Controlled Replication Under Scalable Hashing) algorithm. The following sections provide details on how CRUSH enables Ceph to perform these operations seamlessly. Prerequisites A basic understanding of distributed storage systems. 2.1. Ceph pools The Ceph storage cluster stores data objects in logical partitions called 'Pools'. Ceph administrators can create pools for particular types of data, such as for block devices, object gateways, or simply just to separate one group of users from another. From the perspective of a Ceph client, the storage cluster is very simple. When a Ceph client reads or writes data using an I/O context, it always connects to a storage pool in the Ceph storage cluster. The client specifies the pool name, a user and a secret key, so the pool appears to act as a logical partition with access controls to its data objects. In actual fact, a Ceph pool is not only a logical partition for storing object data. A pool plays a critical role in how the Ceph storage cluster distributes and stores data. However, these complex operations are completely transparent to the Ceph client. Ceph pools define: Pool Type: In early versions of Ceph, a pool simply maintained multiple deep copies of an object. Today, Ceph can maintain multiple copies of an object, or it can use erasure coding to ensure durability. The data durability method is pool-wide, and does not change after creating the pool. The pool type defines the data durability method when creating the pool. Pool types are completely transparent to the client. Placement Groups: In an exabyte scale storage cluster, a Ceph pool might store millions of data objects or more. Ceph must handle many types of operations, including data durability via replicas or erasure code chunks, data integrity by scrubbing or CRC checks, replication, rebalancing and recovery. Consequently, managing data on a per-object basis presents a scalability and performance bottleneck. Ceph addresses this bottleneck by sharding a pool into placement groups. The CRUSH algorithm computes the placement group for storing an object and computes the Acting Set of OSDs for the placement group. CRUSH puts each object into a placement group. Then, CRUSH stores each placement group in a set of OSDs. System administrators set the placement group count when creating or modifying a pool. CRUSH Ruleset: CRUSH plays another important role: CRUSH can detect failure domains and performance domains. CRUSH can identify OSDs by storage media type and organize OSDs hierarchically into nodes, racks, and rows. CRUSH enables Ceph OSDs to store object copies across failure domains. 
For example, copies of an object may get stored in different server rooms, aisles, racks and nodes. If a large part of a cluster fails, such as a rack, the cluster can still operate in a degraded state until the cluster recovers. Additionally, CRUSH enables clients to write data to particular types of hardware, such as SSDs, hard drives with SSD journals, or hard drives with journals on the same drive as the data. The CRUSH ruleset determines failure domains and performance domains for the pool. Administrators set the CRUSH ruleset when creating a pool. Note An administrator CANNOT change a pool's ruleset after creating the pool. Durability : In exabyte scale storage clusters, hardware failure is an expectation and not an exception. When using data objects to represent larger-grained storage interfaces such as a block device, losing one or more data objects for that larger-grained interface can compromise the integrity of the larger-grained storage entity- potentially rendering it useless. So data loss is intolerable. Ceph provides high data durability in two ways: Replica pools store multiple deep copies of an object using the CRUSH failure domain to physically separate one data object copy from another. That is, copies get distributed to separate physical hardware. This increases durability during hardware failures. Erasure coded pools store each object as K+M chunks, where K represents data chunks and M represents coding chunks. The sum represents the number of OSDs used to store the object and the M value represents the number of OSDs that can fail and still restore data should the M number of OSDs fail. From the client perspective, Ceph is elegant and simple. The client simply reads from and writes to pools. However, pools play an important role in data durability, performance and high availability. 2.2. Ceph authentication To identify users and protect against man-in-the-middle attacks, Ceph provides its cephx authentication system, which authenticates users and daemons. Note The cephx protocol does not address data encryption for data transported over the network or data stored in OSDs. Cephx uses shared secret keys for authentication, meaning both the client and the monitor cluster have a copy of the client's secret key. The authentication protocol enables both parties to prove to each other that they have a copy of the key without actually revealing it. This provides mutual authentication, which means the cluster is sure the user possesses the secret key, and the user is sure that the cluster has a copy of the secret key. Cephx The cephx authentication protocol operates in a manner similar to Kerberos. A user/actor invokes a Ceph client to contact a monitor. Unlike Kerberos, each monitor can authenticate users and distribute keys, so there is no single point of failure or bottleneck when using cephx . The monitor returns an authentication data structure similar to a Kerberos ticket that contains a session key for use in obtaining Ceph services. This session key is itself encrypted with the user's permanent secret key, so that only the user can request services from the Ceph monitors. The client then uses the session key to request its desired services from the monitor, and the monitor provides the client with a ticket that will authenticate the client to the OSDs that actually handle data. Ceph monitors and OSDs share a secret, so the client can use the ticket provided by the monitor with any OSD or metadata server in the cluster. 
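As a concrete illustration of the shared-secret workflow described above, a user entry and its key can be created and inspected from the command line; the user name, capabilities, and pool name below are illustrative only:
$ ceph auth get-or-create client.john mon 'allow r' osd 'allow rw pool=liverpool'   # creates the user and prints its secret key
$ ceph auth get client.john                                                          # shows the stored key and capabilities
The monitor keeps a copy of the generated key and the client keyring holds the other copy, which is exactly the shared-secret arrangement that cephx relies on.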
Like Kerberos, cephx tickets expire, so an attacker cannot use an expired ticket or session key obtained surreptitiously. This form of authentication will prevent attackers with access to the communications medium from either creating bogus messages under another user's identity or altering another user's legitimate messages, as long as the user's secret key is not divulged before it expires. To use cephx , an administrator must set up users first. In the following diagram, the client.admin user invokes ceph auth get-or-create-key from the command line to generate a username and secret key. Ceph's auth subsystem generates the username and key, stores a copy with the monitor(s) and transmits the user's secret back to the client.admin user. This means that the client and the monitor share a secret key. Note The client.admin user must provide the user ID and secret key to the user in a secure manner. 2.3. Ceph placement groups Storing millions of objects in a cluster and managing them individually is resource intensive. So Ceph uses placement groups (PGs) to make managing a huge number of objects more efficient. A PG is a subset of a pool that serves to contain a collection of objects. Ceph shards a pool into a series of PGs. Then, the CRUSH algorithm takes the cluster map and the status of the cluster into account and distributes the PGs evenly and pseudo-randomly to OSDs in the cluster. Here is how it works. When a system administrator creates a pool, CRUSH creates a user-defined number of PGs for the pool. Generally, the number of PGs should be a reasonably fine-grained subset of the data. For example, 100 PGs per OSD per pool would mean that each PG contains approximately 1% of the pool's data. The number of PGs has a performance impact when Ceph needs to move a PG from one OSD to another OSD. If the pool has too few PGs, Ceph will move a large percentage of the data simultaneously and the network load will adversely impact the cluster's performance. If the pool has too many PGs, Ceph will use too much CPU and RAM when moving tiny percentages of the data and thereby adversely impact the cluster's performance. For details on calculating the number of PGs to achieve optimal performance, see Placement group count . Ceph ensures against data loss by storing replicas of an object or by storing erasure code chunks of an object. Since Ceph stores objects or erasure code chunks of an object within PGs, Ceph replicates each PG in a set of OSDs called the "Acting Set" for each copy of an object or each erasure code chunk of an object. A system administrator can determine the number of PGs in a pool and the number of replicas or erasure code chunks. However, the CRUSH algorithm calculates which OSDs are in the acting set for a particular PG. The CRUSH algorithm and PGs make Ceph dynamic. Changes in the cluster map or the cluster state may result in Ceph moving PGs from one OSD to another automatically. Here are a few examples: Expanding the Cluster: When adding a new host and its OSDs to the cluster, the cluster map changes. Since CRUSH evenly and pseudo-randomly distributes PGs to OSDs throughout the cluster, adding a new host and its OSDs means that CRUSH will reassign some of the pool's placement groups to those new OSDs. That means that system administrators do not have to rebalance the cluster manually. Also, it means that the new OSDs contain approximately the same amount of data as the other OSDs. 
This also means that new OSDs do not contain newly written data, preventing "hot spots" in the cluster. An OSD Fails: When an OSD fails, the state of the cluster changes. Ceph temporarily loses one of the replicas or erasure code chunks, and needs to make another copy. If the primary OSD in the acting set fails, the next OSD in the acting set becomes the primary and CRUSH calculates a new OSD to store the additional copy or erasure code chunk. By managing millions of objects within the context of hundreds to thousands of PGs, the Ceph storage cluster can grow, shrink and recover from failure efficiently. For Ceph clients, the CRUSH algorithm via librados makes the process of reading and writing objects very simple. A Ceph client simply writes an object to a pool or reads an object from a pool. The primary OSD in the acting set can write replicas of the object or erasure code chunks of the object to the secondary OSDs in the acting set on behalf of the Ceph client. If the cluster map or cluster state changes, the CRUSH computation for which OSDs store the PG will change too. For example, a Ceph client may write object foo to the pool bar . CRUSH will assign the object to PG 1.a , and store it on OSD 5 , which makes replicas on OSD 10 and OSD 15 respectively. If OSD 5 fails, the cluster state changes. When the Ceph client reads object foo from pool bar , the client via librados will automatically retrieve it from OSD 10 as the new primary OSD dynamically. The Ceph client via librados connects directly to the primary OSD within an acting set when writing and reading objects. Since I/O operations do not use a centralized broker, network oversubscription is typically NOT an issue with Ceph. The following diagram depicts how CRUSH assigns objects to PGs, and PGs to OSDs. The CRUSH algorithm assigns the PGs to OSDs such that each OSD in the acting set is in a separate failure domain, which typically means the OSDs will always be on separate server hosts and sometimes in separate racks. 2.4. Ceph CRUSH ruleset Ceph assigns a CRUSH ruleset to a pool. When a Ceph client stores or retrieves data in a pool, Ceph identifies the CRUSH ruleset, a rule within the rule set, and the top-level bucket in the rule for storing and retrieving data. As Ceph processes the CRUSH rule, it identifies the primary OSD that contains the placement group for an object. That enables the client to connect directly to the OSD, access the placement group and read or write object data. To map placement groups to OSDs, a CRUSH map defines a hierarchical list of bucket types. The list of bucket types is located under types in the generated CRUSH map. The purpose of creating a bucket hierarchy is to segregate the leaf nodes by their failure domains and/or performance domains, such as drive type, hosts, chassis, racks, power distribution units, pods, rows, rooms, and data centers. With the exception of the leaf nodes representing OSDs, the rest of the hierarchy is arbitrary. Administrators may define it according to their own needs if the default types don't suit their requirements. CRUSH supports a directed acyclic graph that models the Ceph OSD nodes, typically in a hierarchy. So Ceph administrators can support multiple hierarchies with multiple root nodes in a single CRUSH map. For example, an administrator can create a hierarchy representing higher cost SSDs for high performance, and a separate hierarchy of lower cost hard drives with SSD journals for moderate performance. 2.5.
Ceph input/output operations Ceph clients retrieve a 'Cluster Map' from a Ceph monitor, bind to a pool, and perform input/output (I/O) on objects within placement groups in the pool. The pool's CRUSH ruleset and the number of placement groups are the main factors that determine how Ceph will place the data. With the latest version of the cluster map, the client knows about all of the monitors and OSDs in the cluster and their current state. However, the client doesn't know anything about object locations. The only inputs required by the client are the object ID and the pool name. It is simple: Ceph stores data in named pools. When a client wants to store a named object in a pool it takes the object name, a hash code, the number of PGs in the pool and the pool name as inputs; then, CRUSH (Controlled Replication Under Scalable Hashing) calculates the ID of the placement group and the primary OSD for the placement group. Ceph clients use the following steps to compute PG IDs. The client inputs the pool ID and the object ID. For example, pool = liverpool and object-id = john . CRUSH takes the object ID and hashes it. CRUSH calculates the hash modulo of the number of PGs to get a PG ID. For example, 58 . CRUSH calculates the primary OSD corresponding to the PG ID. The client gets the pool ID given the pool name. For example, the pool liverpool is pool number 4 . The client prepends the pool ID to the PG ID. For example, 4.58 . The client performs an object operation such as write, read, or delete by communicating directly with the Primary OSD in the Acting Set. The topology and state of the Ceph storage cluster are relatively stable during a session. Empowering a Ceph client via librados to compute object locations is much faster than requiring the client to make a query to the storage cluster over a chatty session for each read/write operation. The CRUSH algorithm allows a client to compute where objects should be stored, and enables the client to contact the primary OSD in the acting set directly to store or retrieve data in the objects. Since a cluster at the exabyte scale has thousands of OSDs, network oversubscription between a client and a Ceph OSD is not a significant problem. If the cluster state changes, the client can simply request an update to the cluster map from the Ceph monitor. 2.6. Ceph replication Like Ceph clients, Ceph OSDs can contact Ceph monitors to retrieve the latest copy of the cluster map. Ceph OSDs also use the CRUSH algorithm, but they use it to compute where to store replicas of objects. In a typical write scenario, a Ceph client uses the CRUSH algorithm to compute the placement group ID and the primary OSD in the Acting Set for an object. When the client writes the object to the primary OSD, the primary OSD finds the number of replicas that it should store. The value is found in the osd_pool_default_size setting. Then, the primary OSD takes the object ID, pool name and the cluster map and uses the CRUSH algorithm to calculate the IDs of secondary OSDs for the acting set. The primary OSD writes the object to the secondary OSDs. When the primary OSD receives an acknowledgment from the secondary OSDs and the primary OSD itself completes its write operation, it acknowledges a successful write operation to the Ceph client. With the ability to perform data replication on behalf of Ceph clients, Ceph OSD Daemons relieve Ceph clients from that duty, while ensuring high data availability and data safety. 
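The mapping computed by CRUSH and the replica count used by the primary OSD can both be inspected directly; a small sketch using the pool and object names from the example above (the exact output format varies by release):
$ ceph osd map liverpool john        # prints the PG ID and the acting set, with the primary OSD marked
$ ceph osd pool get liverpool size   # number of replicas the primary writes for this pool
Because the client performs the same CRUSH computation locally, the acting set reported here is the one the client contacts directly, without going through a central broker.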
Note The primary OSD and the secondary OSDs are typically configured to be in separate failure domains. CRUSH computes the IDs of the secondary OSDs with consideration for the failure domains. Data copies In a replicated storage pool, Ceph needs multiple copies of an object to operate in a degraded state. Ideally, a Ceph storage cluster enables a client to read and write data even if one of the OSDs in an acting set fails. For this reason, Ceph defaults to making three copies of an object with a minimum of two copies clean for write operations. Ceph will still preserve data even if two OSDs fail. However, it will interrupt write operations. In an erasure-coded pool, Ceph needs to store chunks of an object across multiple OSDs so that it can operate in a degraded state. Similar to replicated pools, ideally an erasure-coded pool enables a Ceph client to read and write in a degraded state. Important Red Hat supports the following jerasure coding values for k and m : k=8 m=3 k=8 m=4 k=4 m=2 2.7. Ceph erasure coding Ceph can load one of many erasure code algorithms. The earliest and most commonly used is the Reed-Solomon algorithm. An erasure code is actually a forward error correction (FEC) code. An FEC code transforms a message of K chunks into a longer message called a 'code word' of N chunks, such that Ceph can recover the original message from a subset of the N chunks. More specifically, N = K+M where the variable K is the original amount of data chunks. The variable M stands for the extra or redundant chunks that the erasure code algorithm adds to provide protection from failures. The variable N is the total number of chunks created after the erasure coding process. The value of M is simply N-K which means that the algorithm computes N-K redundant chunks from K original data chunks. This approach guarantees that Ceph can access all the original data. The system is resilient to arbitrary N-K failures. For instance, in a 10 K of 16 N configuration, or erasure coding 10/16 , the erasure code algorithm adds six extra chunks to the 10 base chunks K . For example, in an M = N-K or 16-10 = 6 configuration, Ceph will spread the 16 chunks N across 16 OSDs. The original file could be reconstructed from 10 of the N chunks even if 6 OSDs fail, ensuring that the Red Hat Ceph Storage cluster will not lose data, and thereby ensures a very high level of fault tolerance. Like replicated pools, in an erasure-coded pool the primary OSD in the up set receives all write operations. In replicated pools, Ceph makes a deep copy of each object in the placement group on the secondary OSDs in the set. For erasure coding, the process is a bit different. An erasure coded pool stores each object as K+M chunks. It is divided into K data chunks and M coding chunks. The pool is configured to have a size of K+M so that Ceph stores each chunk in an OSD in the acting set. Ceph stores the rank of the chunk as an attribute of the object. The primary OSD is responsible for encoding the payload into K+M chunks and sends them to the other OSDs. The primary OSD is also responsible for maintaining an authoritative version of the placement group logs. For example, in a typical configuration a system administrator creates an erasure coded pool to use six OSDs and sustain the loss of two of them. That is, ( K+M = 6 ) such that ( M = 2 ).
When Ceph writes the object NYAN containing ABCDEFGHIJKL to the pool, the erasure encoding algorithm splits the content into four data chunks by simply dividing the content into four parts: ABC , DEF , GHI , and JKL . The algorithm will pad the content if the content length is not a multiple of K . The function also creates two coding chunks: the fifth with YXY and the sixth with QGC . Ceph stores each chunk on an OSD in the acting set, where it stores the chunks in objects that have the same name, NYAN , but reside on different OSDs. The algorithm must preserve the order in which it created the chunks as an attribute of the object shard_t , in addition to its name. For example, Chunk 1 contains ABC and Ceph stores it on OSD5 while chunk 5 contains YXY and Ceph stores it on OSD4 . In a recovery scenario, the client attempts to read the object NYAN from the erasure-coded pool by reading chunks 1 through 6. The OSD informs the algorithm that chunks 2 and 6 are missing. These missing chunks are called 'erasures'. For example, the primary OSD could not read chunk 6 because the OSD6 is out, and could not read chunk 2, because OSD2 was the slowest and its chunk was not taken into account. However, as soon as the algorithm has four chunks, it reads the four chunks: chunk 1 containing ABC , chunk 3 containing GHI , chunk 4 containing JKL , and chunk 5 containing YXY . Then, it rebuilds the original content of the object ABCDEFGHIJKL , and original content of chunk 6, which contained QGC . Splitting data into chunks is independent from object placement. The CRUSH ruleset along with the erasure-coded pool profile determines the placement of chunks on the OSDs. For instance, using the Locally Repairable Code ( lrc ) plugin in the erasure code profile creates additional chunks and requires fewer OSDs to recover from. For example, in an lrc profile configuration K=4 M=2 L=3 , the algorithm creates six chunks ( K+M ), just as the jerasure plugin would, but the locality value ( L=3 ) requires that the algorithm create 2 more chunks locally. The algorithm creates the additional chunks as such, (K+M)/L . If the OSD containing chunk 0 fails, this chunk can be recovered by using chunks 1, 2 and the first local chunk. In this case, the algorithm only requires 3 chunks for recovery instead of 5. Note Using erasure-coded pools disables Object Map. Important For an erasure-coded pool with 2+2 configuration, replace the input string from ABCDEFGHIJKL to ABCDEF and replace the coding chunks from 4 to 2 . Additional Resources For more information about CRUSH, the erasure-coding profiles, and plugins, see the Storage Strategies Guide for Red Hat Ceph Storage 7. For more details on Object Map, see the Ceph client object map section. 2.8. Ceph ObjectStore ObjectStore provides a low-level interface to an OSD's raw block device. When a client reads or writes data, it interacts with the ObjectStore interface. Ceph write operations are essentially ACID transactions: that is, they provide Atomicity , Consistency , Isolation and Durability . ObjectStore ensures that a Transaction is all-or-nothing to provide Atomicity . The ObjectStore also handles object semantics. An object stored in the storage cluster has a unique identifier, object data and metadata. So ObjectStore provides Consistency by ensuring that Ceph object semantics are correct. ObjectStore also provides the Isolation portion of an ACID transaction by invoking a Sequencer on write operations to ensure that Ceph write operations occur sequentially. 
In contrast, an OSDs replication or erasure coding functionality provides the Durability component of the ACID transaction. Since ObjectStore is a low-level interface to storage media, it also provides performance statistics. Ceph implements several concrete methods for storing data: BlueStore: A production grade implementation using a raw block device to store object data. Memstore: A developer implementation for testing read/write operations directly in RAM. K/V Store: An internal implementation for Ceph's use of key/value databases. Since administrators will generally only address BlueStore , the following sections will only describe those implementations in greater detail. 2.9. Ceph BlueStore BlueStore is the current storage implementation for Ceph. It uses the very light weight BlueFS file system on a small partition for its k/v databases and eliminates the paradigm of a directory representing a placement group, a file representing an object and file XATTRs representing metadata. BlueStore stores data as: Object Data: In BlueStore , Ceph stores objects as blocks directly on a raw block device. The portion of the raw block device that stores object data does NOT contain a filesystem. The omission of the filesystem eliminates a layer of indirection and thereby improves performance. However, much of the BlueStore performance improvement comes from the block database and write-ahead log. Block Database: In BlueStore , the block database handles the object semantics to guarantee Consistency . An object's unique identifier is a key in the block database. The values in the block database consist of a series of block addresses that refer to the stored object data, the object's placement group, and object metadata. The block database may reside on a BlueFS partition on the same raw block device that stores the object data, or it may reside on a separate block device, usually when the primary block device is a hard disk drive and an SSD or NVMe will improve performance. The key/value semantics of BlueStore do not suffer from the limitations of filesystem XATTRs. BlueStore might assign objects to other placement groups quickly within the block database without the overhead of moving files from one directory to another. The block database can store the checksum of the stored object data and its metadata, allowing full data checksum operations for each read, which is more efficient than periodic scrubbing to detect bit rot. BlueStore can compress an object and the block database can store the algorithm used to compress an object- ensuring that read operations select the appropriate algorithm for decompression. Write-ahead Log: In BlueStore , the write-ahead log ensures Atomicity and it logs all aspects of each transaction. The BlueStore write-ahead log or WAL can perform this function simultaneously. BlueStore can deploy the WAL on the same device for storing object data, or it may deploy the WAL on another device, usually when the primary block device is a hard disk drive and an SSD or NVMe will improve performance. Note It is only helpful to store a block database or a write-ahead log on a separate block device if the separate device is faster than the primary storage device. For example, SSD and NVMe devices are generally faster than HDDs. Placing the block database and the WAL on separate devices may also have performance benefits due to differences in their workloads. 2.10. Ceph self management operations Ceph clusters perform a lot of self monitoring and management operations automatically. 
For example, Ceph OSDs can check the cluster health and report back to the Ceph monitors. By using CRUSH to assign objects to placement groups and placement groups to a set of OSDs, Ceph OSDs can use the CRUSH algorithm to rebalance the cluster or recover from OSD failures dynamically. 2.11. Ceph heartbeat Ceph OSDs join a cluster and report to Ceph Monitors on their status. At the lowest level, the Ceph OSD status is up or down reflecting whether or not it is running and able to service Ceph client requests. If a Ceph OSD is down and in the Ceph storage cluster, this status may indicate the failure of the Ceph OSD. If a Ceph OSD is not running for example, it crashes- the Ceph OSD cannot notify the Ceph Monitor that it is down . The Ceph Monitor can ping a Ceph OSD daemon periodically to ensure that it is running. However, heartbeating also empowers Ceph OSDs to determine if a neighboring OSD is down , to update the cluster map and to report it to the Ceph Monitors. This means that Ceph Monitors can remain light weight processes. 2.12. Ceph peering Ceph stores copies of placement groups on multiple OSDs. Each copy of a placement group has a status. These OSDs "peer" check each other to ensure that they agree on the status of each copy of the PG. Peering issues usually resolve themselves. Note When Ceph monitors agree on the state of the OSDs storing a placement group, that does not mean that the placement group has the latest contents. When Ceph stores a placement group in an acting set of OSDs, refer to them as Primary , Secondary , and so forth. By convention, the Primary is the first OSD in the Acting Set . The Primary that stores the first copy of a placement group is responsible for coordinating the peering process for that placement group. The Primary is the ONLY OSD that will accept client-initiated writes to objects for a given placement group where it acts as the Primary . An Acting Set is a series of OSDs that are responsible for storing a placement group. An Acting Set may refer to the Ceph OSD Daemons that are currently responsible for the placement group, or the Ceph OSD Daemons that were responsible for a particular placement group as of some epoch. The Ceph OSD daemons that are part of an Acting Set may not always be up . When an OSD in the Acting Set is up , it is part of the Up Set . The Up Set is an important distinction, because Ceph can remap PGs to other Ceph OSDs when an OSD fails. Note In an Acting Set for a PG containing osd.25 , osd.32 and osd.61 , the first OSD, osd.25 , is the Primary . If that OSD fails, the Secondary , osd.32 , becomes the Primary , and Ceph will remove osd.25 from the Up Set . 2.13. Ceph rebalancing and recovery When an administrator adds a Ceph OSD to a Ceph storage cluster, Ceph updates the cluster map. This change to the cluster map also changes object placement, because the modified cluster map changes an input for the CRUSH calculations. CRUSH places data evenly, but pseudo randomly. So only a small amount of data moves when an administrator adds a new OSD. The amount of data is usually the number of new OSDs divided by the total amount of data in the cluster. For example, in a cluster with 50 OSDs, 1/50th or 2% of the data might move when adding an OSD. The following diagram depicts the rebalancing process where some, but not all of the PGs migrate from existing OSDs, OSD 1 and 2 in the diagram, to the new OSD, OSD 3, in the diagram. Even when rebalancing, CRUSH is stable. 
Many of the placement groups remain in their original configuration, and each OSD gets some added capacity, so there are no load spikes on the new OSD after the cluster rebalances. There are 2 types of balancers in Ceph: Capacity balancing : Capacity balancing is a functional need. When one device is full, the system can not take write requests anymore. To avoid filling up devices, it is important to balance capacity across the devices in a fair way. Each device must get a capacity proportional to its size so all devices have the same fullness level. Capacity balancing creates fair share workloads on the OSDs for write requests from a performance perspective. Capacity balancing requires data movement and is a time-consuming operation as it takes time to balance the system. For optimal write performance, ensure that all devices are homogeneous (same size and performance). Read balancing : [Technology Preview] Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details. Read balancing is a performance need. It helps the system perform better by ensuring that each device gets its fair share of primary OSDs so read requests get distributed across OSDs in the cluster, evenly. Unbalanced read requests lead to bad performance under load due to the weakest link in the chain effect and to reduced cluster read bandwidth. Read balancing is cheap and the operation is fast as there is no data movement involved. It is a metadata operation, where the osdmap is updated to change which participating OSD in a pg is primary. Read balancing supports replicated pools only, erasure coded pools are not supported. Read balancing does not take into account the device class of the OSDs and the availability zones of DR solutions. An offline tool is available to use the read balancing feature and you need to run the procedure on each pool of the cluster. You need to run the read balancing procedure again after every autoscale change. For optimal read performance, ensure that all devices are homogeneous (same size and performance) and that you have balanced the capacity. 2.14. Ceph data integrity As part of maintaining data integrity, Ceph provides numerous mechanisms to guard against bad disk sectors and bit rot. Scrubbing: Ceph OSD Daemons can scrub objects within placement groups. That is, Ceph OSD Daemons can compare object metadata in one placement group with its replicas in placement groups stored on other OSDs. Scrubbing- usually performed daily- catches bugs or storage errors. Ceph OSD Daemons also perform deeper scrubbing by comparing data in objects bit-for-bit. Deep scrubbing- usually performed weekly- finds bad sectors on a drive that weren't apparent in a light scrub. CRC Checks: In Red Hat Ceph Storage 7 when using BlueStore , Ceph can ensure data integrity by conducting a cyclical redundancy check (CRC) on write operations; then, store the CRC value in the block database. On read operations, Ceph can retrieve the CRC value from the block database and compare it with the generated CRC of the retrieved data to ensure data integrity instantly. 2.15. 
Ceph high availability In addition to the high scalability enabled by the CRUSH algorithm, Ceph must also maintain high availability. This means that Ceph clients must be able to read and write data even when the cluster is in a degraded state, or when a monitor fails. 2.16. Clustering the Ceph Monitor Before Ceph clients can read or write data, they must contact a Ceph Monitor to obtain the most recent copy of the cluster map. A Red Hat Ceph Storage cluster can operate with a single monitor; however, this introduces a single point of failure. That is, if the monitor goes down, Ceph clients cannot read or write data. For added reliability and fault tolerance, Ceph supports a cluster of monitors. In a cluster of Ceph Monitors, latency and other faults can cause one or more monitors to fall behind the current state of the cluster. For this reason, Ceph must have agreement among various monitor instances regarding the state of the storage cluster. Ceph always uses a majority of monitors and the Paxos algorithm to establish a consensus among the monitors about the current state of the storage cluster. Ceph Monitors nodes require NTP to prevent clock drift. Storage administrators usually deploy Ceph with an odd number of monitors so determining a majority is efficient. For example, a majority may be 1, 2:3, 3:5, 4:6, and so forth. | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/architecture_guide/the-core-ceph-components |
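Several of the mechanisms covered in this chapter can be exercised with the ceph CLI; the following commands are illustrative only (the profile, pool, and PG names are placeholders), not a required procedure:
$ ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host   # erasure-code profile (section 2.7)
$ ceph osd pool create ecpool 128 128 erasure myprofile                           # erasure-coded pool using that profile
$ ceph pg deep-scrub 1.a                                                          # trigger a deep scrub of one placement group (section 2.14)
$ ceph mon stat                                                                   # monitor quorum membership (section 2.16)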
Chapter 9. Metal3Remediation [infrastructure.cluster.x-k8s.io/v1beta1] | Chapter 9. Metal3Remediation [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3Remediation is the Schema for the metal3remediations API. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Metal3RemediationSpec defines the desired state of Metal3Remediation. status object Metal3RemediationStatus defines the observed state of Metal3Remediation. 9.1.1. .spec Description Metal3RemediationSpec defines the desired state of Metal3Remediation. Type object Property Type Description strategy object Strategy field defines remediation strategy. 9.1.2. .spec.strategy Description Strategy field defines remediation strategy. Type object Property Type Description retryLimit integer Sets maximum number of remediation retries. timeout string Sets the timeout between remediation retries. type string Type of remediation. 9.1.3. .status Description Metal3RemediationStatus defines the observed state of Metal3Remediation. Type object Property Type Description lastRemediated string LastRemediated identifies when the host was last remediated phase string Phase represents the current phase of machine remediation. E.g. Pending, Running, Done etc. retryCount integer RetryCount can be used as a counter during the remediation. Field can hold number of reboots etc. 9.2. API endpoints The following API endpoints are available: /apis/infrastructure.cluster.x-k8s.io/v1beta1/metal3remediations GET : list objects of kind Metal3Remediation /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediations DELETE : delete collection of Metal3Remediation GET : list objects of kind Metal3Remediation POST : create a Metal3Remediation /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediations/{name} DELETE : delete a Metal3Remediation GET : read the specified Metal3Remediation PATCH : partially update the specified Metal3Remediation PUT : replace the specified Metal3Remediation /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediations/{name}/status GET : read status of the specified Metal3Remediation PATCH : partially update status of the specified Metal3Remediation PUT : replace status of the specified Metal3Remediation 9.2.1. /apis/infrastructure.cluster.x-k8s.io/v1beta1/metal3remediations HTTP method GET Description list objects of kind Metal3Remediation Table 9.1. HTTP responses HTTP code Reponse body 200 - OK Metal3RemediationList schema 401 - Unauthorized Empty 9.2.2. /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediations HTTP method DELETE Description delete collection of Metal3Remediation Table 9.2. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Metal3Remediation Table 9.3. HTTP responses HTTP code Reponse body 200 - OK Metal3RemediationList schema 401 - Unauthorized Empty HTTP method POST Description create a Metal3Remediation Table 9.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.5. Body parameters Parameter Type Description body Metal3Remediation schema Table 9.6. HTTP responses HTTP code Reponse body 200 - OK Metal3Remediation schema 201 - Created Metal3Remediation schema 202 - Accepted Metal3Remediation schema 401 - Unauthorized Empty 9.2.3. /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediations/{name} Table 9.7. Global path parameters Parameter Type Description name string name of the Metal3Remediation HTTP method DELETE Description delete a Metal3Remediation Table 9.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Metal3Remediation Table 9.10. HTTP responses HTTP code Reponse body 200 - OK Metal3Remediation schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Metal3Remediation Table 9.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.12. HTTP responses HTTP code Reponse body 200 - OK Metal3Remediation schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Metal3Remediation Table 9.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.14. Body parameters Parameter Type Description body Metal3Remediation schema Table 9.15. HTTP responses HTTP code Reponse body 200 - OK Metal3Remediation schema 201 - Created Metal3Remediation schema 401 - Unauthorized Empty 9.2.4. /apis/infrastructure.cluster.x-k8s.io/v1beta1/namespaces/{namespace}/metal3remediations/{name}/status Table 9.16. Global path parameters Parameter Type Description name string name of the Metal3Remediation HTTP method GET Description read status of the specified Metal3Remediation Table 9.17. HTTP responses HTTP code Reponse body 200 - OK Metal3Remediation schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Metal3Remediation Table 9.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.19. HTTP responses HTTP code Reponse body 200 - OK Metal3Remediation schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Metal3Remediation Table 9.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.21. Body parameters Parameter Type Description body Metal3Remediation schema Table 9.22. HTTP responses HTTP code Reponse body 200 - OK Metal3Remediation schema 201 - Created Metal3Remediation schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/provisioning_apis/metal3remediation-infrastructure-cluster-x-k8s-io-v1beta1 |
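Pulling the documented spec fields together, the following is a minimal sketch of a Metal3Remediation custom resource. The metadata name and namespace are illustrative, and the Reboot strategy value is an assumption, since the reference above documents the type field but not its permitted values:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: Metal3Remediation
metadata:
  name: worker-0-remediation        # illustrative name, not taken from the reference
  namespace: openshift-machine-api  # assumed namespace
spec:
  strategy:
    type: Reboot       # assumed remediation type; check the values supported by your controller
    retryLimit: 1      # maximum number of remediation retries
    timeout: 5m0s      # time to wait between remediation retries
The status fields (phase, retryCount, lastRemediated) are populated by the controller and are not set in the manifest.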
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/updating_red_hat_jboss_enterprise_application_platform/con_making-open-source-more-inclusive |
3.2. Tracking Configuration History | 3.2. Tracking Configuration History Data from the Red Hat Virtualization History Database (called ovirt_engine_history ) can be used to track the engine database. The ETL service, ovirt-engine-dwhd , tracks three types of changes: A new entity is added to the engine database - the ETL Service replicates the change to the ovirt_engine_history database as a new entry. An existing entity is updated - the ETL Service replicates the change to the ovirt_engine_history database as a new entry. An entity is removed from the engine database - A new entry in the ovirt_engine_history database flags the corresponding entity as removed. Removed entities are only flagged as removed. The configuration tables in the ovirt_engine_history database differ from the corresponding tables in the engine database in several ways. The most apparent difference is they contain fewer configuration columns. This is because certain configuration items are less interesting to report than others and are not kept due to database size considerations. Also, columns from a few tables in the engine database appear in a single table in ovirt_engine_history and have different column names to make viewing data more convenient and comprehensible. All configuration tables contain: a history_id to indicate the configuration version of the entity; a create_date field to indicate when the entity was added to the system; an update_date field to indicate when the entity was changed; and a delete_date field to indicate the date the entity was removed from the system. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/data_warehouse_guide/tracking_configuration_history |
Chapter 51. New Drivers | Chapter 51. New Drivers Storage Drivers nvme-fabrics nvme-rdma nvmet nvmet-rdma nvme-loop qedi qedf Network Drivers qedr rdma_rxe ntb_transport ntb_perf mdev vfio_mdev amd-xgbe atlantic libcxgb ena rocker amd8111e nfp mlxsw_core mlxsw_i2c mlxsw_spectrum mlxsw_pci mlxsw_switchx2 mlxsw_switchib mlxsw_minimal Graphics Drivers and Miscellaneous Drivers ccp chcr uio_hv_generic usbip-core vhost_vsock tpm_tis_spi gpio-amdpt joydev sdio_uart ptp_kvm mei_wdt dell-rbtn dell-smo8800 intel-hid dell-smbios skx_edac kvmgt pinctrl-intel pinctrl-sunrisepoint pinctrl-amd dax_pmem dax nfit ledtrig-usbport | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/new_drivers |
Chapter 10. Post-deployment configuration suggestions Depending on your requirements, you may want to perform some additional configuration on your newly deployed Red Hat Hyperconverged Infrastructure for Virtualization. This section contains suggested steps for additional configuration. Details on these processes are available in Maintaining Red Hat Hyperconverged Infrastructure for Virtualization . 10.1. Configure notifications See Configuring Event Notifications in the Administration Portal to configure email notifications. 10.2. (Optional) Configure Host Power Management The Red Hat Virtualization Manager 4.4 is capable of rebooting hosts that have entered a non-operational or non-responsive state, as well as preparing to power off under-utilized hosts to save power. This functionality depends on a properly configured power management device. See Configuring Host Power Management Settings for further information. 10.3. Configure backup and recovery options Red Hat recommends configuring at least basic disaster recovery capabilities on all production deployments. See Configuring backup and recovery options in Maintaining Red Hat Hyperconverged Infrastructure for Virtualization for more information. | null | https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization_on_a_single_node/post-deploy-config
2.5. Tuned | 2.5. Tuned Tuned is a profile-based system tuning tool that uses the udev device manager to monitor connected devices, and enables both static and dynamic tuning of system settings. Dynamic tuning is an experimental feature and is turned off by default in Red Hat Enterprise Linux 7. Tuned offers predefined profiles to handle common use cases, such as high throughput, low latency, or power saving. You can modify the Tuned rules for each profile and customize how to tune a particular device. For instructions on how to create custom Tuned profiles, from PowerTOP suggestions, see the section called "Using powertop2tuned" . A profile is automatically set as default based on the product in use. You can use the tuned-adm recommend command to determine which profile Red Hat recommends as the most suitable for a particular product. If no recommendation is available, the balanced profile is set. The balanced profile, which is suitable for most workloads, balances energy consumption, performance, and latency. With the balanced profile, finishing a task quickly with the maximum available computing power usually requires less energy than performing the same task over a longer period of time with less computing power. Using the powersave profile can prolong battery life if a laptop is in an idle state, or performing only computationally undemanding operations. For such operations, higher latency in return for lower energy consumption is generally acceptable, or the operations do not need to be finished quickly, for example using IRC, viewing simple web pages, or playing audio and video files. For detailed information on Tuned and power-saving profiles provided with tuned-adm , see the Tuned chapter in the Red Hat Enterprise Linux 7 Performance Tuning Guide . Using powertop2tuned The powertop2tuned utility allows you to create custom Tuned profiles from PowerTOP suggestions. For information on PowerTOP , see Section 2.2, "PowerTOP" . To install the powertop2tuned utility, use: To create a custom profile, use: By default, powertop2tuned creates profiles in the /etc/tuned/ directory, and bases the custom profile on the currently selected Tuned profile. For safety reasons, all PowerTOP tunings are initially disabled in the new profile. To enable tunings, uncomment them in the /etc/tuned/ profile_name /tuned.conf file. You can use the --enable or -e option to generate a new profile that enables most of the tunings suggested by PowerTOP . Certain potentially problematic tunings, such as the USB autosuspend, are disabled by default and need to be uncommented manually. By default, the new profile is not activated. To activate it, use: For a complete list of options that powertop2tuned supports, use: | [
"~]# yum install tuned-utils",
"~]# powertop2tuned new_profile_name",
"~]# tuned-adm profile new_profile_name",
"~]USD powertop2tuned --help"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/power_management_guide/Tuned |
Chapter 6. Message delivery | Chapter 6. Message delivery 6.1. Writing to a streamed large message To write to a large message, use the BytesMessage.writeBytes() method. The following example reads bytes from a file and writes them to a message: Example: Writing to a streamed large message BytesMessage message = session.createBytesMessage(); File inputFile = new File(inputFilePath); InputStream inputStream = new FileInputStream(inputFile); int numRead; byte[] buffer = new byte[1024]; while ((numRead = inputStream.read(buffer, 0, buffer.length)) != -1) { message.writeBytes(buffer, 0, numRead); } 6.2. Reading from a streamed large message To read from a large message, use the BytesMessage.readBytes() method. The following example reads bytes from a message and writes them to a file: Example: Reading from a streamed large message BytesMessage message = (BytesMessage) consumer.receive(); File outputFile = new File(outputFilePath); OutputStream outputStream = new FileOutputStream(outputFile); int numRead; byte buffer[] = new byte[1024]; for (int pos = 0; pos < message.getBodyLength(); pos += buffer.length) { numRead = message.readBytes(buffer); outputStream.write(buffer, 0, numRead); } | [
"BytesMessage message = session.createBytesMessage(); File inputFile = new File(inputFilePath); InputStream inputStream = new FileInputStream(inputFile); int numRead; byte[] buffer = new byte[1024]; while ((numRead = inputStream.read(buffer, 0, buffer.length)) != -1) { message.writeBytes(buffer, 0, numRead); }",
"BytesMessage message = (BytesMessage) consumer.receive(); File outputFile = new File(outputFilePath); OutputStream outputStream = new FileOutputStream(outputFile); int numRead; byte buffer[] = new byte[1024]; for (int pos = 0; pos < message.getBodyLength(); pos += buffer.length) { numRead = message.readBytes(buffer); outputStream.write(buffer, 0, numRead); }"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_openwire_jms_client/message_delivery |
Creating skills and knowledge YAML files | Creating skills and knowledge YAML files Red Hat Enterprise Linux AI 1.4 Guidelines on creating skills and knowledge YAML files Red Hat RHEL AI Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/creating_skills_and_knowledge_yaml_files/index |
Chapter 4. Configuring persistent storage | Chapter 4. Configuring persistent storage 4.1. Persistent storage using AWS Elastic Block Store OpenShift Container Platform supports Amazon Elastic Block Store (EBS) volumes. You can provision your OpenShift Container Platform cluster with persistent storage by using Amazon EC2 . The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can dynamically provision Amazon EBS volumes. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. You can define a KMS key to encrypt container-persistent volumes on AWS. By default, newly created clusters using OpenShift Container Platform version 4.10 and later use gp3 storage and the AWS EBS CSI driver . Important High-availability of storage in the infrastructure is left to the underlying storage provider. Important OpenShift Container Platform 4.12 and later provides automatic migration for the AWS Block in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . 4.1.1. Creating the EBS storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.1.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.1.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This verification enables you to use unformatted AWS volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.1.4. Maximum number of EBS volumes on a node By default, OpenShift Container Platform supports a maximum of 39 EBS volumes attached to one node. This limit is consistent with the AWS volume limits . The volume limit depends on the instance type. Important As a cluster administrator, you must use either in-tree or Container Storage Interface (CSI) volumes and their respective storage classes, but never both volume types at the same time. 
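As a companion to the console-based storage class and claim procedures in sections 4.1.1 and 4.1.2 above, the following is a minimal sketch of an EBS storage class manifest. The class name and the gp3 type parameter are assumptions for illustration; the provisioner shown is the AWS EBS CSI driver referenced in this chapter.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-example              # illustrative name
provisioner: ebs.csi.aws.com     # AWS EBS CSI driver provisioner, as used in the KMS example in section 4.1.5
parameters:
  type: gp3                      # assumed EBS volume type parameter; gp2, io1, and others are also common
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
A persistent volume claim created through the console procedure in section 4.1.2 can then reference this class by name.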
The maximum attached EBS volume number is counted separately for in-tree and CSI volumes, which means you could have up to 39 EBS volumes of each type. For information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plug-ins, see AWS Elastic Block Store CSI Driver Operator . 4.1.5. Encrypting container persistent volumes on AWS with a KMS key Defining a KMS key to encrypt container-persistent volumes on AWS is useful when you have explicit compliance and security guidelines when deploying to AWS. Prerequisites Underlying infrastructure must contain storage. You must create a customer KMS key on AWS. Procedure Create a storage class: USD cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: "true" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF 1 Specifies the name of the storage class. 2 File system that is created on provisioned volumes. 3 Specifies the full Amazon Resource Name (ARN) of the key to use when encrypting the container-persistent volume. If you do not provide any key, but the encrypted field is set to true , then the default KMS key is used. See Finding the key ID and key ARN on AWS in the AWS documentation. Create a persistent volume claim (PVC) with the storage class specifying the KMS key: USD cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF Create workload containers to consume the PVC: USD cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF 4.1.6. Additional resources See AWS Elastic Block Store CSI Driver Operator for information about accessing additional storage options, such as volume snapshots, that are not possible with in-tree volume plugins. 4.2. Persistent storage using Azure OpenShift Container Platform supports Microsoft Azure Disk volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Azure Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform 4.11 and later provides automatic migration for the Azure Disk in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Microsoft Azure Disk 4.2.1. 
Creating the Azure storage class Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes. Procedure In the OpenShift Container Platform console, click Storage Storage Classes . In the storage class overview, click Create Storage Class . Define the desired options on the page that appears. Enter a name to reference the storage class. Enter an optional description. Select the reclaim policy. Select kubernetes.io/azure-disk from the drop down list. Enter the storage account type. This corresponds to your Azure storage account SKU tier. Valid options are Premium_LRS , Standard_LRS , StandardSSD_LRS , and UltraSSD_LRS . Enter the kind of account. Valid options are shared , dedicated, and managed . Important Red Hat only supports the use of kind: Managed in the storage class. With Shared and Dedicated , Azure creates unmanaged disks, while OpenShift Container Platform creates a managed disk for machine OS (root) disks. But because Azure Disk does not allow the use of both managed and unmanaged disks on a node, unmanaged disks created with Shared or Dedicated cannot be attached to OpenShift Container Platform nodes. Enter additional parameters for the storage class as desired. Click Create to create the storage class. Additional resources Azure Disk Storage Class 4.2.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.2.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that it contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This allows using unformatted Azure volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.2.4. Machine sets that deploy machines with ultra disks using PVCs You can create a machine set running on Azure that deploys machines with ultra disks. Ultra disks are high-performance storage that are intended for use with the most demanding data workloads. Both the in-tree plugin and CSI driver support using PVCs to enable ultra disks. You can also deploy machines with ultra disks as data disks without creating a PVC. Additional resources Microsoft Azure ultra disks documentation Machine sets that deploy machines on ultra disks using CSI PVCs Machine sets that deploy machines on ultra disks as data disks 4.2.4.1. Creating machines with ultra disks by using machine sets You can deploy machines with ultra disks on Azure by editing your machine set YAML file. Prerequisites Have an existing Microsoft Azure cluster. 
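Before the ultra disk procedure that follows, here is a manifest form of the console options described in section 4.2.1 above. This is a sketch only: the class name and SKU choice are assumptions, and the parameter names follow the in-tree kubernetes.io/azure-disk plugin conventions, so verify them against your cluster version.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-example   # illustrative name
provisioner: kubernetes.io/azure-disk
parameters:
  skuName: Premium_LRS            # one of the storage account SKU tiers listed in section 4.2.1
  kind: Managed                   # the only kind supported by Red Hat, as noted above
reclaimPolicy: Delete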
Procedure Copy an existing Azure MachineSet custom resource (CR) and edit it by running the following command: USD oc edit machineset <machine-set-name> where <machine-set-name> is the machine set that you want to provision machines with ultra disks. Add the following lines in the positions indicated: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2 1 Specify a label to use to select a node that is created by this machine set. This procedure uses disk.ultrassd for this value. 2 These lines enable the use of ultra disks. Create a machine set using the updated configuration by running the following command: USD oc create -f <machine-set-name>.yaml Create a storage class that contains the following YAML definition: apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: "2000" 2 diskMbpsReadWrite: "320" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5 1 Specify the name of the storage class. This procedure uses ultra-disk-sc for this value. 2 Specify the number of IOPS for the storage class. 3 Specify the throughput in MBps for the storage class. 4 For Azure Kubernetes Service (AKS) version 1.21 or later, use disk.csi.azure.com . For earlier versions of AKS, use kubernetes.io/azure-disk . 5 Optional: Specify this parameter to wait for the creation of the pod that will use the disk. Create a persistent volume claim (PVC) to reference the ultra-disk-sc storage class that contains the following YAML definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3 1 Specify the name of the PVC. This procedure uses ultra-disk for this value. 2 This PVC references the ultra-disk-sc storage class. 3 Specify the size for the storage class. The minimum value is 4Gi . Create a pod that contains the following YAML definition: apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - "sleep" - "infinity" volumeMounts: - mountPath: "/mnt/azure" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2 1 Specify the label of the machine set that enables the use of ultra disks. This procedure uses disk.ultrassd for this value. 2 This pod references the ultra-disk PVC. Verification Validate that the machines are created by running the following command: USD oc get machines The machines should be in the Running state. For a machine that is running and has a node attached, validate the partition by running the following command: USD oc debug node/<node-name> -- chroot /host lsblk In this command, oc debug node/<node-name> starts a debugging shell on the node <node-name> and passes a command with -- . The passed command chroot /host provides access to the underlying host OS binaries, and lsblk shows the block devices that are attached to the host OS machine. steps To use an ultra disk from within a pod, create a workload that uses the mount point. 
Create a YAML file similar to the following example: apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: "http-server" volumeMounts: - name: lun0p1 mountPath: "/tmp" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd 4.2.4.2. Troubleshooting resources for machine sets that enable ultra disks Use the information in this section to understand and recover from issues you might encounter. 4.2.4.2.1. Unable to mount a persistent volume claim backed by an ultra disk If there is an issue mounting a persistent volume claim backed by an ultra disk, the pod becomes stuck in the ContainerCreating state and an alert is triggered. For example, if the additionalCapabilities.ultraSSDEnabled parameter is not set on the machine that backs the node that hosts the pod, the following error message appears: StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set. To resolve this issue, describe the pod by running the following command: USD oc -n <stuck_pod_namespace> describe pod <stuck_pod_name> 4.3. Persistent storage using Azure File OpenShift Container Platform supports Microsoft Azure File volumes. You can provision your OpenShift Container Platform cluster with persistent storage using Azure. Some familiarity with Kubernetes and Azure is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. You can provision Azure File volumes dynamically. Persistent volumes are not bound to a single project or namespace, and you can share them across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace, and can be requested by users for use in applications. Important High availability of storage in the infrastructure is left to the underlying storage provider. Important Azure File volumes use Server Message Block. Important OpenShift Container Platform 4.13 and later provides automatic migration for the Azure File in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Additional resources Azure Files 4.3.1. Create the Azure File share persistent volume claim To create the persistent volume claim, you must first define a Secret object that contains the Azure account and key. This secret is used in the PersistentVolume definition, and will be referenced by the persistent volume claim for use in applications. Prerequisites An Azure File share exists. The credentials to access this share, specifically the storage account and key, are available. Procedure Create a Secret object that contains the Azure File credentials: USD oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2 1 The Azure File storage account name. 2 The Azure File storage account key. 
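Equivalently to the oc create secret command above, the Secret can be expressed as a manifest. This is a sketch that uses the stringData convenience field so the values do not need to be base64-encoded by hand; the secret name and placeholder values are assumptions:
apiVersion: v1
kind: Secret
metadata:
  name: azure-file-secret                         # corresponds to <secret-name> in the command above
type: Opaque
stringData:
  azurestorageaccountname: <storage-account>      # the Azure File storage account name
  azurestorageaccountkey: <storage-account-key>   # the Azure File storage account key
The PersistentVolume definition in the next step references this object through its secretName field.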
Create a PersistentVolume object that references the Secret object you created: apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false 1 The name of the persistent volume. 2 The size of this persistent volume. 3 The name of the secret that contains the Azure File share credentials. 4 The name of the Azure File share. Create a PersistentVolumeClaim object that maps to the persistent volume you created: apiVersion: "v1" kind: "PersistentVolumeClaim" metadata: name: "claim1" 1 spec: accessModes: - "ReadWriteOnce" resources: requests: storage: "5Gi" 2 storageClassName: azure-file-sc 3 volumeName: "pv0001" 4 1 The name of the persistent volume claim. 2 The size of this persistent volume claim. 3 The name of the storage class that is used to provision the persistent volume. Specify the storage class used in the PersistentVolume definition. 4 The name of the existing PersistentVolume object that references the Azure File share. 4.3.2. Mount the Azure File share in a pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying Azure File share. Procedure Create a pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... volumeMounts: - mountPath: "/data" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3 1 The name of the pod. 2 The path to mount the Azure File share inside the pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the PersistentVolumeClaim object that has been previously created. 4.4. Persistent storage using Cinder OpenShift Container Platform supports OpenStack Cinder. Some familiarity with Kubernetes and OpenStack is assumed. Cinder volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform 4.11 and later provides automatic migration for the Cinder in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Additional resources For more information about how OpenStack Block Storage provides persistent block storage management for virtual hard drives, see OpenStack Cinder . 4.4.1. Manual provisioning with Cinder Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Prerequisites OpenShift Container Platform configured for Red Hat OpenStack Platform (RHOSP) Cinder volume ID 4.4.1.1. 
Creating the persistent volume You must define your persistent volume (PV) in an object definition before creating it in OpenShift Container Platform: Procedure Save your object definition to a file. cinder-persistentvolume.yaml apiVersion: "v1" kind: "PersistentVolume" metadata: name: "pv0001" 1 spec: capacity: storage: "5Gi" 2 accessModes: - "ReadWriteOnce" cinder: 3 fsType: "ext3" 4 volumeID: "f37a03aa-6212-4c62-a805-9ce139fab180" 5 1 The name of the volume that is used by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 Indicates cinder for Red Hat OpenStack Platform (RHOSP) Cinder volumes. 4 The file system that is created when the volume is mounted for the first time. 5 The Cinder volume to use. Important Do not change the fstype parameter value after the volume is formatted and provisioned. Changing this value can result in data loss and pod failure. Create the object definition file you saved in the step. USD oc create -f cinder-persistentvolume.yaml 4.4.1.2. Persistent volume formatting You can use unformatted Cinder volumes as PVs because OpenShift Container Platform formats them before the first use. Before OpenShift Container Platform mounts the volume and passes it to a container, the system checks that it contains a file system as specified by the fsType parameter in the PV definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. 4.4.1.3. Cinder volume security If you use Cinder PVs in your application, configure security for their deployment configurations. Prerequisites An SCC must be created that uses the appropriate fsGroup strategy. Procedure Create a service account and add it to the SCC: USD oc create serviceaccount <service_account> USD oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project> In your application's deployment configuration, provide the service account name and securityContext : apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7 1 The number of copies of the pod to run. 2 The label selector of the pod to run. 3 A template for the pod that the controller creates. 4 The labels on the pod. They must include labels from the label selector. 5 The maximum name length after expanding any parameters is 63 characters. 6 Specifies the service account you created. 7 Specifies an fsGroup for the pods. 4.5. Persistent storage using Fibre Channel OpenShift Container Platform supports Fibre Channel, allowing you to provision your OpenShift Container Platform cluster with persistent storage using Fibre channel volumes. Some familiarity with Kubernetes and Fibre Channel is assumed. Important Persistent storage using Fibre Channel is not supported on ARM architecture based infrastructures. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. 
Persistent volume claims are specific to a project or namespace and can be requested by users. Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources Using Fibre Channel devices 4.5.1. Provisioning To provision Fibre Channel volumes using the PersistentVolume API the following must be available: The targetWWNs (array of Fibre Channel target's World Wide Names). A valid LUN number. The filesystem type. A persistent volume and a LUN have a one-to-one mapping between them. Prerequisites Fibre Channel LUNs must exist in the underlying infrastructure. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4 1 World wide identifiers (WWIDs). Either FC wwids or a combination of FC targetWWNs and lun must be set, but not both simultaneously. The FC WWID identifier is recommended over the WWNs target because it is guaranteed to be unique for every storage device, and independent of the path that is used to access the device. The WWID identifier can be obtained by issuing a SCSI Inquiry to retrieve the Device Identification Vital Product Data ( page 0x83 ) or Unit Serial Number ( page 0x80 ). FC WWIDs are identified as /dev/disk/by-id/ to reference the data on the disk, even if the path to the device changes and even when accessing the device from different systems. 2 3 Fibre Channel WWNs are identified as /dev/disk/by-path/pci-<IDENTIFIER>-fc-0x<WWN>-lun-<LUN#> , but you do not need to provide any part of the path leading up to the WWN , including the 0x , and anything after, including the - (hyphen). Important Changing the value of the fstype parameter after the volume has been formatted and provisioned can result in data loss and pod failure. 4.5.1.1. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is mapped to a single persistent volume, and unique names must be used for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.5.1.2. Fibre Channel volume security Users request storage with a persistent volume claim. This claim only lives in the user's namespace, and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume across a namespace causes the pod to fail. Each Fibre Channel LUN must be accessible by all nodes in the cluster. 4.6. Persistent storage using FlexVolume Important FlexVolume is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. Out-of-tree Container Storage Interface (CSI) driver is the recommended way to write volume drivers in OpenShift Container Platform. Maintainers of FlexVolume drivers should implement a CSI driver and move users of FlexVolume to CSI. Users of FlexVolume should move their workloads to CSI driver. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. 
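Returning to the Fibre Channel quota discussion in section 4.5.1.1 above, a claim that requests a specific amount of storage and binds to the pv0001 volume defined in section 4.5.1 might look like the following sketch; the claim name is illustrative and the empty storageClassName is an assumption to keep a default storage class from intercepting the claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fc-claim          # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # matched against the 1Gi capacity of the pv0001 example above
  volumeName: pv0001      # bind directly to the Fibre Channel persistent volume defined in section 4.5.1
  storageClassName: ""    # assumed; prevents dynamic provisioning from satisfying the claim instead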
OpenShift Container Platform supports FlexVolume, an out-of-tree plugin that uses an executable model to interface with drivers. To use storage from a back-end that does not have a built-in plugin, you can extend OpenShift Container Platform through FlexVolume drivers and provide persistent storage to applications. Pods interact with FlexVolume drivers through the flexvolume in-tree plugin. Additional resources Expanding persistent volumes 4.6.1. About FlexVolume drivers A FlexVolume driver is an executable file that resides in a well-defined directory on all nodes in the cluster. OpenShift Container Platform calls the FlexVolume driver whenever it needs to mount or unmount a volume represented by a PersistentVolume object with flexVolume as the source. Important Attach and detach operations are not supported in OpenShift Container Platform for FlexVolume. 4.6.2. FlexVolume driver example The first command-line argument of the FlexVolume driver is always an operation name. Other parameters are specific to each operation. Most of the operations take a JavaScript Object Notation (JSON) string as a parameter. This parameter is a complete JSON string, and not the name of a file with the JSON data. The FlexVolume driver contains: All flexVolume.options . Some options from flexVolume prefixed by kubernetes.io/ , such as fsType and readwrite . The content of the referenced secret, if specified, prefixed by kubernetes.io/secret/ . FlexVolume driver JSON input example { "fooServer": "192.168.0.1:1234", 1 "fooVolumeName": "bar", "kubernetes.io/fsType": "ext4", 2 "kubernetes.io/readwrite": "ro", 3 "kubernetes.io/secret/<key name>": "<key value>", 4 "kubernetes.io/secret/<another key name>": "<another key value>", } 1 All options from flexVolume.options . 2 The value of flexVolume.fsType . 3 ro / rw based on flexVolume.readOnly . 4 All keys and their values from the secret referenced by flexVolume.secretRef . OpenShift Container Platform expects JSON data on standard output of the driver. When not specified, the output describes the result of the operation. FlexVolume driver default output example { "status": "<Success/Failure/Not supported>", "message": "<Reason for success/failure>" } Exit code of the driver should be 0 for success and 1 for error. Operations should be idempotent, which means that the mounting of an already mounted volume should result in a successful operation. 4.6.3. Installing FlexVolume drivers FlexVolume drivers that are used to extend OpenShift Container Platform are executed only on the node. To implement FlexVolumes, a list of operations to call and the installation path are all that is required. Prerequisites FlexVolume drivers must implement these operations: init Initializes the driver. It is called during initialization of all nodes. Arguments: none Executed on: node Expected output: default JSON mount Mounts a volume to directory. This can include anything that is necessary to mount the volume, including finding the device and then mounting the device. Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmount Unmounts a volume from a directory. This can include anything that is necessary to clean up the volume after unmounting. Arguments: <mount-dir> Executed on: node Expected output: default JSON mountdevice Mounts a volume's device to a directory where individual pods can then bind mount. This call-out does not pass "secrets" specified in the FlexVolume spec. If your driver requires secrets, do not implement this call-out. 
Arguments: <mount-dir> <json> Executed on: node Expected output: default JSON unmountdevice Unmounts a volume's device from a directory. Arguments: <mount-dir> Executed on: node Expected output: default JSON All other operations should return JSON with {"status": "Not supported"} and exit code 1 . Procedure To install the FlexVolume driver: Ensure that the executable file exists on all nodes in the cluster. Place the executable file at the volume plugin path: /etc/kubernetes/kubelet-plugins/volume/exec/<vendor>~<driver>/<driver> . For example, to install the FlexVolume driver for the storage foo , place the executable file at: /etc/kubernetes/kubelet-plugins/volume/exec/openshift.com~foo/foo . 4.6.4. Consuming storage using FlexVolume drivers Each PersistentVolume object in OpenShift Container Platform represents one storage asset in the storage back-end, such as a volume. Procedure Use the PersistentVolume object to reference the installed storage. Persistent volume object definition using FlexVolume drivers example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: "ext4" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar 1 The name of the volume. This is how it is identified through persistent volume claims or from pods. This name can be different from the name of the volume on back-end storage. 2 The amount of storage allocated to this volume. 3 The name of the driver. This field is mandatory. 4 The file system that is present on the volume. This field is optional. 5 The reference to a secret. Keys and values from this secret are provided to the FlexVolume driver on invocation. This field is optional. 6 The read-only flag. This field is optional. 7 The additional options for the FlexVolume driver. In addition to the flags specified by the user in the options field, the following flags are also passed to the executable: Note Secrets are passed only to mount or unmount call-outs. 4.7. Persistent storage using GCE Persistent Disk OpenShift Container Platform supports GCE Persistent Disk volumes (gcePD). You can provision your OpenShift Container Platform cluster with persistent storage using GCE. Some familiarity with Kubernetes and GCE is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. GCE Persistent Disk volumes can be provisioned dynamically. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important OpenShift Container Platform 4.12 and later provides automatic migration for the GCE Persist Disk in-tree volume plugin to its equivalent CSI driver. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . Important High availability of storage in the infrastructure is left to the underlying storage provider. Additional resources GCE Persistent Disk 4.7.1. Creating the GCE storage class Storage classes are used to differentiate and delineate storage levels and usages. 
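As an illustration of the GCE storage class introduced here, the following is a minimal sketch of a manifest. The class name and the pd-ssd type are assumptions; the provisioner shown is the in-tree kubernetes.io/gce-pd plugin, and clusters that have migrated to the CSI driver may report a different provisioner name.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gce-pd-example              # illustrative name
provisioner: kubernetes.io/gce-pd   # in-tree gcePD plugin; the CSI equivalent may differ after migration
parameters:
  type: pd-ssd                      # assumed disk type; pd-standard is another common choice
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer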
By defining a storage class, users can obtain dynamically provisioned persistent volumes. 4.7.2. Creating the persistent volume claim Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the desired options on the page that appears. Select the previously-created storage class from the drop-down menu. Enter a unique name for the storage claim. Select the access mode. This selection determines the read and write access for the storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.7.3. Volume format Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system as specified by the fsType parameter in the persistent volume definition. If the device is not formatted with the file system, all data from the device is erased and the device is automatically formatted with the given file system. This verification enables you to use unformatted GCE volumes as persistent volumes, because OpenShift Container Platform formats them before the first use. 4.8. Persistent storage using iSCSI You can provision your OpenShift Container Platform cluster with persistent storage using iSCSI . Some familiarity with Kubernetes and iSCSI is assumed. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Important High-availability of storage in the infrastructure is left to the underlying storage provider. Important When you use iSCSI on Amazon Web Services, you must update the default security policy to include TCP traffic between nodes on the iSCSI ports. By default, they are ports 860 and 3260 . Important Users must ensure that the iSCSI initiator is already configured on all OpenShift Container Platform nodes by installing the iscsi-initiator-utils package and configuring their initiator name in /etc/iscsi/initiatorname.iscsi . The iscsi-initiator-utils package is already installed on deployments that use Red Hat Enterprise Linux CoreOS (RHCOS). For more information, see Managing Storage Devices . 4.8.1. Provisioning Verify that the storage exists in the underlying infrastructure before mounting it as a volume in OpenShift Container Platform. All that is required for the iSCSI is the iSCSI target portal, a valid iSCSI Qualified Name (IQN), a valid LUN number, the filesystem type, and the PersistentVolume API. PersistentVolume object definition apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4' 4.8.2. Enforcing disk quotas Use LUN partitions to enforce disk quotas and size constraints. Each LUN is one persistent volume. Kubernetes enforces unique names for persistent volumes. Enforcing quotas in this way allows the end user to request persistent storage by a specific amount (for example, 10Gi ) and be matched with a corresponding volume of equal or greater capacity. 4.8.3. iSCSI volume security Users request storage with a PersistentVolumeClaim object. 
This claim only lives in the user's namespace and can only be referenced by a pod within that same namespace. Any attempt to access a persistent volume claim across a namespace causes the pod to fail. Each iSCSI LUN must be accessible by all nodes in the cluster. 4.8.3.1. Challenge Handshake Authentication Protocol (CHAP) configuration Optionally, OpenShift Container Platform can use CHAP to authenticate itself to iSCSI targets: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3 1 Enable CHAP authentication of iSCSI discovery. 2 Enable CHAP authentication of iSCSI session. 3 Specify name of Secrets object with user name + password. This Secret object must be available in all namespaces that can use the referenced volume. 4.8.4. iSCSI multipathing For iSCSI-based storage, you can configure multiple paths by using the same IQN for more than one target portal IP address. Multipathing ensures access to the persistent volume when one or more of the components in a path fail. To specify multi-paths in the pod specification, use the portals field. For example: apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false 1 Add additional target portals using the portals field. 4.8.5. iSCSI custom initiator IQN Configure the custom initiator iSCSI Qualified Name (IQN) if the iSCSI targets are restricted to certain IQNs, but the nodes that the iSCSI PVs are attached to are not guaranteed to have these IQNs. To specify a custom initiator IQN, use initiatorName field. apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false 1 Specify the name of the initiator. 4.9. Persistent storage using NFS OpenShift Container Platform clusters can be provisioned with persistent storage using NFS. Persistent volumes (PVs) and persistent volume claims (PVCs) provide a convenient method for sharing a volume across a project. While the NFS-specific information contained in a PV definition could also be defined directly in a Pod definition, doing so does not create the volume as a distinct cluster resource, making the volume more susceptible to conflicts. Additional resources Mounting NFS shares 4.9.1. Provisioning Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. To provision NFS volumes, a list of NFS servers and export paths are all that is required. Procedure Create an object definition for the PV: apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7 1 The name of the volume. This is the PV identity in various oc <command> pod commands. 2 The amount of storage allocated to this volume. 
3 Though this appears to be related to controlling access to the volume, it is actually used similarly to labels and used to match a PVC to a PV. Currently, no access rules are enforced based on the accessModes . 4 The volume type being used, in this case the nfs plugin. 5 The path that is exported by the NFS server. 6 The hostname or IP address of the NFS server. 7 The reclaim policy for the PV. This defines what happens to a volume when released. Note Each NFS volume must be mountable by all schedulable nodes in the cluster. Verify that the PV was created: USD oc get pv Example output NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s Create a persistent volume claim that binds to the new PV: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: "" 1 The access modes do not enforce security, but rather act as labels to match a PV to a PVC. 2 This claim looks for PVs offering 5Gi or greater capacity. Verify that the persistent volume claim was created: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m 4.9.2. Enforcing disk quotas You can use disk partitions to enforce disk quotas and size constraints. Each partition can be its own export. Each export is one PV. OpenShift Container Platform enforces unique names for PVs, but the uniqueness of the NFS volume's server and path is up to the administrator. Enforcing quotas in this way allows the developer to request persistent storage by a specific amount, such as 10Gi, and be matched with a corresponding volume of equal or greater capacity. 4.9.3. NFS volume security This section covers NFS volume security, including matching permissions and SELinux considerations. The user is expected to understand the basics of POSIX permissions, process UIDs, supplemental groups, and SELinux. Developers request NFS storage by referencing either a PVC by name or the NFS volume plugin directly in the volumes section of their Pod definition. The /etc/exports file on the NFS server contains the accessible NFS directories. The target NFS directory has POSIX owner and group IDs. The OpenShift Container Platform NFS plugin mounts the container's NFS directory with the same POSIX ownership and permissions found on the exported NFS directory. However, the container is not run with its effective UID equal to the owner of the NFS mount, which is the desired behavior. As an example, if the target NFS directory appears on the NFS server as: USD ls -lZ /opt/nfs -d Example output drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs USD id nfsnobody Example output uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody) Then the container must match SELinux labels, and either run with a UID of 65534 , the nfsnobody owner, or with 5555 in its supplemental groups to access the directory. Note The owner ID of 65534 is used as an example. Even though NFS's root_squash maps root , uid 0 , to nfsnobody , uid 65534 , NFS exports can have arbitrary owner IDs. Owner 65534 is not required for NFS exports. 4.9.3.1. Group IDs The recommended way to handle NFS access, assuming it is not an option to change permissions on the NFS export, is to use supplemental groups. Supplemental groups in OpenShift Container Platform are used for shared storage, of which NFS is an example. 
In contrast, block storage such as iSCSI uses the fsGroup SCC strategy and the fsGroup value in the securityContext of the pod. Note To gain access to persistent storage, it is generally preferable to use supplemental group IDs versus user IDs. Because the group ID on the example target NFS directory is 5555 , the pod can define that group ID using supplementalGroups under the securityContext definition of the pod. For example: spec: containers: - name: ... securityContext: 1 supplementalGroups: [5555] 2 1 securityContext must be defined at the pod level, not under a specific container. 2 An array of GIDs defined for the pod. In this case, there is one element in the array. Additional GIDs would be comma-separated. Assuming there are no custom SCCs that might satisfy the pod requirements, the pod likely matches the restricted SCC. This SCC has the supplementalGroups strategy set to RunAsAny , meaning that any supplied group ID is accepted without range checking. As a result, the above pod passes admissions and is launched. However, if group ID range checking is desired, a custom SCC is the preferred solution. A custom SCC can be created such that minimum and maximum group IDs are defined, group ID range checking is enforced, and a group ID of 5555 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.9.3.2. User IDs User IDs can be defined in the container image or in the Pod definition. Note It is generally preferable to use supplemental group IDs to gain access to persistent storage versus using user IDs. In the example target NFS directory shown above, the container needs its UID set to 65534 , ignoring group IDs for the moment, so the following can be added to the Pod definition: spec: containers: 1 - name: ... securityContext: runAsUser: 65534 2 1 Pods contain a securityContext definition specific to each container and a pod's securityContext which applies to all containers defined in the pod. 2 65534 is the nfsnobody user. Assuming that the project is default and the SCC is restricted , the user ID of 65534 as requested by the pod is not allowed. Therefore, the pod fails for the following reasons: It requests 65534 as its user ID. All SCCs available to the pod are examined to see which SCC allows a user ID of 65534 . While all policies of the SCCs are checked, the focus here is on user ID. Because all available SCCs use MustRunAsRange for their runAsUser strategy, UID range checking is required. 65534 is not included in the SCC or project's user ID range. It is generally considered a good practice not to modify the predefined SCCs. The preferred way to fix this situation is to create a custom SCC A custom SCC can be created such that minimum and maximum user IDs are defined, UID range checking is still enforced, and the UID of 65534 is allowed. Note To use a custom SCC, you must first add it to the appropriate service account. For example, use the default service account in the given project unless another has been specified on the Pod specification. 4.9.3.3. SELinux Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems are configured to use SELinux on remote NFS servers by default. For non-RHEL and non-RHCOS systems, SELinux does not allow writing from a pod to a remote NFS server. The NFS volume mounts correctly but it is read-only. 
You will need to enable the correct SELinux permissions by using the following procedure. Prerequisites The container-selinux package must be installed. This package provides the virt_use_nfs SELinux boolean. Procedure Enable the virt_use_nfs boolean using the following command. The -P option makes this boolean persistent across reboots. # setsebool -P virt_use_nfs 1 4.9.3.4. Export settings To enable arbitrary container users to read and write the volume, each exported volume on the NFS server should conform to the following conditions: Every export must be exported using the following format: /<example_fs> *(rw,root_squash) The firewall must be configured to allow traffic to the mount point. For NFSv4, configure the default port 2049 ( nfs ). NFSv4 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT For NFSv3, there are three ports to configure: 2049 ( nfs ), 20048 ( mountd ), and 111 ( portmapper ). NFSv3 # iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT # iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT The NFS export and directory must be set up so that they are accessible by the target pods. Either set the export to be owned by the container's primary UID, or supply the pod group access using supplementalGroups , as shown in the group IDs above. 4.9.4. Reclaiming resources NFS implements the OpenShift Container Platform Recyclable plugin interface. Automatic processes handle reclamation tasks based on policies set on each persistent volume. By default, PVs are set to Retain . Once claim to a PVC is deleted, and the PV is released, the PV object should not be reused. Instead, a new PV should be created with the same basic volume details as the original. For example, the administrator creates a PV named nfs1 : apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" The user creates PVC1 , which binds to nfs1 . The user then deletes PVC1 , releasing claim to nfs1 . This results in nfs1 being Released . If the administrator wants to make the same NFS share available, they should create a new PV with the same NFS server details, but a different PV name: apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: "/" Deleting the original PV and re-creating it with the same name is discouraged. Attempting to manually change the status of a PV from Released to Available causes errors and potential data loss. 4.9.5. Additional configuration and troubleshooting Depending on what version of NFS is being used and how it is configured, there may be additional configuration steps needed for proper export and security mapping. The following are some that may apply: NFSv4 mount incorrectly shows all files with ownership of nobody:nobody Could be attributed to the ID mapping settings, found in /etc/idmapd.conf on your NFS. See this Red Hat Solution . Disabling ID mapping on NFSv4 On both the NFS client and server, run: # echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping 4.10. Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. 
As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation . Important OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide . 4.11. Persistent storage using VMware vSphere volumes OpenShift Container Platform allows use of VMware vSphere's Virtual Machine Disk (VMDK) volumes. You can provision your OpenShift Container Platform cluster with persistent storage using VMware vSphere. Some familiarity with Kubernetes and VMware vSphere is assumed. VMware vSphere volumes can be provisioned dynamically. OpenShift Container Platform creates the disk in vSphere and attaches this disk to the correct image. Note OpenShift Container Platform provisions new volumes as independent persistent disks that can freely attach and detach the volume on any node in the cluster. Consequently, you cannot back up volumes that use snapshots, or restore volumes from snapshots. See Snapshot Limitations for more information. The Kubernetes persistent volume framework allows administrators to provision a cluster with persistent storage and gives users a way to request those resources without having any knowledge of the underlying infrastructure. Persistent volumes are not bound to a single project or namespace; they can be shared across the OpenShift Container Platform cluster. Persistent volume claims are specific to a project or namespace and can be requested by users. Important For vSphere: For new installations of OpenShift Container Platform 4.13, or later, automatic migration is enabled by default. Updating to OpenShift Container Platform 4.14 and later also provides automatic migration. CSI automatic migration should be seamless. Migration does not change how you use all existing API objects, such as persistent volumes, persistent volume claims, and storage classes. For more information about migration, see CSI automatic migration . When updating from OpenShift Container Platform 4.12, or earlier, to 4.13, automatic CSI migration for vSphere only occurs if you opt in. If you do not opt in, OpenShift Container Platform defaults to using the in-tree (non-CSI) plugin to provision vSphere storage. Carefully review the indicated consequences before opting in to migration . Additional resources VMware vSphere 4.11.1. Dynamically provisioning VMware vSphere volumes Dynamically provisioning VMware vSphere volumes is the recommended method. 4.11.2. Prerequisites An OpenShift Container Platform cluster installed on a VMware vSphere version that meets the requirements for the components that you use. See Installing a cluster on vSphere for information about vSphere version support. You can use either of the following procedures to dynamically provision these volumes using the default storage class. 4.11.2.1. Dynamically provisioning VMware vSphere volumes using the UI OpenShift Container Platform installs a default storage class, named thin , that uses the thin disk format for provisioning volumes. 
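A storage class of this kind resembles the following definition. Treat it as a sketch rather than the exact object on your cluster: the provisioner depends on whether the in-tree plugin or the vSphere CSI driver backs the class.

Example thin storage class

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: thin
provisioner: kubernetes.io/vsphere-volume   # in-tree provisioner; CSI-backed classes use csi.vsphere.vmware.com
parameters:
  diskformat: thin                          # provisions thin-format VMDKs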
Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure In the OpenShift Container Platform console, click Storage Persistent Volume Claims . In the persistent volume claims overview, click Create Persistent Volume Claim . Define the required options on the resulting page. Select the thin storage class. Enter a unique name for the storage claim. Select the access mode to determine the read and write access for the created storage claim. Define the size of the storage claim. Click Create to create the persistent volume claim and generate a persistent volume. 4.11.2.2. Dynamically provisioning VMware vSphere volumes using the CLI OpenShift Container Platform installs a default StorageClass, named thin , that uses the thin disk format for provisioning volumes. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure (CLI) You can define a VMware vSphere PersistentVolumeClaim by creating a file, pvc.yaml , with the following contents: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml 4.11.3. Statically provisioning VMware vSphere volumes To statically provision VMware vSphere volumes you must create the virtual machine disks for reference by the persistent volume framework. Prerequisites Storage must exist in the underlying infrastructure before it can be mounted as a volume in OpenShift Container Platform. Procedure Create the virtual machine disks. Virtual machine disks (VMDKs) must be created manually before statically provisioning VMware vSphere volumes. Use either of the following methods: Create using vmkfstools . Access ESX through Secure Shell (SSH) and then use following command to create a VMDK volume: USD vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk Create using vmware-diskmanager : USD shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk Create a persistent volume that references the VMDKs. Create a file, pv1.yaml , with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: "[datastore1] volumes/myDisk" 4 fsType: ext4 5 1 The name of the volume. This name is how it is identified by persistent volume claims or pods. 2 The amount of storage allocated to this volume. 3 The volume type used, with vsphereVolume for vSphere volumes. The label is used to mount a vSphere VMDK volume into pods. The contents of a volume are preserved when it is unmounted. The volume type supports VMFS and VSAN datastore. 4 The existing VMDK volume to use. If you used vmkfstools , you must enclose the datastore name in square brackets, [] , in the volume definition, as shown previously. 5 The file system type to mount. For example, ext4, xfs, or other file systems. 
Important Changing the value of the fsType parameter after the volume is formatted and provisioned can result in data loss and pod failure. Create the PersistentVolume object from the file: USD oc create -f pv1.yaml Create a persistent volume claim that maps to the persistent volume you created in the step. Create a file, pvc1.yaml , with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: "1Gi" 3 volumeName: pv1 4 1 A unique name that represents the persistent volume claim. 2 The access mode of the persistent volume claim. With ReadWriteOnce, the volume can be mounted with read and write permissions by a single node. 3 The size of the persistent volume claim. 4 The name of the existing persistent volume. Create the PersistentVolumeClaim object from the file: USD oc create -f pvc1.yaml 4.11.3.1. Formatting VMware vSphere volumes Before OpenShift Container Platform mounts the volume and passes it to a container, it checks that the volume contains a file system that is specified by the fsType parameter value in the PersistentVolume (PV) definition. If the device is not formatted with the file system, all data from the device is erased, and the device is automatically formatted with the specified file system. Because OpenShift Container Platform formats them before the first use, you can use unformatted vSphere volumes as PVs. 4.12. Persistent storage using local storage 4.12.1. Local storage overview You can use any of the following solutions to provision local storage: HostPath Provisioner (HPP) Local Storage Operator (LSO) Logical Volume Manager (LVM) Storage Warning These solutions support provisioning only node-local storage. The workloads are bound to the nodes that provide the storage. If the node becomes unavailable, the workload also becomes unavailable. To maintain workload availability despite node failures, you must ensure storage data replication through active or passive replication mechanisms. 4.12.1.1. Overview of HostPath Provisioner functionality You can perform the following actions using HostPath Provisioner (HPP): Map the host filesystem paths to storage classes for provisioning local storage. Statically create storage classes to configure filesystem paths on a node for storage consumption. Statically provision Persistent Volumes (PVs) based on the storage class. Create workloads and PersistentVolumeClaims (PVCs) while being aware of the underlying storage topology. Note HPP is available in upstream Kubernetes. However, it is not recommended to use HPP from upstream Kubernetes. 4.12.1.2. Overview of Local Storage Operator functionality You can perform the following actions using Local Storage Operator (LSO): Assign the storage devices (disks or partitions) to the storage classes without modifying the device configuration. Statically provision PVs and storage classes by configuring the LocalVolume custom resource (CR). Create workloads and PVCs while being aware of the underlying storage topology. Note LSO is developed and delivered by Red Hat. 4.12.1.3. Overview of LVM Storage functionality You can perform the following actions using Logical Volume Manager (LVM) Storage: Configure storage devices (disks or partitions) as lvm2 volume groups and expose the volume groups as storage classes. Create workloads and request storage by using PVCs without considering the node topology. 
LVM Storage uses the TopoLVM CSI driver to dynamically allocate storage space to the nodes in the topology and provision PVs. Note LVM Storage is developed and maintained by Red Hat. The CSI driver provided with LVM Storage is the upstream project "topolvm". 4.12.1.4. Comparison of LVM Storage, LSO, and HPP The following sections compare the functionalities provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage. 4.12.1.4.1. Comparison of the support for storage types and filesystems The following table compares the support for storage types and filesystems provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage: Table 4.1. Comparison of the support for storage types and filesystems Functionality LVM Storage LSO HPP Support for block storage Yes Yes No Support for file storage Yes Yes Yes Support for object storage [1] No No No Available filesystems ext4 , xfs ext4 , xfs Any mounted system available on the node is supported. None of the solutions (LVM Storage, LSO, and HPP) provide support for object storage. Therefore, if you want to use object storage, you need an S3 object storage solution, such as MultiClusterGateway from the Red Hat OpenShift Data Foundation. All of the solutions can serve as underlying storage providers for the S3 object storage solutions. 4.12.1.4.2. Comparison of the support for core functionalities The following table compares how LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) support core functionalities for provisioning local storage: Table 4.2. Comparison of the support for core functionalities Functionality LVM Storage LSO HPP Support for automatic file system formatting Yes Yes N/A Support for dynamic provisioning Yes No No Support for using software Redundant Array of Independent Disks (RAID) arrays Yes Supported on 4.15 and later. Yes Yes Support for transparent disk encryption Yes Supported on 4.16 and later. Yes Yes Support for volume based disk encryption No No No Support for disconnected installation Yes Yes Yes Support for PVC expansion Yes No No Support for volume snapshots and volume clones Yes No No Support for thin provisioning Yes Devices are thin-provisioned by default. Yes You can configure the devices to point to the thin-provisioned volumes Yes You can configure a path to point to the thin-provisioned volumes. Support for automatic disk discovery and setup Yes Automatic disk discovery is available during installation and runtime. You can also dynamically add the disks to the LVMCluster custom resource (CR) to increase the storage capacity of the existing storage classes. Technology Preview Automatic disk discovery is available during installation. No 4.12.1.4.3. Comparison of performance and isolation capabilities The following table compares the performance and isolation capabilities of LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) in provisioning local storage. Table 4.3. Comparison of performance and isolation capabilities Functionality LVM Storage LSO HPP Performance I/O speed is shared for all workloads that use the same storage class. Block storage allows direct I/O operations. Thin provisioning can affect the performance. I/O depends on the LSO configuration. Block storage allows direct I/O operations. I/O speed is shared for all workloads that use the same storage class. The restrictions imposed by the underlying filesystem can affect the I/O speed. 
Isolation boundary [1] LVM Logical Volume (LV) It provides higher level of isolation compared to HPP. LVM Logical Volume (LV) It provides higher level of isolation compared to HPP Filesystem path It provides lower level of isolation compared to LSO and LVM Storage. Isolation boundary refers to the level of separation between different workloads or applications that use local storage resources. 4.12.1.4.4. Comparison of the support for additional functionalities The following table compares the additional features provided by LVM Storage, Local Storage Operator (LSO), and HostPath Provisioner (HPP) to provision local storage: Table 4.4. Comparison of the support for additional functionalities Functionality LVM Storage LSO HPP Support for generic ephemeral volumes Yes No No Support for CSI inline ephemeral volumes No No No Support for storage topology Yes Supports CSI node topology Yes LSO provides partial support for storage topology through node tolerations. No Support for ReadWriteMany (RWX) access mode [1] No No No All of the solutions (LVM Storage, LSO, and HPP) have the ReadWriteOnce (RWO) access mode. RWO access mode allows access from multiple pods on the same node. 4.12.2. Persistent storage using local volumes OpenShift Container Platform can be provisioned with persistent storage by using local volumes. Local persistent volumes allow you to access local storage devices, such as a disk or partition, by using the standard persistent volume claim interface. Local volumes can be used without manually scheduling pods to nodes because the system is aware of the volume node constraints. However, local volumes are still subject to the availability of the underlying node and are not suitable for all applications. Note Local volumes can only be used as a statically created persistent volume. 4.12.2.1. Installing the Local Storage Operator The Local Storage Operator is not installed in OpenShift Container Platform by default. Use the following procedure to install and configure this Operator to enable local volumes in your cluster. Prerequisites Access to the OpenShift Container Platform web console or command-line interface (CLI). Procedure Create the openshift-local-storage project: USD oc adm new-project openshift-local-storage Optional: Allow local storage creation on infrastructure nodes. You might want to use the Local Storage Operator to create volumes on infrastructure nodes in support of components such as logging and monitoring. You must adjust the default node selector so that the Local Storage Operator includes the infrastructure nodes, and not just worker nodes. To block the Local Storage Operator from inheriting the cluster-wide default selector, enter the following command: USD oc annotate namespace openshift-local-storage openshift.io/node-selector='' Optional: Allow local storage to run on the management pool of CPUs in single-node deployment. Use the Local Storage Operator in single-node deployments and allow the use of CPUs that belong to the management pool. Perform this step on single-node installations that use management workload partitioning. To allow Local Storage Operator to run on the management CPU pool, run following commands: USD oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management' From the UI To install the Local Storage Operator from the web console, follow these steps: Log in to the OpenShift Container Platform web console. Navigate to Operators OperatorHub . 
Type Local Storage into the filter box to locate the Local Storage Operator. Click Install . On the Install Operator page, select A specific namespace on the cluster . Select openshift-local-storage from the drop-down menu. Adjust the values for Update Channel and Approval Strategy to the values that you want. Click Install . Once finished, the Local Storage Operator will be listed in the Installed Operators section of the web console. From the CLI Install the Local Storage Operator from the CLI. Create an object YAML file to define an Operator group and subscription for the Local Storage Operator, such as openshift-local-storage.yaml : Example openshift-local-storage.yaml apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: stable installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace 1 The user approval policy for an install plan. Create the Local Storage Operator object by entering the following command: USD oc apply -f openshift-local-storage.yaml At this point, the Operator Lifecycle Manager (OLM) is now aware of the Local Storage Operator. A ClusterServiceVersion (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Verify local storage installation by checking that all pods and the Local Storage Operator have been created: Check that all the required pods have been created: USD oc -n openshift-local-storage get pods Example output NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m Check the ClusterServiceVersion (CSV) YAML manifest to see that the Local Storage Operator is available in the openshift-local-storage project: USD oc get csvs -n openshift-local-storage Example output NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded After all checks have passed, the Local Storage Operator is installed successfully. 4.12.2.2. Provisioning local volumes by using the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by the Local Storage Operator. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Prerequisites The Local Storage Operator is installed. You have a local disk that meets the following conditions: It is attached to a node. It is not mounted. It does not contain partitions. Procedure Create the local volume resource. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple persistent volumes (PVs). Example: Filesystem apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: "local-sc" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7 1 The namespace where the Local Storage Operator is installed. 
2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 4 The volume mode, either Filesystem or Block , that defines the type of local volumes. Note A raw block volume ( volumeMode: Block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. 5 The file system that is created when the local volume is mounted for the first time. 6 The path containing a list of local storage devices to choose from. 7 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as /dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk can not be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition. Example: Block apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: "local-sc" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6 1 The namespace where the Local Storage Operator is installed. 2 Optional: A node selector containing a list of nodes where the local storage volumes are attached. This example uses the node hostnames, obtained from oc get node . If a value is not defined, then the Local Storage Operator will attempt to find matching disks on all available nodes. 3 The name of the storage class to use when creating persistent volume objects. 4 The volume mode, either Filesystem or Block , that defines the type of local volumes. 5 The path containing a list of local storage devices to choose from. 6 Replace this value with your actual local disks filepath to the LocalVolume resource by-id , such as dev/disk/by-id/wwn . PVs are created for these local disks when the provisioner is deployed successfully. Note If you are running OpenShift Container Platform with RHEL KVM, you must assign a serial number to your VM disk. Otherwise, the VM disk can not be identified after reboot. You can use the virsh edit <VM> command to add the <serial>mydisk</serial> definition. Create the local volume resource in your OpenShift Container Platform cluster. 
Specify the file you just created: USD oc create -f <local-volume>.yaml Verify that the provisioner was created and that the corresponding daemon sets were created: USD oc get all -n openshift-local-storage Example output NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m Note the desired and current number of daemon set processes. A desired count of 0 indicates that the label selectors were invalid. Verify that the persistent volumes were created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Important Editing the LocalVolume object does not change the fsType or volumeMode of existing persistent volumes because doing so might result in a destructive operation. 4.12.2.3. Provisioning local volumes without the Local Storage Operator Local volumes cannot be created by dynamic provisioning. Instead, persistent volumes can be created by defining the persistent volume (PV) in an object definition. The local volume provisioner looks for any file system or block volume devices at the paths specified in the defined resource. Important Manual provisioning of PVs includes the risk of potential data leaks across PV reuse when PVCs are deleted. The Local Storage Operator is recommended for automating the life cycle of devices when provisioning local PVs. Prerequisites Local disks are attached to the OpenShift Container Platform nodes. Procedure Define the PV. Create a file, such as example-pv-filesystem.yaml or example-pv-block.yaml , with the PersistentVolume object definition. This resource must define the nodes and paths to the local volumes. Note Do not use different storage class names for the same device. Doing so will create multiple PVs. example-pv-filesystem.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from, or a directory. You can only specify a directory with Filesystem volumeMode . Note A raw block volume ( volumeMode: block ) is not formatted with a file system. Use this mode only if any application running on the pod can use raw block devices. 
example-pv-block.yaml apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node 1 The volume mode, either Filesystem or Block , that defines the type of PVs. 2 The name of the storage class to use when creating PV resources. Be sure to use a storage class that uniquely identifies this set of PVs. 3 The path containing a list of local storage devices to choose from. Create the PV resource in your OpenShift Container Platform cluster. Specify the file you just created: USD oc create -f <example-pv>.yaml Verify that the local PV was created: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-sc 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-sc 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-sc 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-sc 12h 4.12.2.4. Creating the local volume persistent volume claim Local volumes must be statically created as a persistent volume claim (PVC) to be accessed by the pod. Prerequisites Persistent volumes have been created using the local volume provisioner. Procedure Create the PVC using the corresponding storage class: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4 1 Name of the PVC. 2 The type of the PVC. Defaults to Filesystem . 3 The amount of storage available to the PVC. 4 Name of the storage class required by the claim. Create the PVC in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pvc>.yaml 4.12.2.5. Attach the local claim After a local volume has been mapped to a persistent volume claim it can be specified inside of a resource. Prerequisites A persistent volume claim exists in the same namespace. Procedure Include the defined claim in the resource spec. The following example declares the persistent volume claim inside a pod: apiVersion: v1 kind: Pod spec: # ... containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3 # ... 1 The name of the volume to mount. 2 The path inside the pod where the volume is mounted. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 The name of the existing persistent volume claim to use. Create the resource in the OpenShift Container Platform cluster, specifying the file you just created: USD oc create -f <local-pod>.yaml 4.12.2.6. Automating discovery and provisioning for local storage devices The Local Storage Operator automates local storage discovery and provisioning. With this feature, you can simplify installation when dynamic provisioning is not available during deployment, such as with bare metal, VMware, or AWS store instances with attached devices. Important Automatic discovery and provisioning is a Technology Preview feature only. 
Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important Automatic discovery and provisioning is fully supported when used to deploy Red Hat OpenShift Data Foundation on-premise or with platform-agnostic deployment. Use the following procedure to automatically discover local devices, and to automatically provision local volumes for selected devices. Warning Use the LocalVolumeSet object with caution. When you automatically provision persistent volumes (PVs) from local disks, the local PVs might claim all devices that match. If you are using a LocalVolumeSet object, make sure the Local Storage Operator is the only entity managing local devices on the node. Creating multiple instances of a LocalVolumeSet that target a node more than once is not supported. Prerequisites You have cluster administrator permissions. You have installed the Local Storage Operator. You have attached local disks to OpenShift Container Platform nodes. You have access to the OpenShift Container Platform web console and the oc command-line interface (CLI). Procedure To enable automatic discovery of local devices from the web console: Click Operators Installed Operators . In the openshift-local-storage namespace, click Local Storage . Click the Local Volume Discovery tab. Click Create Local Volume Discovery and then select either Form view or YAML view . Configure the LocalVolumeDiscovery object parameters. Click Create . The Local Storage Operator creates a local volume discovery instance named auto-discover-devices . To display a continuous list of available devices on a node: Log in to the OpenShift Container Platform web console. Navigate to Compute Nodes . Click the node name that you want to open. The "Node Details" page is displayed. Select the Disks tab to display the list of the selected devices. The device list updates continuously as local disks are added or removed. You can filter the devices by name, status, type, model, capacity, and mode. To automatically provision local volumes for the discovered devices from the web console: Navigate to Operators Installed Operators and select Local Storage from the list of Operators. Select Local Volume Set Create Local Volume Set . Enter a volume set name and a storage class name. Choose All nodes or Select nodes to apply filters accordingly. Note Only worker nodes are available, regardless of whether you filter using All nodes or Select nodes . Select the disk type, mode, size, and limit you want to apply to the local volume set, and click Create . A message displays after several minutes, indicating that the "Operator reconciled successfully." 
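If you prefer to manage discovery from the CLI as well, the discovery instance can be expressed as a custom resource. The following is a minimal sketch; the node names are placeholders, and you should verify the fields against the LocalVolumeDiscovery CRD installed on your cluster. Omitting nodeSelector discovers devices on all nodes.

Example local-volume-discovery.yaml

apiVersion: local.storage.openshift.io/v1alpha1
kind: LocalVolumeDiscovery
metadata:
  name: auto-discover-devices          # the name used by the web console flow
  namespace: openshift-local-storage
spec:
  nodeSelector:                        # optional: limit discovery to specific nodes
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-0
        - worker-1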
Alternatively, to provision local volumes for the discovered devices from the CLI: Create an object YAML file to define the local volume set, such as local-volume-set.yaml , as shown in the following example: apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: local-sc 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM 1 Determines the storage class that is created for persistent volumes that are provisioned from discovered devices. The Local Storage Operator automatically creates the storage class if it does not exist. Be sure to use a storage class that uniquely identifies this set of local volumes. 2 When using the local volume set feature, the Local Storage Operator does not support the use of logical volume management (LVM) devices. Create the local volume set object: USD oc apply -f local-volume-set.yaml Verify that the local persistent volumes were dynamically provisioned based on the storage class: USD oc get pv Example output NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m Note Results are deleted after they are removed from the node. Symlinks must be manually removed. 4.12.2.7. Using tolerations with Local Storage Operator pods Taints can be applied to nodes to prevent them from running general workloads. To allow the Local Storage Operator to use tainted nodes, you must add tolerations to the Pod or DaemonSet definition. This allows the created resources to run on these tainted nodes. You apply tolerations to the Local Storage Operator pod through the LocalVolume resource and apply taints to a node through the node specification. A taint on a node instructs the node to repel all pods that do not tolerate the taint. Using a specific taint that is not on other pods ensures that the Local Storage Operator pod can also run on that node. Important Taints and tolerations consist of a key, value, and effect. As an argument, it is expressed as key=value:effect . An operator allows you to leave one of these parameters empty. Prerequisites The Local Storage Operator is installed. Local disks are attached to OpenShift Container Platform nodes with a taint. Tainted nodes are expected to provision local storage. Procedure To configure local volumes for scheduling on tainted nodes: Modify the YAML file that defines the Pod and add the LocalVolume spec, as shown in the following example: apiVersion: "local.storage.openshift.io/v1" kind: "LocalVolume" metadata: name: "local-disks" namespace: "openshift-local-storage" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: "localstorage" 3 storageClassDevices: - storageClassName: "local-sc" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg 1 Specify the key that you added to the node. 2 Specify the Equal operator to require the key / value parameters to match. If operator is Exists , the system checks that the key exists and ignores the value. If operator is Equal , then the key and value must match. 3 Specify the value local of the tainted node. 
4 The volume mode, either Filesystem or Block , defining the type of the local volumes. 5 The path containing a list of local storage devices to choose from. Optional: To create local persistent volumes on only tainted nodes, modify the YAML file and add the LocalVolume spec, as shown in the following example: spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists The defined tolerations will be passed to the resulting daemon sets, allowing the diskmaker and provisioner pods to be created for nodes that contain the specified taints. 4.12.2.8. Local Storage Operator Metrics OpenShift Container Platform provides the following metrics for the Local Storage Operator: lso_discovery_disk_count : total number of discovered devices on each node lso_lvset_provisioned_PV_count : total number of PVs created by LocalVolumeSet objects lso_lvset_unmatched_disk_count : total number of disks that Local Storage Operator did not select for provisioning because of mismatching criteria lso_lvset_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolumeSet object criteria lso_lv_orphaned_symlink_count : number of devices with PVs that no longer match LocalVolume object criteria lso_lv_provisioned_PV_count : total number of provisioned PVs for LocalVolume To use these metrics, be sure to: Enable support for monitoring when installing the Local Storage Operator. When upgrading to OpenShift Container Platform 4.9 or later, enable metric support manually by adding the operator-metering=true label to the namespace. For more information about metrics, see Accessing metrics as an administrator . 4.12.2.9. Deleting the Local Storage Operator resources 4.12.2.9.1. Removing a local volume or local volume set Occasionally, local volumes and local volume sets must be deleted. While removing the entry in the resource and deleting the persistent volume is typically enough, if you want to reuse the same device path or have it managed by a different storage class, then additional steps are needed. Note The following procedure outlines an example for removing a local volume. The same procedure can also be used to remove symlinks for a local volume set custom resource. Prerequisites The persistent volume must be in a Released or Available state. Warning Deleting a persistent volume that is still in use can result in data loss or corruption. Procedure Edit the previously created local volume to remove any unwanted disks. Edit the cluster resource: USD oc edit localvolume <name> -n openshift-local-storage Navigate to the lines under devicePaths , and delete any representing unwanted disks. Delete any persistent volumes created. USD oc delete pv <pv-name> Delete directory and included symlinks on the node. Warning The following step involves accessing a node as the root user. Modifying the state of the node beyond the steps in this procedure could result in cluster instability. USD oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1 1 The name of the storage class used to create the local volumes. 4.12.2.9.2. Uninstalling the Local Storage Operator To uninstall the Local Storage Operator, you must remove the Operator and all created resources in the openshift-local-storage project. Warning Uninstalling the Local Storage Operator while local storage PVs are still in use is not recommended. 
While the PVs will remain after the Operator's removal, there might be indeterminate behavior if the Operator is uninstalled and reinstalled without removing the PVs and local storage resources. Prerequisites Access to the OpenShift Container Platform web console. Procedure Delete any local volume resources installed in the project, such as localvolume , localvolumeset , and localvolumediscovery : USD oc delete localvolume --all --all-namespaces USD oc delete localvolumeset --all --all-namespaces USD oc delete localvolumediscovery --all --all-namespaces Uninstall the Local Storage Operator from the web console. Log in to the OpenShift Container Platform web console. Navigate to Operators Installed Operators . Type Local Storage into the filter box to locate the Local Storage Operator. Click the Options menu at the end of the Local Storage Operator. Click Uninstall Operator . Click Remove in the window that appears. The PVs created by the Local Storage Operator will remain in the cluster until deleted. After these volumes are no longer in use, delete them by running the following command: USD oc delete pv <pv-name> Delete the openshift-local-storage project: USD oc delete project openshift-local-storage 4.12.3. Persistent storage using hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. Most pods will not need a hostPath volume, but it does offer a quick option for testing should an application require it. Important The cluster administrator must configure pods to run as privileged. This grants access to pods in the same node. 4.12.3.1. Overview OpenShift Container Platform supports hostPath mounting for development and testing on a single-node cluster. In a production cluster, you would not use hostPath. Instead, a cluster administrator would provision a network resource, such as a GCE Persistent Disk volume, an NFS share, or an Amazon EBS volume. Network resources support the use of storage classes to set up dynamic provisioning. A hostPath volume must be provisioned statically. Important Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged. It is safe to mount the host by using /host . The following example shows the / directory from the host being mounted into the container at /host . apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi9/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: '' 4.12.3.2. Statically provisioning hostPath volumes A pod that uses a hostPath volume must be referenced by manual (static) provisioning. Procedure Define the persistent volume (PV) by creating a pv.yaml file with the PersistentVolume object definition: apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: "/mnt/data" 4 1 The name of the volume. This name is how the volume is identified by persistent volume (PV) claims or pods. 2 Used to bind persistent volume claim (PVC) requests to the PV. 3 The volume can be mounted as read-write by a single node. 4 The configuration file specifies that the volume is at /mnt/data on the cluster's node. 
To avoid corrupting your host system, do not mount to the container root, / , or any path that is the same in the host and the container. You can safely mount the host by using /host Create the PV from the file: USD oc create -f pv.yaml Define the PVC by creating a pvc.yaml file with the PersistentVolumeClaim object definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual Create the PVC from the file: USD oc create -f pvc.yaml 4.12.3.3. Mounting the hostPath share in a privileged pod After the persistent volume claim has been created, it can be used inside by an application. The following example demonstrates mounting this share inside of a pod. Prerequisites A persistent volume claim exists that is mapped to the underlying hostPath share. Procedure Create a privileged pod that mounts the existing persistent volume claim: apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: ... securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged ... securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4 1 The name of the pod. 2 The pod must run as privileged to access the node's storage. 3 The path to mount the host path share inside the privileged pod. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 4 The name of the PersistentVolumeClaim object that has been previously created. 4.12.4. Persistent storage using Logical Volume Manager Storage Logical Volume Manager (LVM) Storage uses Logical Volume Manager (LVM2) through the TopoLVM Container Storage Interface (CSI) driver to dynamically provision local storage on a cluster with limited resources. You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage. 4.12.4.1. Logical Volume Manager Storage installation You can install Logical Volume Manager (LVM) Storage on a single-node OpenShift cluster and configure it to dynamically provision storage for your workloads. You can deploy LVM Storage on single-node OpenShift clusters by using the OpenShift Container Platform CLI ( oc ), OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM). 4.12.4.1.1. Prerequisites to install LVM Storage The prerequisites to install LVM Storage are as follows: Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM. Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them. Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention. Note You cannot wipe the disks that are in use. If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. 
For more information, see "Installing LVM Storage by using RHACM". Additional resources Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online Installing LVM Storage by using RHACM 4.12.4.1.2. Installing LVM Storage by using the CLI As a cluster administrator, you can install Logical Volume Manager (LVM) Storage by using the OpenShift CLI ( oc ). Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to OpenShift Container Platform as a user with cluster-admin and Operator installation permissions. Procedure Create a YAML file and add the configuration for creating a namespace. Example YAML configuration for creating a namespace apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage Create the namespace by running the following command: USD oc create -f <file_name> Create an OperatorGroup custom resource (CR) YAML file. Example OperatorGroup CR apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage Create the OperatorGroup CR by running the following command: USD oc create -f <file_name> Create a Subscription CR YAML file. Example Subscription CR apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace Create the Subscription CR by running the following command: USD oc create -f <file_name> Verification To verify that LVM Storage is installed, run the following command: USD oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase 4.13.0-202301261535 Succeeded 4.12.4.1.3. Installing LVM Storage by using the web console You can install Logical Volume Manager (LVM) Storage by using the OpenShift Container Platform web console. Prerequisites You have access to the single-node OpenShift cluster. You have access to OpenShift Container Platform with cluster-admin and Operator installation permissions. Procedure Log in to the OpenShift Container Platform web console. Click Operators OperatorHub . Click LVM Storage on the OperatorHub page. Set the following options on the Operator Installation page: Update Channel as stable-4.14 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If the openshift-storage namespace does not exist, it is created during the operator installation. Update approval as Automatic or Manual . Note If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically updates the running instance of LVM Storage without any intervention. If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must manually approve the update request to update LVM Storage to a newer version. Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox. Click Install . Verification steps Verify that LVM Storage shows a green tick, indicating successful installation. 4.12.4.1.4. 
Installing LVM Storage in a disconnected environment You can install Logical Volume Manager (LVM) Storage on OpenShift Container Platform 4.14 in a disconnected environment. All sections referenced in this procedure are linked in the "Additional resources" section. Prerequisites You read the "About disconnected installation mirroring" section. You have access to the OpenShift Container Platform image repository. You created a mirror registry. Procedure Follow the steps in the "Creating the image set configuration" procedure. To create an image set configuration for LVM Storage, you can use the following example ImageSetConfiguration object configuration: Example ImageSetConfiguration file for LVM Storage kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.14 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 6 packages: - name: lvms-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {} 1 Set the maximum size (in gibibytes) of each file within the image set. 2 Specify the location in which you want to save the image set. This location can be a registry or a local directory. 3 Specify the storage URL for the image stream when using a registry. For more information, see "Why use imagestreams". 4 Specify the channel from which you want to retrieve the OpenShift Container Platform images. 5 Set this field to true to generate the OpenShift Update Service (OSUS) graph image. For more information, see "About the OpenShift Update Service". 6 Specify the Operator catalog from which you want to retrieve the OpenShift Container Platform images. 7 Specify the Operator packages to include in the image set. If this field is empty, all packages in the catalog are retrieved. 8 Specify the channels of the Operator packages to include in the image set. You must include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: USD oc mirror list operators --catalog=<catalog_name> --package=<package_name> . 9 Specify any additional images to include in the image set. Follow the procedure in the "Mirroring an image set to a mirror registry" section. Follow the procedure in the "Configuring image registry repository mirroring" section. Additional resources About disconnected installation mirroring Creating a mirror registry with mirror registry for Red Hat OpenShift Mirroring the OpenShift Container Platform image repository Creating the image set configuration Mirroring an image set to a mirror registry Configuring image registry repository mirroring Image set configuration parameters Why use imagestreams About the OpenShift Update Service 4.12.4.1.5. Installing LVM Storage by using RHACM To install Logical Volume Manager (LVM) Storage on the clusters by using Red Hat Advanced Cluster Management (RHACM), you must create a Policy custom resource (CR). You can also configure the criteria to select the clusters on which you want to install LVM Storage. Note The Policy CR that is created to install LVM Storage is also applied to the clusters that are imported or created after creating the Policy CR. Prerequisites You have access to the RHACM cluster using an account with cluster-admin and Operator installation permissions. 
You have dedicated disks that LVM Storage can use on each cluster. The cluster must be managed by RHACM. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Create a namespace by running the following command: USD oc create ns <namespace> Create a Policy CR YAML file. Example Policy CR to install and configure LVM Storage apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-lvms spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: 1 matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: install-lvms spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-lvms spec: object-templates: - complianceType: musthave objectDefinition: 2 apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage - complianceType: musthave objectDefinition: 3 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: musthave objectDefinition: 4 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low 1 Set the key field and values field in PlacementRule.spec.clusterSelector to match the labels that are configured in the clusters on which you want to install LVM Storage. 2 The namespace configuration. 3 The OperatorGroup CR configuration. 4 The Subscription CR configuration. Create the Policy CR by running the following command: USD oc create -f <file_name> -n <namespace> Upon creating the Policy CR, the following custom resources are created on the clusters that match the selection criteria configured in the PlacementRule CR: Namespace OperatorGroup Subscription Additional resources Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online 4.12.4.2. Limitations to configure the size of the devices used in LVM Storage The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows: The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor. The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE). You can define the size of PE and LE during the physical and logical device creation. 
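For example, the following is a minimal sketch of setting the PE size when a volume group is created manually with the LVM2 command-line tools. The device path and the volume group and logical volume names here are hypothetical, and LVM Storage normally creates and manages volume groups for you, so this only illustrates the underlying LVM2 behavior:
USD pvcreate /dev/disk/by-path/pci-0000:87:00.0-nvme-1
USD vgcreate --physicalextentsize 16M vg1 /dev/disk/by-path/pci-0000:87:00.0-nvme-1
USD lvcreate --extents 100 --name lv1 vg1
In this sketch, the volume group is created with a 16 MiB PE size, and the logical volume is allocated as 100 LEs of that size, giving a 1600 MiB logical volume.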
The default PE and LE size is 4 MB. If the size of the PE is increased, the maximum size of the LVM is determined by the kernel limits and your disk space. Table 4.5. Size limits for different architectures using the default PE and LE size Architecture RHEL 6 RHEL 7 RHEL 8 RHEL 9 32-bit 16 TB - - - 64-bit 8 EB [1] 100 TB [2] 8 EB [1] 500 TB [2] 8 EB 8 EB Theoretical size. Tested size. 4.12.4.3. About the LVMCluster custom resource You can configure the LVMCluster custom resource (CR) to perform the following actions: Create LVM volume groups that you can use to provision persistent volume claims (PVCs). Configure a list of devices that you want to add to the LVM volume groups. Configure the requirements to select the nodes on which you want to create an LVM volume group, and the thin pool configuration for the volume group. After you have installed LVM Storage, you must create an LVMCluster custom resource (CR). Example LVMCluster CR YAML file apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: tolerations: - effect: NoSchedule key: xyz operator: Equal value: "true" storage: deviceClasses: - name: vg1 fstype: ext4 1 default: true nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: mykey operator: In values: - ssd deviceSelector: 3 paths: - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 4 overprovisionRatio: 10 1 2 3 4 Optional field Explanation of fields in the LVMCluster CR The LVMCluster CR fields are described in the following table: Table 4.6. LVMCluster CR fields Field Type Description spec.storage.deviceClasses array Contains the configuration to assign the local storage devices to the LVM volume groups. LVM Storage creates a storage class and volume snapshot class for each device class that you create. If you add or remove a device class, the update reflects in the cluster only after deleting and recreating the topolvm-node pod. deviceClasses.name string Specify a name for the LVM volume group (VG). deviceClasses.fstype string Set this field to ext4 or xfs . By default, this field is set to xfs . deviceClasses.default boolean Set this field to true to indicate that a device class is the default. Otherwise, you can set it to false . You can only configure a single default device class. deviceClasses.nodeSelector object Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered. On the control-plane node, LVM Storage detects and uses the additional worker nodes when the new nodes become active in the cluster. nodeSelector.nodeSelectorTerms array Configure the requirements that are used to select the node. deviceClasses.deviceSelector object Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. For more information, see "About adding devices to a volume group". deviceSelector.paths array Specify the device paths. If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state. deviceSelector.optionalPaths array Specify the optional device paths. If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error. 
deviceClasses.thinPoolConfig object Contains the configuration to create a thin pool in the LVM volume group. thinPoolConfig.name string Specify a name for the thin pool. thinPoolConfig.sizePercent integer Specify the percentage of space in the LVM volume group for creating the thin pool. By default, this field is set to 90. The minimum value that you can set is 10, and the maximum value is 90. thinPoolConfig.overprovisionRatio integer Specify a factor by which you can provision additional storage based on the available storage in the thin pool. For example, if this field is set to 10, you can provision up to 10 times the amount of available storage in the thin pool. To disable over-provisioning, set this field to 1. Additional resources About adding devices to a volume group Adding worker nodes to single-node OpenShift clusters 4.12.4.3.1. About adding devices to a volume group The deviceSelector field in the LVMCluster custom resource (CR) contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the deviceSelector.paths field, the deviceSelector.optionalPaths field, or both. If you do not specify the device paths in both the deviceSelector.paths field and the deviceSelector.optionalPaths field, LVM Storage adds the unused devices to the LVM volume group. Warning It is recommended to avoid referencing disks using symbolic naming, such as /dev/sdX , as these names may change across reboots within RHCOS. Instead, you must use stable naming schemes, such as /dev/disk/by-path/ or /dev/disk/by-id/ , to ensure consistent disk identification. With this change, you might need to adjust existing automation workflows in the cases where monitoring collects information about the install device for each node. For more information, see the RHEL documentation . If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available. LVM Storage adds the devices to the LVM volume group only if the device path exists. Important After a device is added to the LVM volume group, it cannot be removed. 4.12.4.4. Ways to create an LVMCluster custom resource You can create an LVMCluster custom resource (CR) by using the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also create an LVMCluster CR by using RHACM. Upon creating the LVMCluster CR, LVM Storage creates the following system-managed CRs: A storageClass and volumeSnapshotClass for each device class. Note LVM Storage configures the name of the storage class and volume snapshot class in the format lvms-<device_class_name> , where, <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1 , the name of the storage class and volume snapshot class is lvms-vg1 . LVMVolumeGroup : This CR is a specific type of persistent volume (PV) that is backed by an LVM volume group. It tracks the individual volume groups across multiple nodes. LVMVolumeGroupNodeStatus : This CR tracks the status of the volume groups on a node. 4.12.4.4.1. Creating an LVMCluster CR by using the CLI You can create an LVMCluster custom resource (CR) on a worker node using the OpenShift CLI ( oc ). Important You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster. 
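Before creating a new LVMCluster CR, you might want to confirm that one does not already exist. A simple check, which is not part of the documented procedure, is to list the LVMCluster CRs in all namespaces: USD oc get lvmclusters.lvm.topolvm.io -A If this command returns an existing LVMCluster CR, edit or delete that CR instead of creating a second one.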
Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to OpenShift Container Platform as a user with cluster-admin privileges. You have installed LVM Storage. You have installed a worker node in the cluster. You read the "About the LVMCluster custom resource" section. Procedure Create an LVMCluster custom resource (CR) YAML file: Example LVMCluster CR YAML file apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: # ... storage: deviceClasses: 1 # ... nodeSelector: 2 # ... deviceSelector: 3 # ... thinPoolConfig: 4 # ... 1 Contains the configuration to assign the local storage devices to the LVM volume groups. 2 Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered. 3 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. 4 Contains the configuration to create a thin pool in the LVM volume group. Create the LVMCluster CR by running the following command: USD oc create -f <file_name> Example output lvmcluster/lvmcluster created Verification Check that the LVMCluster CR is in the Ready state: USD oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status.state}' -n <namespace> Example output {"deviceClassStatuses": 1 [ { "name": "vg1", "nodeStatus": [ 2 { "devices": [ 3 "/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1" ], "node": "kube-node", 4 "status": "Ready" 5 } ] } ] "state":"Ready"} 6 1 The status of the device class. 2 The status of the LVM volume group on each node. 3 The list of devices used to create the LVM volume group. 4 The node on which the device class is created. 5 The status of the LVM volume group on the node. 6 The status of the LVMCluster CR. Note If the LVMCluster CR is in the Failed state, you can view the reason for failure in the status field. Example status field with the reason for failure: status: deviceClassStatuses: - name: vg1 nodeStatus: - node: my-node-1.example.com reason: no available devices found for volume group status: Failed state: Failed Optional: To view the storage classes created by LVM Storage for each device class, run the following command: USD oc get storageclass Example output NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m Optional: To view the volume snapshot classes created by LVM Storage for each device class, run the following command: USD oc get volumesnapshotclass Example output NAME DRIVER DELETIONPOLICY AGE lvms-vg1 topolvm.io Delete 24h Additional resources About the LVMCluster custom resource 4.12.4.4.2. Creating an LVMCluster CR by using the web console You can create an LVMCluster CR on a worker node using the OpenShift Container Platform web console. Important You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster. Prerequisites You have access to the OpenShift Container Platform cluster with cluster-admin privileges. You have installed LVM Storage. You have installed a worker node in the cluster. You read the "About the LVMCluster custom resource" section. Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . In the openshift-storage namespace, click LVM Storage . Click Create LVMCluster and select either Form view or YAML view . 
Configure the required LVMCluster CR parameters. Click Create . Optional: If you want to edit the LVMCluster CR, perform the following actions: Click the LVMCluster tab. From the Actions menu, select Edit LVMCluster . Click YAML and edit the required LVMCluster CR parameters. Click Save . Verification On the LVMCluster page, check that the LVMCluster CR is in the Ready state. Optional: To view the available storage classes created by LVM Storage for each device class, click Storage StorageClasses . Optional: To view the available volume snapshot classes created by LVM Storage for each device class, click Storage VolumeSnapshotClasses . Additional resources About the LVMCluster custom resource 4.12.4.4.3. Creating an LVMCluster CR by using RHACM After you have installed Logical Volume Manager (LVM) Storage by using RHACM, you must create an LVMCluster custom resource (CR). Prerequisites You have installed LVM Storage by using RHACM. You have access to the RHACM cluster using an account with cluster-admin permissions. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Create a ConfigurationPolicy CR YAML file with the configuration to create an LVMCluster CR. Example ConfigurationPolicy CR YAML file to create an LVMCluster CR apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms namespace: openshift-storage spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 # ... deviceSelector: 2 # ... thinPoolConfig: 3 # ... nodeSelector: 4 # ... remediationAction: enforce severity: low 1 Contains the configuration to assign the local storage devices to the LVM volume groups. 2 Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. 3 Contains the configuration to create a thin pool in the LVM volume group. 4 Contains the configuration to choose the nodes on which you want to create the LVM volume groups. If this field is empty, then all nodes without no-schedule taints are considered. Create the ConfigurationPolicy CR by running the following command: USD oc create -f <file_name> -n <cluster_namespace> 1 1 Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed. Additional resources About the LVMCluster custom resource 4.12.4.5. Ways to delete an LVMCluster custom resource You can delete an LVMCluster custom resource (CR) by using the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also delete an LVMCluster CR by using RHACM. Upon deleting the LVMCluster CR, LVM Storage deletes the following CRs: storageClass volumeSnapshotClass LVMVolumeGroup LVMVolumeGroupNodeStatus 4.12.4.5.1. Deleting an LVMCluster CR by using the CLI You can delete the LVMCluster custom resource (CR) using the OpenShift CLI ( oc ). Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. Procedure Log in to the OpenShift CLI ( oc ).
Delete the LVMCluster CR by running the following command: USD oc delete lvmcluster <lvmclustername> -n openshift-storage Verification To verify that the LVMCluster CR has been deleted, run the following command: USD oc get lvmcluster -n <namespace> Example output No resources found in openshift-storage namespace. 4.12.4.5.2. Deleting an LVMCluster CR by using the web console You can delete the LVMCluster custom resource (CR) using the OpenShift Container Platform web console. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators to view all the installed Operators. Click LVM Storage in the openshift-storage namespace. Click the LVMCluster tab. From the Actions menu, select Delete LVMCluster . Click Delete . Verification On the LVMCluster page, check that the LVMCluster CR has been deleted. 4.12.4.5.3. Deleting an LVMCluster CR by using RHACM If you have installed Logical Volume Manager (LVM) Storage by using Red Hat Advanced Cluster Management (RHACM), you can delete an LVMCluster custom resource (CR) by using RHACM. Prerequisites You have access to the RHACM cluster as a user with cluster-admin permissions. You have deleted the following resources provisioned by LVM Storage: Persistent volume claims (PVCs) Volume snapshots Volume clones You have also deleted any applications that are using these resources. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Delete the ConfigurationPolicy CR for the LVMCluster CR by running the following command: USD oc delete -f <file_name> -n <cluster_namespace> 1 1 Namespace of the OpenShift Container Platform cluster on which LVM Storage is installed. Create a Policy CR YAML file to delete the LVMCluster CR.
Example Policy CR to delete the LVMCluster CR apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-delete annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal spec: remediationAction: enforce 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-delete placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-delete subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-delete --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-delete spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: 3 matchExpressions: - key: mykey operator: In values: - myvalue 1 The spec.remediationAction in policy-template is overridden by the preceding parameter value for spec.remediationAction . 2 This namespace field must have the openshift-storage value. 3 Configure the requirements to select the clusters. LVM Storage is uninstalled on the clusters that match the selection criteria. Create the Policy CR by running the following command: USD oc create -f <file_name> -n <namespace> Create a Policy CR YAML file to check if the LVMCluster CR has been deleted. Example Policy CR to check if the LVMCluster CR has been deleted apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-inform annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal-inform spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-check placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-check subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-inform --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-check spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue 1 The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction . 2 The namespace field must have the openshift-storage value. 
Create the Policy CR by running the following command: USD oc create -f <file_name> -n <namespace> Verification Check the status of the Policy CRs by running the following command: USD oc get policy -n <namespace> Example output NAME REMEDIATION ACTION COMPLIANCE STATE AGE policy-lvmcluster-delete enforce Compliant 15m policy-lvmcluster-inform inform Compliant 15m Important The Policy CRs must be in Compliant state. 4.12.4.6. Provisioning storage After you have created the LVM volume groups using the LVMCluster custom resource (CR), you can provision the storage by creating persistent volume claims (PVCs). To create a PVC, you must create a PersistentVolumeClaim object. Prerequisites You have created an LVMCluster CR. Procedure Log in to the OpenShift CLI ( oc ). Create a PersistentVolumeClaim object similar to the following: Example PersistentVolumeClaim object apiVersion: v1 kind: PersistentVolumeClaim metadata: name: lvm-block-1 1 namespace: default spec: accessModes: - ReadWriteOnce volumeMode: Block 2 resources: requests: storage: 10Gi 3 storageClassName: lvms-vg1 4 1 Specify a name for the PVC. 2 To create a block PVC, set this field to Block . To create a file PVC, set this field to Filesystem . 3 Specify the storage size. Logical Volume Manager (LVM) Storage provisions PVCs in units of 1 GiB (gibibytes). The requested storage is rounded up to the nearest GiB. The total storage size you can provision is limited by the size of the LVM thin pool and the overprovisioning factor. 4 The value of the storageClassName field must be in the format lvms-<device_class_name> where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1 , you must set the storageClassName field to lvms-vg1 . Note The volumeBindingMode field of the storage class is set to WaitForFirstConsumer . Create the PVC by running the following command: USD oc create -f <file_name> -n <application_namespace> Note The created PVCs remain in Pending state until you deploy the workloads that use them. Verification To verify that the PVC is created, run the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1 Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s 4.12.4.7. Ways to scale up the storage of a single-node OpenShift cluster You can scale up the storage of a single-node OpenShift cluster by adding new devices to the existing node. To add a new device to the existing node on a single-node OpenShift cluster, you must add the path to the new device in the deviceSelector field of the LVMCluster custom resource (CR). Important You can add the deviceSelector field in the LVMCluster CR only while creating the LVMCluster CR. If you have not added the deviceSelector field while creating the LVMCluster CR, you must delete the LVMCluster CR and create a new LVMCluster CR containing the deviceSelector field. If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available. Additional resources Adding worker nodes to single-node OpenShift clusters 4.12.4.7.1. Scaling up the storage of a single-node OpenShift cluster by using the CLI You can scale up the storage capacity of the existing node on a single-node OpenShift cluster by using the OpenShift CLI ( oc ). 
Prerequisites You have additional unused devices on the single-node OpenShift cluster to be used by Logical Volume Manager (LVM) Storage. You have installed the OpenShift CLI ( oc ). You have created an LVMCluster custom resource (CR). Procedure Edit the LVMCluster CR by running the following command: USD oc edit <lvmcluster_file_name> -n <namespace> Add the path to the new device in the deviceSelector field: Example LVMCluster CR apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: # ... deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 # ... 1 Contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths , LVM Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the device path exists. 2 Specify the device paths. If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state. 3 Specify the optional device paths. If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error. Important After a device is added to the LVM volume group, it cannot be removed. Save the LVMCluster CR. Additional resources About the LVMCluster custom resource 4.12.4.7.2. Scaling up the storage of a single-node OpenShift cluster by using the web console You can scale up the storage capacity of the existing node on a single-node OpenShift cluster by using the OpenShift Container Platform web console. Prerequisites You have additional unused devices on the single-node OpenShift cluster to be used by Logical Volume Manager (LVM) Storage. You have created an LVMCluster custom resource (CR). Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click LVM Storage in the openshift-storage namespace. Click the LVMCluster tab to view the LVMCluster CR created on the cluster. From the Actions menu, select Edit LVMCluster . Click the YAML tab. Edit the LVMCluster CR to add the new device path in the deviceSelector field: Example LVMCluster CR apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: # ... deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 # ... 1 Contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths , LVM Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the device path exists. 2 Specify the device paths. If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state. 3 Specify the optional device paths. 
If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error. Important After a device is added to the LVM volume group, it cannot be removed. Click Save . Additional resources About the LVMCluster custom resource 4.12.4.7.3. Scaling up the storage of single-node OpenShift clusters by using RHACM You can scale up the storage capacity of the existing node on single-node OpenShift clusters by using RHACM. Prerequisites You have access to the RHACM cluster using an account with cluster-admin privileges. You have created an LVMCluster custom resource (CR) by using RHACM. You have additional unused devices on each single-node OpenShift cluster to be used by Logical Volume Manager (LVM) Storage. Procedure Log in to the RHACM CLI using your OpenShift Container Platform credentials. Edit the LVMCluster CR that you created using RHACM by running the following command: USD oc edit -f <file_name> -ns <namespace> 1 1 Replace <file_name> with the name of the LVMCluster CR. In the LVMCluster CR, add the path to the new device in the deviceSelector field. Example LVMCluster CR: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: # ... deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 # ... 1 Contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths , LVM Storage adds the unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the device path exists. 2 Specify the device paths. If the device path specified in this field does not exist, the LVMCluster CR moves to the Failed state. 3 Specify the optional device paths. If the device path specified in this field does not exist, LVM Storage ignores the device without causing an error. Important After a device is added to the LVM volume group, it cannot be removed. Save the LVMCluster CR. Additional resources Red Hat Advanced Cluster Management for Kubernetes: Installing while connected online About the LVMCluster custom resource 4.12.4.8. Expanding a persistent volume claim After scaling up the storage of a cluster, you can expand the existing persistent volume claims (PVCs). To expand a PVC, you must update the requests.storage field in the PVC. Prerequisites Dynamic provisioning is used. The StorageClass object associated with the PVC has the allowVolumeExpansion field set to true . Procedure Log in to the OpenShift CLI ( oc ). Update the value of the spec.resources.requests.storage field to a value that is greater than the current value by running the following command: USD oc patch pvc <pvc_name> -n <application_namespace> -p \ 1 '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}' --type=merge 2 1 Replace <pvc_name> with the name of the PVC that you want to expand. 2 Replace <desired_size> with the new size to expand the PVC. 
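For example, assuming the lvm-block-1 PVC from the earlier provisioning example in the default namespace, expanding it to 20 GiB might look like the following: USD oc patch pvc lvm-block-1 -n default -p '{ "spec": { "resources": { "requests": { "storage": "20Gi" }}}}' --type=merge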
Verification To verify that resizing is completed, run the following command: USD oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage} Logical Volume Manager (LVM) Storage adds the Resizing condition to the PVC during expansion. It deletes the Resizing condition after the PVC expansion. Additional resources Ways to scale up the storage of clusters Enabling volume expansion support 4.12.4.9. Deleting a persistent volume claim You can delete a persistent volume claim (PVC) by using the OpenShift CLI ( oc ). Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. Procedure Log in to the OpenShift CLI ( oc ). Delete the PVC by running the following command: USD oc delete pvc <pvc_name> -n <namespace> Verification To verify that the PVC is deleted, run the following command: USD oc get pvc -n <namespace> The deleted PVC must not be present in the output of this command. 4.12.4.10. About volume snapshots You can create snapshots of persistent volume claims (PVCs) that are provisioned by LVM Storage. You can perform the following actions using the volume snapshots: Back up your application data. Important Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you must move the snapshots to a secure location. You can use OpenShift API for Data Protection (OADP) backup and restore solutions. For information on OADP, see "OADP features". Revert to a state at which the volume snapshot was taken. Note You can also create volume snapshots of volume clones. Additional resources OADP features 4.12.4.10.1. Creating volume snapshots You can create volume snapshots based on the available capacity of the thin pool and the over-provisioning limits. To create a volume snapshot, you must create a VolumeSnapshot object. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You ensured that the persistent volume claim (PVC) is in Bound state. This is required for a consistent snapshot. You stopped all the I/O to the PVC. Procedure Log in to the OpenShift CLI ( oc ). Create a VolumeSnapshot object: Example VolumeSnapshot object apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: lvm-block-1-snap 1 spec: source: persistentVolumeClaimName: lvm-block-1 2 volumeSnapshotClassName: lvms-vg1 3 1 Specify a name for the volume snapshot. 2 Specify the name of the source PVC. LVM Storage creates a snapshot of this PVC. 3 Set this field to the name of a volume snapshot class. Note To get the list of available volume snapshot classes, run the following command: USD oc get volumesnapshotclass Create the volume snapshot in the namespace where you created the source PVC by running the following command: USD oc create -f <file_name> -n <namespace> LVM Storage creates a read-only copy of the PVC as a volume snapshot. Verification To verify that the volume snapshot is created, run the following command: USD oc get volumesnapshot -n <namespace> Example output NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE lvm-block-1-snap true lvms-test-1 1Gi lvms-vg1 snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91 19s 19s The value of the READYTOUSE field for the volume snapshot that you created must be true . 4.12.4.10.2. 
Restoring volume snapshots To restore a volume snapshot, you must create a persistent volume claim (PVC) with the dataSource.name field set to the name of the volume snapshot. The restored PVC is independent of the volume snapshot and the source PVC. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have created a volume snapshot. Procedure Log in to the OpenShift CLI ( oc ). Create a PersistentVolumeClaim object with the configuration to restore the volume snapshot: Example PersistentVolumeClaim object to restore a volume snapshot kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-block-1-restore spec: accessModes: - ReadWriteOnce volumeMode: Block resources: requests: storage: 2Gi 1 storageClassName: lvms-vg1 2 dataSource: name: lvm-block-1-snap 3 kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io 1 Specify the storage size of the PVC. The storage size of the requested PVC must be greater than or equal to the storage size of the volume snapshot that you want to restore. If a larger PVC is required, you can also resize the PVC after restoring the volume snapshot. 2 Set this field to the value of the storageClassName field in the source PVC of the volume snapshot that you want to restore. 3 Set this field to the name of the volume snapshot that you want to restore. Create the PVC in the namespace where you created the volume snapshot by running the following command: USD oc create -f <file_name> -n <namespace> Verification To verify that the volume snapshot is restored, create a workload using the restored PVC and then run the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-restore Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s 4.12.4.10.3. Deleting volume snapshots You can delete the volume snapshots of the persistent volume claims (PVCs). Important When you delete a persistent volume claim (PVC), LVM Storage deletes only the PVC, but not the snapshots of the PVC. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. You have ensured that the volume snapshot that you want to delete is not in use. Procedure Log in to the OpenShift CLI ( oc ). Delete the volume snapshot by running the following command: USD oc delete volumesnapshot <volume_snapshot_name> -n <namespace> Verification To verify that the volume snapshot is deleted, run the following command: USD oc get volumesnapshot -n <namespace> The deleted volume snapshot must not be present in the output of this command. 4.12.4.11. About volume clones A volume clone is a duplicate of an existing persistent volume claim (PVC). You can create a volume clone to make a point-in-time copy of the data. 4.12.4.11.1. Creating volume clones To create a clone of a persistent volume claim (PVC), you must create a PersistentVolumeClaim object in the namespace where you created the source PVC. Important The cloned PVC has write access. Prerequisites You ensured that the source PVC is in Bound state. This is required for a consistent clone. Procedure Log in to the OpenShift CLI ( oc ).
Create a PersistentVolumeClaim object: Example PersistentVolumeClaim object to create a volume clone kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-pvc-clone spec: accessModes: - ReadWriteOnce storageClassName: lvms-vg1 1 volumeMode: Filesystem 2 dataSource: kind: PersistentVolumeClaim name: lvm-pvc 3 resources: requests: storage: 1Gi 4 1 Set this field to the value of the storageClassName field in the source PVC. 2 Set this field to the volumeMode field in the source PVC. 3 Specify the name of the source PVC. 4 Specify the storage size for the cloned PVC. The storage size of the cloned PVC must be greater than or equal to the storage size of the source PVC. Create the PVC in the namespace where you created the source PVC by running the following command: USD oc create -f <file_name> -n <namespace> Verification To verify that the volume clone is created, create a workload using the cloned PVC and then run the following command: USD oc get pvc -n <namespace> Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-clone Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s 4.12.4.11.2. Deleting volume clones You can delete volume clones. Important When you delete a persistent volume claim (PVC), LVM Storage deletes only the source persistent volume claim (PVC) but not the clones of the PVC. Prerequisites You have access to OpenShift Container Platform as a user with cluster-admin permissions. Procedure Log in to the OpenShift CLI ( oc ). Delete the cloned PVC by running the following command: # oc delete pvc <clone_pvc_name> -n <namespace> Verification To verify that the volume clone is deleted, run the following command: USD oc get pvc -n <namespace> The deleted volume clone must not be present in the output of this command. 4.12.4.12. Updating LVM Storage on a single-node OpenShift cluster You can update LVM Storage to ensure compatibility with the single-node OpenShift version. Prerequisites You have updated your single-node OpenShift cluster. You have installed a version of LVM Storage. You have installed the OpenShift CLI ( oc ). You have access to the cluster using an account with cluster-admin permissions. Procedure Log in to the OpenShift CLI ( oc ). Update the Subscription custom resource (CR) that you created while installing LVM Storage by running the following command: USD oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{"spec":{"channel":"<update_channel>"}}' 1 1 Replace <update_channel> with the version of LVM Storage that you want to install. For example, stable-4.14 . View the update events to check that the installation is complete by running the following command: USD oc get events -n openshift-storage Example output ... 
8m13s Normal RequirementsUnknown clusterserviceversion/lvms-operator.v4.14 requirements not yet checked 8m11s Normal RequirementsNotMet clusterserviceversion/lvms-operator.v4.14 one or more requirements couldn't be found 7m50s Normal AllRequirementsMet clusterserviceversion/lvms-operator.v4.14 all requirements found, attempting install 7m50s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.14 waiting for install components to report healthy 7m49s Normal InstallWaiting clusterserviceversion/lvms-operator.v4.14 installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" waiting for 1 outdated replica(s) to be terminated 7m39s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.14 install strategy completed with no errors ... Verification Verify the LVM Storage version by running the following command: USD oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}' Example output lvms-operator.v4.14 4.12.4.13. Monitoring LVM Storage To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage: openshift.io/cluster-monitoring=true Important For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics . 4.12.4.13.1. Metrics You can monitor LVM Storage by viewing the metrics. The following table describes the topolvm metrics: Table 4.7. topolvm metrics Alert Description topolvm_thinpool_data_percent Indicates the percentage of data space used in the LVM thinpool. topolvm_thinpool_metadata_percent Indicates the percentage of metadata space used in the LVM thinpool. topolvm_thinpool_size_bytes Indicates the size of the LVM thin pool in bytes. topolvm_volumegroup_available_bytes Indicates the available space in the LVM volume group in bytes. topolvm_volumegroup_size_bytes Indicates the size of the LVM volume group in bytes. topolvm_thinpool_overprovisioned_available Indicates the available over-provisioned size of the LVM thin pool in bytes. Note Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool. 4.12.4.13.2. Alerts When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss. LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value: Table 4.8. LVM Storage alerts Alert Description VolumeGroupUsageAtThresholdNearFull This alert is triggered when both the volume group and thin pool usage exceeds 75% on nodes. Data deletion or volume group expansion is required. VolumeGroupUsageAtThresholdCritical This alert is triggered when both the volume group and thin pool usage exceeds 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required. ThinPoolDataUsageAtThresholdNearFull This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. ThinPoolDataUsageAtThresholdCritical This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. ThinPoolMetaDataUsageAtThresholdNearFull This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. 
ThinPoolMetaDataUsageAtThresholdCritical This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. 4.12.4.14. Uninstalling LVM Storage by using the CLI You can uninstall LVM Storage by using the OpenShift CLI ( oc ). Prerequisites You have logged in to oc as a user with cluster-admin permissions. You deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. You deleted the LVMCluster custom resource (CR). Procedure Get the currentCSV value for the LVM Storage Operator by running the following command: USD oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV Example output currentCSV: lvms-operator.v4.15.3 Delete the subscription by running the following command: USD oc delete subscription.operators.coreos.com lvms-operator -n <namespace> Example output subscription.operators.coreos.com "lvms-operator" deleted Delete the CSV for the LVM Storage Operator in the target namespace by running the following command: USD oc delete clusterserviceversion <currentCSV> -n <namespace> 1 1 Replace <currentCSV> with the currentCSV value for the LVM Storage Operator. Example output clusterserviceversion.operators.coreos.com "lvms-operator.v4.15.3" deleted Verification To verify that the LVM Storage Operator is uninstalled, run the following command: USD oc get csv -n <namespace> If the LVM Storage Operator was successfully uninstalled, it does not appear in the output of this command. 4.12.4.15. Uninstalling LVM Storage by using the web console You can uninstall Logical Volume Manager (LVM) Storage using the OpenShift Container Platform web console. Prerequisites You have access to the single-node OpenShift cluster as a user with cluster-admin permissions. You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources. You have deleted the LVMCluster custom resource (CR). Procedure Log in to the OpenShift Container Platform web console. Click Operators Installed Operators . Click LVM Storage in the openshift-storage namespace. Click the Details tab. From the Actions menu, click Uninstall Operator . Optional: When prompted, select the Delete all operand instances for this operator checkbox to delete the operand instances for LVM Storage. Click Uninstall . 4.12.4.16. Uninstalling LVM Storage installed using RHACM To uninstall Logical Volume Manager (LVM) Storage that you installed using RHACM, you must delete the RHACM Policy custom resource (CR) that you created for installing and configuring LVM Storage. Prerequisites You have access to the RHACM cluster as a user with cluster-admin permissions. You have deleted the following resources provisioned by LVM Storage: Persistent volume claims (PVCs) Volume snapshots Volume clones You have also deleted any applications that are using these resources. You have deleted the LVMCluster CR that you created using RHACM. Procedure Log in to the OpenShift CLI ( oc ). Delete the RHACM Policy CR that you created for installing and configuring LVM Storage by running the following command: USD oc delete -f <policy> -n <namespace> 1 1 Replace <policy> with the name of the Policy CR YAML file. Create a Policy CR YAML file with the configuration to uninstall LVM Storage. 
Example Policy CR to uninstall LVM Storage apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-uninstall-lvms spec: clusterConditions: - status: "True" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-uninstall-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-uninstall-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: uninstall-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: uninstall-lvms spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: uninstall-lvms spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-remove-lvms-crds spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: logicalvolumes.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmclusters.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroupnodestatuses.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroups.lvm.topolvm.io remediationAction: enforce severity: high Create the Policy CR by running the following command: USD oc create -f <policy> -ns <namespace> 4.12.4.17. Downloading log files and diagnostic information using must-gather When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or the Red Hat Support can review the problem and determine a solution. Procedure Run the must-gather command from the client connected to the LVM Storage cluster: USD oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.14 --dest-dir=<directory_name> Additional resources About the must-gather tool 4.12.4.18. Troubleshooting persistent storage While configuring persistent storage using Logical Volume Manager (LVM) Storage, you can encounter several issues that require troubleshooting. 4.12.4.18.1. 
Investigating a PVC stuck in the Pending state A persistent volume claim (PVC) can get stuck in the Pending state for the following reasons: Insufficient computing resources. Network problems. Mismatched storage class or node selector. No available persistent volumes (PVs). The node with the PV is in the Not Ready state. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Retrieve the list of PVCs by running the following command: USD oc get pvc Example output NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvms-test Pending lvms-vg1 11s Inspect the events associated with a PVC stuck in the Pending state by running the following command: USD oc describe pvc <pvc_name> 1 1 Replace <pvc_name> with the name of the PVC. For example, lvms-vg1 . Example output Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 4s (x2 over 17s) persistentvolume-controller storageclass.storage.k8s.io "lvms-vg1" not found 4.12.4.18.2. Recovering from a missing storage class If you encounter the storage class not found error, check the LVMCluster custom resource (CR) and ensure that all the Logical Volume Manager (LVM) Storage pods are in the Running state. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Verify that the LVMCluster CR is present by running the following command: USD oc get lvmcluster -n openshift-storage Example output NAME AGE my-lvmcluster 65m If the LVMCluster CR is not present, create an LVMCluster CR. For more information, see "Ways to create an LVMCluster custom resource". In the openshift-storage namespace, check that all the LVM Storage pods are in the Running state by running the following command: USD oc get pods -n openshift-storage Example output NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m vg-manager-r6zdv 1/1 Running 0 66m The output of this command must contain a running instance of the following pods: lvms-operator vg-manager topolvm-controller topolvm-node If the topolvm-node pod is stuck in the Init state, it is due to a failure to locate an available disk for LVM Storage to use. To retrieve the necessary information to troubleshoot this issue, review the logs of the vg-manager pod by running the following command: USD oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage Additional resources About the LVMCluster custom resource Ways to create an LVMCluster custom resource 4.12.4.18.3. Recovering from node failure A persistent volume claim (PVC) can be stuck in the Pending state due to a node failure in the cluster. To identify the failed node, you can examine the restart count of the topolvm-node pod. An increased restart count indicates potential problems with the underlying node, which might require further investigation and troubleshooting. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. 
Procedure Examine the restart count of the topolvm-node pod instances by running the following command: USD oc get pods -n openshift-storage Example output NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m topolvm-node-54as8 4/4 Running 0 66m topolvm-node-78fft 4/4 Running 17 (8s ago) 66m vg-manager-r6zdv 1/1 Running 0 66m vg-manager-990ut 1/1 Running 0 66m vg-manager-an118 1/1 Running 0 66m steps If the PVC is stuck in the Pending state even after you have resolved any issues with the node, you must perform a forced clean-up. For more information, see "Performing a forced clean-up". Additional resources Performing a forced clean-up 4.12.4.18.4. Recovering from disk failure If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there can be a problem with the underlying volume or disk. Disk and volume provisioning issues result with a generic error message such as Failed to provision volume with storage class <storage_class_name> . The generic error message is followed by a specific volume failure error message. The following table describes the volume failure error messages: Table 4.9. Volume failure error messages Error message Description Failed to check volume existence Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures. Failed to bind volume Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC. FailedMount or FailedAttachVolume This error indicates problems when trying to mount the volume to a node. If the disk has failed, this error can appear when a pod tries to use the PVC. FailedUnMount This error indicates problems when trying to unmount a volume from a node. If the disk has failed, this error can appear when a pod tries to use the PVC. Volume is already exclusively attached to one node and cannot be attached to another This error can appear with storage solutions that do not support ReadWriteMany access modes. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. Procedure Inspect the events associated with a PVC by running the following command: USD oc describe pvc <pvc_name> 1 1 Replace <pvc_name> with the name of the PVC. Establish a direct connection to the host where the problem is occurring. Resolve the disk issue. steps If the volume failure messages persist or recur even after you have resolved the issue with the disk, you must perform a forced clean-up. For more information, see "Performing a forced clean-up". Additional resources Performing a forced clean-up 4.12.4.18.5. Performing a forced clean-up If the disk or node-related problems persist even after you have completed the troubleshooting procedures, you must perform a forced clean-up. A forced clean-up is used to address persistent issues and ensure the proper functioning of Logical Volume Manager (LVM) Storage. Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the OpenShift CLI ( oc ) as a user with cluster-admin permissions. You have deleted all the persistent volume claims (PVCs) that were created by using LVM Storage. You have stopped the pods that are using the PVCs that were created by using LVM Storage. 
Procedure Switch to the openshift-storage namespace by running the following command: USD oc project openshift-storage Check if the LogicalVolume custom resources (CRs) are present by running the following command: USD oc get logicalvolume If the LogicalVolume CRs are present, delete them by running the following command: USD oc delete logicalvolume <name> 1 1 Replace <name> with the name of the LogicalVolume CR. After deleting the LogicalVolume CRs, remove their finalizers by running the following command: USD oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1 1 Replace <name> with the name of the LogicalVolume CR. Check if the LVMVolumeGroup CRs are present by running the following command: USD oc get lvmvolumegroup If the LVMVolumeGroup CRs are present, delete them by running the following command: USD oc delete lvmvolumegroup <name> 1 1 Replace <name> with the name of the LVMVolumeGroup CR. After deleting the LVMVolumeGroup CRs, remove their finalizers by running the following command: USD oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1 1 Replace <name> with the name of the LVMVolumeGroup CR. Delete any LVMVolumeGroupNodeStatus CRs by running the following command: USD oc delete lvmvolumegroupnodestatus --all Delete the LVMCluster CR by running the following command: USD oc delete lvmcluster --all After deleting the LVMCluster CR, remove its finalizer by running the following command: USD oc patch lvmcluster <name> -p '{"metadata":{"finalizers":[]}}' --type=merge 1 1 Replace <name> with the name of the LVMCluster CR. | [
"cat << EOF | oc create -f - apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: <storage-class-name> 1 parameters: fsType: ext4 2 encrypted: \"true\" kmsKeyId: keyvalue 3 provisioner: ebs.csi.aws.com reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer EOF",
"cat << EOF | oc create -f - apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mypvc spec: accessModes: - ReadWriteOnce volumeMode: Filesystem storageClassName: <storage-class-name> resources: requests: storage: 1Gi EOF",
"cat << EOF | oc create -f - kind: Pod metadata: name: mypod spec: containers: - name: httpd image: quay.io/centos7/httpd-24-centos7 ports: - containerPort: 80 volumeMounts: - mountPath: /mnt/storage name: data volumes: - name: data persistentVolumeClaim: claimName: mypvc EOF",
"oc edit machineset <machine-set-name>",
"apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: spec: metadata: labels: disk: ultrassd 1 providerSpec: value: ultraSSDCapability: Enabled 2",
"oc create -f <machine-set-name>.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: ultra-disk-sc 1 parameters: cachingMode: None diskIopsReadWrite: \"2000\" 2 diskMbpsReadWrite: \"320\" 3 kind: managed skuname: UltraSSD_LRS provisioner: disk.csi.azure.com 4 reclaimPolicy: Delete volumeBindingMode: WaitForFirstConsumer 5",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: ultra-disk 1 spec: accessModes: - ReadWriteOnce storageClassName: ultra-disk-sc 2 resources: requests: storage: 4Gi 3",
"apiVersion: v1 kind: Pod metadata: name: nginx-ultra spec: nodeSelector: disk: ultrassd 1 containers: - name: nginx-ultra image: alpine:latest command: - \"sleep\" - \"infinity\" volumeMounts: - mountPath: \"/mnt/azure\" name: volume volumes: - name: volume persistentVolumeClaim: claimName: ultra-disk 2",
"oc get machines",
"oc debug node/<node-name> -- chroot /host lsblk",
"apiVersion: v1 kind: Pod metadata: name: ssd-benchmark1 spec: containers: - name: ssd-benchmark1 image: nginx ports: - containerPort: 80 name: \"http-server\" volumeMounts: - name: lun0p1 mountPath: \"/tmp\" volumes: - name: lun0p1 hostPath: path: /var/lib/lun0p1 type: DirectoryOrCreate nodeSelector: disktype: ultrassd",
"StorageAccountType UltraSSD_LRS can be used only when additionalCapabilities.ultraSSDEnabled is set.",
"oc -n <stuck_pod_namespace> describe pod <stuck_pod_name>",
"oc create secret generic <secret-name> --from-literal=azurestorageaccountname=<storage-account> \\ 1 --from-literal=azurestorageaccountkey=<storage-account-key> 2",
"apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" storageClassName: azure-file-sc azureFile: secretName: <secret-name> 3 shareName: share-1 4 readOnly: false",
"apiVersion: \"v1\" kind: \"PersistentVolumeClaim\" metadata: name: \"claim1\" 1 spec: accessModes: - \"ReadWriteOnce\" resources: requests: storage: \"5Gi\" 2 storageClassName: azure-file-sc 3 volumeName: \"pv0001\" 4",
"apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: volumeMounts: - mountPath: \"/data\" 2 name: azure-file-share volumes: - name: azure-file-share persistentVolumeClaim: claimName: claim1 3",
"apiVersion: \"v1\" kind: \"PersistentVolume\" metadata: name: \"pv0001\" 1 spec: capacity: storage: \"5Gi\" 2 accessModes: - \"ReadWriteOnce\" cinder: 3 fsType: \"ext3\" 4 volumeID: \"f37a03aa-6212-4c62-a805-9ce139fab180\" 5",
"oc create -f cinder-persistentvolume.yaml",
"oc create serviceaccount <service_account>",
"oc adm policy add-scc-to-user <new_scc> -z <service_account> -n <project>",
"apiVersion: v1 kind: ReplicationController metadata: name: frontend-1 spec: replicas: 1 1 selector: 2 name: frontend template: 3 metadata: labels: 4 name: frontend 5 spec: containers: - image: openshift/hello-openshift name: helloworld ports: - containerPort: 8080 protocol: TCP restartPolicy: Always serviceAccountName: <service_account> 6 securityContext: fsGroup: 7777 7",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce fc: wwids: [scsi-3600508b400105e210000900000490000] 1 targetWWNs: ['500a0981891b8dc5', '500a0981991b8dc5'] 2 lun: 2 3 fsType: ext4",
"{ \"fooServer\": \"192.168.0.1:1234\", 1 \"fooVolumeName\": \"bar\", \"kubernetes.io/fsType\": \"ext4\", 2 \"kubernetes.io/readwrite\": \"ro\", 3 \"kubernetes.io/secret/<key name>\": \"<key value>\", 4 \"kubernetes.io/secret/<another key name>\": \"<another key value>\", }",
"{ \"status\": \"<Success/Failure/Not supported>\", \"message\": \"<Reason for success/failure>\" }",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce flexVolume: driver: openshift.com/foo 3 fsType: \"ext4\" 4 secretRef: foo-secret 5 readOnly: true 6 options: 7 fooServer: 192.168.0.1:1234 fooVolumeName: bar",
"\"fsType\":\"<FS type>\", \"readwrite\":\"<rw>\", \"secret/key1\":\"<secret1>\" \"secret/keyN\":\"<secretN>\"",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.16.154.81:3260 iqn: iqn.2014-12.example.server:storage.target00 lun: 0 fsType: 'ext4'",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 chapAuthDiscovery: true 1 chapAuthSession: true 2 secretRef: name: chap-secret 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] 1 iqn: iqn.2016-04.test.com:storage.target00 lun: 0 fsType: ext4 readOnly: false",
"apiVersion: v1 kind: PersistentVolume metadata: name: iscsi-pv spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce iscsi: targetPortal: 10.0.0.1:3260 portals: ['10.0.2.16:3260', '10.0.2.17:3260', '10.0.2.18:3260'] iqn: iqn.2016-04.test.com:storage.target00 lun: 0 initiatorName: iqn.2016-04.test.com:custom.iqn 1 fsType: ext4 readOnly: false",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 nfs: 4 path: /tmp 5 server: 172.17.0.2 6 persistentVolumeReclaimPolicy: Retain 7",
"oc get pv",
"NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE pv0001 <none> 5Gi RWO Available 31s",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: nfs-claim1 spec: accessModes: - ReadWriteOnce 1 resources: requests: storage: 5Gi 2 volumeName: pv0001 storageClassName: \"\"",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE nfs-claim1 Bound pv0001 5Gi RWO 2m",
"ls -lZ /opt/nfs -d",
"drwxrws---. nfsnobody 5555 unconfined_u:object_r:usr_t:s0 /opt/nfs",
"id nfsnobody",
"uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)",
"spec: containers: - name: securityContext: 1 supplementalGroups: [5555] 2",
"spec: containers: 1 - name: securityContext: runAsUser: 65534 2",
"setsebool -P virt_use_nfs 1",
"/<example_fs> *(rw,root_squash)",
"iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 2049 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 20048 -j ACCEPT",
"iptables -I INPUT 1 -p tcp --dport 111 -j ACCEPT",
"apiVersion: v1 kind: PersistentVolume metadata: name: nfs1 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"",
"apiVersion: v1 kind: PersistentVolume metadata: name: nfs2 spec: capacity: storage: 1Mi accessModes: - ReadWriteMany nfs: server: 192.168.1.1 path: \"/\"",
"echo 'Y' > /sys/module/nfsd/parameters/nfs4_disable_idmapping",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: pvc 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 1Gi 3",
"oc create -f pvc.yaml",
"vmkfstools -c <size> /vmfs/volumes/<datastore-name>/volumes/<disk-name>.vmdk",
"shell vmware-vdiskmanager -c -t 0 -s <size> -a lsilogic <disk-name>.vmdk",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv1 1 spec: capacity: storage: 1Gi 2 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain vsphereVolume: 3 volumePath: \"[datastore1] volumes/myDisk\" 4 fsType: ext4 5",
"oc create -f pv1.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc1 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: \"1Gi\" 3 volumeName: pv1 4",
"oc create -f pvc1.yaml",
"oc adm new-project openshift-local-storage",
"oc annotate namespace openshift-local-storage openshift.io/node-selector=''",
"oc annotate namespace openshift-local-storage workload.openshift.io/allowed='management'",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: local-operator-group namespace: openshift-local-storage spec: targetNamespaces: - openshift-local-storage --- apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: local-storage-operator namespace: openshift-local-storage spec: channel: stable installPlanApproval: Automatic 1 name: local-storage-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc apply -f openshift-local-storage.yaml",
"oc -n openshift-local-storage get pods",
"NAME READY STATUS RESTARTS AGE local-storage-operator-746bf599c9-vlt5t 1/1 Running 0 19m",
"oc get csvs -n openshift-local-storage",
"NAME DISPLAY VERSION REPLACES PHASE local-storage-operator.4.2.26-202003230335 Local Storage 4.2.26-202003230335 Succeeded",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-140-183 - ip-10-0-158-139 - ip-10-0-164-33 storageClassDevices: - storageClassName: \"local-sc\" 3 volumeMode: Filesystem 4 fsType: xfs 5 devicePaths: 6 - /path/to/device 7",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" 1 spec: nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-136-143 - ip-10-0-140-255 - ip-10-0-144-180 storageClassDevices: - storageClassName: \"local-sc\" 3 volumeMode: Block 4 devicePaths: 5 - /path/to/device 6",
"oc create -f <local-volume>.yaml",
"oc get all -n openshift-local-storage",
"NAME READY STATUS RESTARTS AGE pod/diskmaker-manager-9wzms 1/1 Running 0 5m43s pod/diskmaker-manager-jgvjp 1/1 Running 0 5m43s pod/diskmaker-manager-tbdsj 1/1 Running 0 5m43s pod/local-storage-operator-7db4bd9f79-t6k87 1/1 Running 0 14m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/local-storage-operator-metrics ClusterIP 172.30.135.36 <none> 8383/TCP,8686/TCP 14m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/diskmaker-manager 3 3 3 3 3 <none> 5m43s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/local-storage-operator 1/1 1 1 14m NAME DESIRED CURRENT READY AGE replicaset.apps/local-storage-operator-7db4bd9f79 1 1 1 14m",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m",
"apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-filesystem spec: capacity: storage: 100Gi volumeMode: Filesystem 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node",
"apiVersion: v1 kind: PersistentVolume metadata: name: example-pv-block spec: capacity: storage: 100Gi volumeMode: Block 1 accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Delete storageClassName: local-sc 2 local: path: /dev/xvdf 3 nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - example-node",
"oc create -f <example-pv>.yaml",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE example-pv-filesystem 100Gi RWO Delete Available local-sc 3m47s example-pv1 1Gi RWO Delete Bound local-storage/pvc1 local-sc 12h example-pv2 1Gi RWO Delete Bound local-storage/pvc2 local-sc 12h example-pv3 1Gi RWO Delete Bound local-storage/pvc3 local-sc 12h",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: local-pvc-name 1 spec: accessModes: - ReadWriteOnce volumeMode: Filesystem 2 resources: requests: storage: 100Gi 3 storageClassName: local-sc 4",
"oc create -f <local-pvc>.yaml",
"apiVersion: v1 kind: Pod spec: containers: volumeMounts: - name: local-disks 1 mountPath: /data 2 volumes: - name: local-disks persistentVolumeClaim: claimName: local-pvc-name 3",
"oc create -f <local-pod>.yaml",
"apiVersion: local.storage.openshift.io/v1alpha1 kind: LocalVolumeSet metadata: name: example-autodetect spec: nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 storageClassName: local-sc 1 volumeMode: Filesystem fsType: ext4 maxDeviceCount: 10 deviceInclusionSpec: deviceTypes: 2 - disk - part deviceMechanicalProperties: - NonRotational minSize: 10G maxSize: 100G models: - SAMSUNG - Crucial_CT525MX3 vendors: - ATA - ST2000LM",
"oc apply -f local-volume-set.yaml",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE local-pv-1cec77cf 100Gi RWO Delete Available local-sc 88m local-pv-2ef7cd2a 100Gi RWO Delete Available local-sc 82m local-pv-3fa1c73 100Gi RWO Delete Available local-sc 48m",
"apiVersion: \"local.storage.openshift.io/v1\" kind: \"LocalVolume\" metadata: name: \"local-disks\" namespace: \"openshift-local-storage\" spec: tolerations: - key: localstorage 1 operator: Equal 2 value: \"localstorage\" 3 storageClassDevices: - storageClassName: \"local-sc\" volumeMode: Block 4 devicePaths: 5 - /dev/xvdg",
"spec: tolerations: - key: node-role.kubernetes.io/master operator: Exists",
"oc edit localvolume <name> -n openshift-local-storage",
"oc delete pv <pv-name>",
"oc debug node/<node-name> -- chroot /host rm -rf /mnt/local-storage/<sc-name> 1",
"oc delete localvolume --all --all-namespaces oc delete localvolumeset --all --all-namespaces oc delete localvolumediscovery --all --all-namespaces",
"oc delete pv <pv-name>",
"oc delete project openshift-local-storage",
"apiVersion: v1 kind: Pod metadata: name: test-host-mount spec: containers: - image: registry.access.redhat.com/ubi9/ubi name: test-container command: ['sh', '-c', 'sleep 3600'] volumeMounts: - mountPath: /host name: host-slash volumes: - name: host-slash hostPath: path: / type: ''",
"apiVersion: v1 kind: PersistentVolume metadata: name: task-pv-volume 1 labels: type: local spec: storageClassName: manual 2 capacity: storage: 5Gi accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain hostPath: path: \"/mnt/data\" 4",
"oc create -f pv.yaml",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: task-pvc-volume spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: manual",
"oc create -f pvc.yaml",
"apiVersion: v1 kind: Pod metadata: name: pod-name 1 spec: containers: securityContext: privileged: true 2 volumeMounts: - mountPath: /data 3 name: hostpath-privileged securityContext: {} volumes: - name: hostpath-privileged persistentVolumeClaim: claimName: task-pvc-volume 4",
"apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage",
"oc create -f <file_name>",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage",
"oc create -f <file_name>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace",
"oc create -f <file_name>",
"oc get csv -n openshift-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase 4.13.0-202301261535 Succeeded",
"kind: ImageSetConfiguration apiVersion: mirror.openshift.io/v1alpha2 archiveSize: 4 1 storageConfig: 2 registry: imageURL: example.com/mirror/oc-mirror-metadata 3 skipTLS: false mirror: platform: channels: - name: stable-4.14 4 type: ocp graph: true 5 operators: - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14 6 packages: - name: lvms-operator 7 channels: - name: stable 8 additionalImages: - name: registry.redhat.io/ubi9/ubi:latest 9 helm: {}",
"oc create ns <namespace>",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-install-lvms spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: 1 matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-install-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-install-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: install-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: install-lvms spec: disabled: false remediationAction: enforce policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: install-lvms spec: object-templates: - complianceType: musthave objectDefinition: 2 apiVersion: v1 kind: Namespace metadata: labels: openshift.io/cluster-monitoring: \"true\" pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged name: openshift-storage - complianceType: musthave objectDefinition: 3 apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: musthave objectDefinition: 4 apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms namespace: openshift-storage spec: installPlanApproval: Automatic name: lvms-operator source: redhat-operators sourceNamespace: openshift-marketplace remediationAction: enforce severity: low",
"oc create -f <file_name> -n <namespace>",
"apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: tolerations: - effect: NoSchedule key: xyz operator: Equal value: \"true\" storage: deviceClasses: - name: vg1 fstype: ext4 1 default: true nodeSelector: 2 nodeSelectorTerms: - matchExpressions: - key: mykey operator: In values: - ssd deviceSelector: 3 paths: - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 thinPoolConfig: name: thin-pool-1 sizePercent: 90 4 overprovisionRatio: 10",
"apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 nodeSelector: 2 deviceSelector: 3 thinPoolConfig: 4",
"oc create -f <file_name>",
"lvmcluster/lvmcluster created",
"oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status.state}' -n <namespace>",
"{\"deviceClassStatuses\": 1 [ { \"name\": \"vg1\", \"nodeStatus\": [ 2 { \"devices\": [ 3 \"/dev/nvme0n1\", \"/dev/nvme1n1\", \"/dev/nvme2n1\" ], \"node\": \"kube-node\", 4 \"status\": \"Ready\" 5 } ] } ] \"state\":\"Ready\"} 6",
"status: deviceClassStatuses: - name: vg1 nodeStatus: - node: my-node-1.example.com reason: no available devices found for volume group status: Failed state: Failed",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m",
"oc get volumesnapshotclass",
"NAME DRIVER DELETIONPOLICY AGE lvms-vg1 topolvm.io Delete 24h",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms namespace: openshift-storage spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: 1 deviceSelector: 2 thinPoolConfig: 3 nodeSelector: 4 remediationAction: enforce severity: low",
"oc create -f <file_name> -n <cluster_namespace> 1",
"oc delete lvmcluster <lvmclustername> -n openshift-storage",
"oc get lvmcluster -n <namespace>",
"No resources found in openshift-storage namespace.",
"oc delete -f <file_name> -n <cluster_namespace> 1",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-delete annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: enforce disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal spec: remediationAction: enforce 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-delete placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-delete subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-delete --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-delete spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: 3 matchExpressions: - key: mykey operator: In values: - myvalue",
"oc create -f <file_name> -n <namespace>",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: policy-lvmcluster-inform annotations: policy.open-cluster-management.io/standards: NIST SP 800-53 policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration spec: remediationAction: inform disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-lvmcluster-removal-inform spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: kind: LVMCluster apiVersion: lvm.topolvm.io/v1alpha1 metadata: name: my-lvmcluster namespace: openshift-storage 2 --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-policy-lvmcluster-check placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-policy-lvmcluster-check subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: policy-lvmcluster-inform --- apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-policy-lvmcluster-check spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue",
"oc create -f <file_name> -n <namespace>",
"oc get policy -n <namespace>",
"NAME REMEDIATION ACTION COMPLIANCE STATE AGE policy-lvmcluster-delete enforce Compliant 15m policy-lvmcluster-inform inform Compliant 15m",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: lvm-block-1 1 namespace: default spec: accessModes: - ReadWriteOnce volumeMode: Block 2 resources: requests: storage: 10Gi 3 storageClassName: lvms-vg1 4",
"oc create -f <file_name> -n <application_namespace>",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1 Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s",
"oc edit <lvmcluster_file_name> -n <namespace>",
"apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1",
"apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - /dev/disk/by-path/pci-0000:90:00.0-nvme-1",
"oc edit -f <file_name> -ns <namespace> 1",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: lvms spec: object-templates: - complianceType: musthave objectDefinition: apiVersion: lvm.topolvm.io/v1alpha1 kind: LVMCluster metadata: name: my-lvmcluster namespace: openshift-storage spec: storage: deviceClasses: deviceSelector: 1 paths: 2 - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 optionalPaths: 3 - /dev/disk/by-path/pci-0000:89:00.0-nvme-1",
"oc patch pvc <pvc_name> -n <application_namespace> -p \\ 1 '{ \"spec\": { \"resources\": { \"requests\": { \"storage\": \"<desired_size>\" }}}}' --type=merge 2",
"oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage}",
"oc delete pvc <pvc_name> -n <namespace>",
"oc get pvc -n <namespace>",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshot metadata: name: lvm-block-1-snap 1 spec: source: persistentVolumeClaimName: lvm-block-1 2 volumeSnapshotClassName: lvms-vg1 3",
"oc get volumesnapshotclass",
"oc create -f <file_name> -n <namespace>",
"oc get volumesnapshot -n <namespace>",
"NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE lvm-block-1-snap true lvms-test-1 1Gi lvms-vg1 snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91 19s 19s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-block-1-restore spec: accessModes: - ReadWriteOnce volumeMode: Block Resources: Requests: storage: 2Gi 1 storageClassName: lvms-vg1 2 dataSource: name: lvm-block-1-snap 3 kind: VolumeSnapshot apiGroup: snapshot.storage.k8s.io",
"oc create -f <file_name> -n <namespace>",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-restore Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s",
"oc delete volumesnapshot <volume_snapshot_name> -n <namespace>",
"oc get volumesnapshot -n <namespace>",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: lvm-pvc-clone spec: accessModes: - ReadWriteOnce storageClassName: lvms-vg1 1 volumeMode: Filesystem 2 dataSource: kind: PersistentVolumeClaim name: lvm-pvc 3 resources: requests: storage: 1Gi 4",
"oc create -f <file_name> -n <namespace>",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvm-block-1-clone Bound pvc-e90169a8-fd71-4eea-93b8-817155f60e47 1Gi RWO lvms-vg1 5s",
"oc delete pvc <clone_pvc_name> -n <namespace>",
"oc get pvc -n <namespace>",
"oc patch subscription lvms-operator -n openshift-storage --type merge --patch '{\"spec\":{\"channel\":\"<update_channel>\"}}' 1",
"oc get events -n openshift-storage",
"8m13s Normal RequirementsUnknown clusterserviceversion/lvms-operator.v4.14 requirements not yet checked 8m11s Normal RequirementsNotMet clusterserviceversion/lvms-operator.v4.14 one or more requirements couldn't be found 7m50s Normal AllRequirementsMet clusterserviceversion/lvms-operator.v4.14 all requirements found, attempting install 7m50s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.14 waiting for install components to report healthy 7m49s Normal InstallWaiting clusterserviceversion/lvms-operator.v4.14 installing: waiting for deployment lvms-operator to become ready: deployment \"lvms-operator\" waiting for 1 outdated replica(s) to be terminated 7m39s Normal InstallSucceeded clusterserviceversion/lvms-operator.v4.14 install strategy completed with no errors",
"oc get subscription lvms-operator -n openshift-storage -o jsonpath='{.status.installedCSV}'",
"lvms-operator.v4.14",
"openshift.io/cluster-monitoring=true",
"oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV",
"currentCSV: lvms-operator.v4.15.3",
"oc delete subscription.operators.coreos.com lvms-operator -n <namespace>",
"subscription.operators.coreos.com \"lvms-operator\" deleted",
"oc delete clusterserviceversion <currentCSV> -n <namespace> 1",
"clusterserviceversion.operators.coreos.com \"lvms-operator.v4.15.3\" deleted",
"oc get csv -n <namespace>",
"oc delete -f <policy> -n <namespace> 1",
"apiVersion: apps.open-cluster-management.io/v1 kind: PlacementRule metadata: name: placement-uninstall-lvms spec: clusterConditions: - status: \"True\" type: ManagedClusterConditionAvailable clusterSelector: matchExpressions: - key: mykey operator: In values: - myvalue --- apiVersion: policy.open-cluster-management.io/v1 kind: PlacementBinding metadata: name: binding-uninstall-lvms placementRef: apiGroup: apps.open-cluster-management.io kind: PlacementRule name: placement-uninstall-lvms subjects: - apiGroup: policy.open-cluster-management.io kind: Policy name: uninstall-lvms --- apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: annotations: policy.open-cluster-management.io/categories: CM Configuration Management policy.open-cluster-management.io/controls: CM-2 Baseline Configuration policy.open-cluster-management.io/standards: NIST SP 800-53 name: uninstall-lvms spec: disabled: false policy-templates: - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: uninstall-lvms spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Namespace metadata: name: openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-storage-operatorgroup namespace: openshift-storage spec: targetNamespaces: - openshift-storage - complianceType: mustnothave objectDefinition: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: lvms-operator namespace: openshift-storage remediationAction: enforce severity: low - objectDefinition: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-remove-lvms-crds spec: object-templates: - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: logicalvolumes.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmclusters.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroupnodestatuses.lvm.topolvm.io - complianceType: mustnothave objectDefinition: apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: lvmvolumegroups.lvm.topolvm.io remediationAction: enforce severity: high",
"oc create -f <policy> -ns <namespace>",
"oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.14 --dest-dir=<directory_name>",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE lvms-test Pending lvms-vg1 11s",
"oc describe pvc <pvc_name> 1",
"Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 4s (x2 over 17s) persistentvolume-controller storageclass.storage.k8s.io \"lvms-vg1\" not found",
"oc get lvmcluster -n openshift-storage",
"NAME AGE my-lvmcluster 65m",
"oc get pods -n openshift-storage",
"NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m vg-manager-r6zdv 1/1 Running 0 66m",
"oc logs -l app.kubernetes.io/component=vg-manager -n openshift-storage",
"oc get pods -n openshift-storage",
"NAME READY STATUS RESTARTS AGE lvms-operator-7b9fb858cb-6nsml 3/3 Running 0 70m topolvm-controller-5dd9cf78b5-7wwr2 5/5 Running 0 66m topolvm-node-dr26h 4/4 Running 0 66m topolvm-node-54as8 4/4 Running 0 66m topolvm-node-78fft 4/4 Running 17 (8s ago) 66m vg-manager-r6zdv 1/1 Running 0 66m vg-manager-990ut 1/1 Running 0 66m vg-manager-an118 1/1 Running 0 66m",
"oc describe pvc <pvc_name> 1",
"oc project openshift-storage",
"oc get logicalvolume",
"oc delete logicalvolume <name> 1",
"oc patch logicalvolume <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1",
"oc get lvmvolumegroup",
"oc delete lvmvolumegroup <name> 1",
"oc patch lvmvolumegroup <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1",
"oc delete lvmvolumegroupnodestatus --all",
"oc delete lvmcluster --all",
"oc patch lvmcluster <name> -p '{\"metadata\":{\"finalizers\":[]}}' --type=merge 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage/configuring-persistent-storage |
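Note on the LVM Storage procedures above: the following is a minimal, illustrative sketch (not part of the original procedure) of two of the steps described in that section — applying the cluster-monitoring label and removing LogicalVolume finalizers during a forced clean-up. It assumes LVM Storage is installed in the openshift-storage namespace, as in the examples above; adjust names for your environment.
# Enable cluster monitoring by labeling the namespace, then confirm the label is present
oc label namespace openshift-storage openshift.io/cluster-monitoring=true
oc get namespace openshift-storage --show-labels
# During a forced clean-up, the finalizer-removal patch can be applied to every remaining LogicalVolume CR in one pass
for lv in $(oc get logicalvolume -o name); do
  oc patch "$lv" -p '{"metadata":{"finalizers":[]}}' --type=merge
done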
Chapter 3. Performing basic operations with the Block Storage service (cinder) | Chapter 3. Performing basic operations with the Block Storage service (cinder) Create and configure Block Storage volumes as the primary form of persistent storage for Compute instances in your overcloud. Create volumes, attach your volumes to instances, edit and resize your volumes, and modify volume ownership. 3.1. Creating Block Storage volumes Create volumes to provide persistent storage for instances that you launch with the Compute service (nova) in the overcloud. To create an encrypted volume, you must first have a volume type configured specifically for volume encryption. In addition, you must configure both Compute and Block Storage services to use the same static key. For information about how to set up the requirements for volume encryption, see Block Storage service (cinder) volume encryption . Important The default maximum number of volumes you can create for a project is 10. Prerequisites Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Managing cloud resources with the OpenStack Dashboard . Procedure Log into the dashboard. Select Project > Compute > Volumes . Click Create Volume , and edit the following fields: Field Description Volume name Name of the volume. Description Optional, short description of the volume. Type Optional volume type. For more information, see Group volume configuration with volume types . If you create a volume and do not specify a volume type, then Block Storage uses the default volume type. For more information on defining default volume types, see Defining a project-specific default volume type . If you do not specify a back end, the Block Storage scheduler will try to select a suitable back end for you. For more information, see Volume allocation on multiple back ends . Note If there is no suitable back end then the volume will not be created. You can also change the volume type after the volume has been created. For more information, see Block Storage volume retyping . Size (GB) Volume size (in gigabytes). If you want to create an encrypted volume from an unencrypted image, you must ensure that the volume size is larger than the image size so that the encryption data does not truncate the volume data. Availability Zone Availability zones (logical server groups), along with host aggregates, are a common method for segregating resources within OpenStack. Availability zones are defined during installation. For more information about availability zones and host aggregates, see Creating and managing host aggregates in the Configuring the Compute service for instance creation guide. Specify a Volume Source : Source Description No source, empty volume The volume is empty and does not contain a file system or partition table. Snapshot Use an existing snapshot as a volume source. If you select this option, a new Use snapshot as a source list opens; you can then choose a snapshot from the list. If you want to create a new volume from a snapshot of an encrypted volume, you must ensure that the new volume is at least 1GB larger than the old volume. For more information about volume snapshots, see Creating new volumes from snapshots . Image Use an existing image as a volume source. If you select this option, a new Use snapshot as a source list opens; you can then choose an image from the list. Volume Use an existing volume as a volume source. 
If you select this option, a new Use snapshot as a source list opens; you can then choose a volume from the list. Click Create Volume . After the volume is created, its name appears in the Volumes table. 3.2. Editing a volume name or description You can change the names and descriptions of your volumes in the dashboard. Prerequisites Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Managing cloud resources with the OpenStack Dashboard . Procedure Log into the dashboard. Select Project > Compute > Volumes . Select the volume's Edit Volume button. Edit the volume name or description as required. Click Edit Volume to save your changes. 3.3. Resizing (extending) a Block Storage service volume Resize volumes to increase the storage capacity of the volumes. Note The ability to resize a volume in use is supported but is driver dependent. RBD is supported. You cannot extend in-use multi-attach volumes. For more information about support for this feature, contact Red Hat Support. Procedure Source your credentials file. List the volumes to retrieve the ID of the volume you want to extend: Increase the size of the volume: Replace <volume_id> with the ID of the volume you want to extend. Replace <size> with the required size of this volume, in gigabytes. Note Ensure that the specified size is greater than the existing size of this volume. For example: 3.4. Deleting a Block Storage service volume You can delete volumes that you no longer require. Note You cannot delete a volume if it has existing snapshots. For more information about deleting snapshots, see Deleting volume snapshots . Prerequisites Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Managing cloud resources with the OpenStack Dashboard . Procedure Log into the dashboard. Select Project > Compute > Volumes . In the Volumes table, select the volume to delete. Click Delete Volumes . 3.5. Volume allocation on multiple back ends When you create a volume, you can select the volume type for the required back end from the Type list. For more information, see Creating Block Storage volumes . Note If the Block Storage service (cinder) is configured to use multiple back ends, then a volume type must be created for each back end. If you do not specify a back end when creating the volume, the Block Storage scheduler will try to select a suitable back end for you. The scheduler uses filters, for the following default associated settings of the volume, to select suitable back ends: AvailabilityZoneFilter Filters out all back ends that do not meet the availability zone requirements of the requested volume. CapacityFilter Selects only back ends with enough space to accommodate the volume. CapabilitiesFilter Selects only back ends that can support any specified settings in the volume. InstanceLocality Configures clusters to use volumes local to the same node. If there is more than one suitable back end, then the scheduler uses a weighting method to pick the best back end. By default, the CapacityWeigher method is used, so that the filtered back end with the most available free space is selected. Note If there is no suitable back end then the volume will not be created. Additional resources Creating and configuring a volume type Block Storage volume retyping Configuring the default Block Storage scheduler filters 3.6. Attaching a volume to an instance When you close an instance all the data is lost. You can attach a volume for persistent storage. 
You can attach a volume to only one instance at a time, unless it has a multi-attach volume type. For more information about creating multi-attach volumes, see Volumes that can be attached to multiple instances . Prerequisites Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Managing cloud resources with the OpenStack Dashboard . Procedure Log into the dashboard. Select Project > Compute > Volumes . Select the Edit Attachments action. If the volume is not attached to an instance, the Attach To Instance drop-down list is visible. From the Attach To Instance list, select the instance to which you want to attach the volume. Click Attach Volume . 3.7. Detaching a volume from an instance You must detach a volume from an instance when you want to attach this volume to another instance, unless it has a multi-attach volume type. You must also detach a volume to change the access permissions to the volume or to delete the volume. Prerequisites Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Managing cloud resources with the OpenStack Dashboard . Procedure Log into the dashboard. Select Project > Compute > Volumes . Select the volume's Manage Attachments action. If the volume is attached to an instance, the instance's name is displayed in the Attachments table. Click Detach Volume in this and the dialog screen. steps Attaching a volume to an instance 3.8. Configuring the access rights to a volume The default state of a volume is read-write to allow data to be written to and read from it. You can mark a volume as read-only to protect its data from being accidentally overwritten or deleted. Note After changing a volume to be read-only you can change it back to read-write again. Prerequisites If the volume is already attached to an instance, then detach this volume. For more information, see Detaching a volume from an instance . Procedure Source your credentials file. List the volumes to retrieve the ID of the volume you want to configure: Set the required access rights for this volume: To set the access rights of a volume to read-only: Replace <volume_id> with the ID of the required volume. To set the access rights of a volume to read-write: If you detached this volume from an instance to change the access rights, then re-attach the volume. For more information, see Attaching a volume to an instance . 3.9. Changing a volume owner with the Dashboard To change a volume's owner, you will have to perform a volume transfer. A volume transfer is initiated by the volume's owner, and the volume's change in ownership is complete after the transfer is accepted by the volume's new owner. Prerequisites Access to the Red Hat OpenStack Platform (RHOSP) Dashboard (horizon). For more information, see Managing cloud resources with the OpenStack Dashboard . Procedure Log into the dashboard as the volume owner. Select Projects > Volumes . In the Actions column of the volume to transfer, select Create Transfer . In the Create Transfer dialog box, enter a name for the transfer and click Create Volume Transfer . The volume transfer is created, and in the Volume Transfer screen you can capture the transfer ID and the authorization key to send to the recipient project. Click the Download transfer credentials button to download a .txt file containing the transfer name , transfer ID , and authorization key . Note The authorization key is available only in the Volume Transfer screen. 
If you lose the authorization key, you must cancel the transfer and create another transfer to generate a new authorization key. Close the Volume Transfer screen to return to the volume list. The volume status changes to awaiting-transfer until the recipient project accepts the transfer Accept a volume transfer from the dashboard Log into the dashboard as the recipient project owner. Select Projects > Volumes . Click Accept Transfer . In the Accept Volume Transfer dialog box, enter the transfer ID and the authorization key that you received from the volume owner and click Accept Volume Transfer . The volume now appears in the volume list for the active project. 3.10. Changing a volume owner with the CLI To change a volume's owner, you will have to perform a volume transfer. A volume transfer is initiated by the volume's owner, and the volume's change in ownership is complete after the transfer is accepted by the volume's new owner. Procedure Log in as the volume's current owner. List the available volumes: Initiate the volume transfer: Replace <volume> with the name or ID of the volume you wish to transfer. For example: The cinder transfer-create command clears the ownership of the volume and creates an id and auth_key for the transfer. These values can be given to, and used by, another user to accept the transfer and become the new owner of the volume. The new user can now claim ownership of the volume. To do so, the user should first log in from the command line and run: Replace <transfer_id> with the id value returned by the cinder transfer-create command. Replace <transfer_key> with the auth_key value returned by the cinder transfer-create command. For example: Note You can view all available volume transfers using: | [
"cinder list",
"cinder extend <volume_id> <size>",
"cinder extend 573e024d-5235-49ce-8332-be1576d323f8 10",
"cinder list",
"cinder readonly-mode-update <volume_id> true",
"cinder readonly-mode-update <volume_id> false",
"cinder list",
"cinder transfer-create <volume>",
"+------------+--------------------------------------+ | Property | Value | +------------+--------------------------------------+ | auth_key | f03bf51ce7ead189 | | created_at | 2014-12-08T03:46:31.884066 | | id | 3f5dc551-c675-4205-a13a-d30f88527490 | | name | None | | volume_id | bcf7d015-4843-464c-880d-7376851ca728 | +------------+--------------------------------------+",
"cinder transfer-accept <transfer_id> <transfer_key>",
"cinder transfer-accept 3f5dc551-c675-4205-a13a-d30f88527490 f03bf51ce7ead189",
"cinder transfer-list"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_persistent_storage/assembly_performing-basic-operations-with-block-storage_configuring-cinder |
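The dashboard procedures above for attaching and detaching volumes can also be performed from the command line. The following is a minimal sketch using the unified openstack client; the instance and volume names are placeholders, and client options can vary between RHOSP releases, so verify them with openstack help server before relying on them:
openstack server add volume <instance_name_or_id> <volume_name_or_id>
openstack server remove volume <instance_name_or_id> <volume_name_or_id>
Afterwards, run openstack volume show <volume_name_or_id> and check the attachments field to confirm that the volume is in the expected state.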
Part I. Upgrading your Red Hat build of OptaPlanner projects to OptaPlanner 8 If you have OptaPlanner projects that you created with the OptaPlanner 7 or earlier public API and you want to upgrade your project code to OptaPlanner 8, review the information in this guide. This guide also includes changes to implementation classes which are outside of the public API. The OptaPlanner public API is a subset of the OptaPlanner source code that enables you to interact with OptaPlanner through Java code. So that you can upgrade to higher OptaPlanner versions within the same major release, OptaPlanner follows semantic versioning . This means that you can upgrade from OptaPlanner 7.44 to OptaPlanner 7.48 for example without breaking your code that uses the OptaPlanner public API. The OptaPlanner public API classes are compatible within the versions of a major OptaPlanner release. However, when Red Hat releases a new major release, disruptive changes are sometimes introduced to the public API. OptaPlanner 8 is a new major release and some of the changes to the public API are not compatible with earlier versions of OptaPlanner. OptaPlanner 8 will be the foundation for the 8.x series for the next few years. The changes to the public API that are not compatible with earlier versions that were required for this release were made for the long term benefit of this project. Table 1. Red Hat Process Automation Manager and Red Hat build of OptaPlanner versions Process Automation Manager OptaPlanner 7.7 7.33 7.8 7.39 7.9 7.44 7.10 7.48 7.11 8.5 Every upgrade note has a label that indicates how likely it is that your code will be affected by that change. The following table describes each label: Table 2. Upgrade impact labels Label Impact Major Likely to affect your code. Minor Unlikely to affect your code, especially if you followed the examples, unless you have customized the code extensively. Any changes that are not compatible with earlier versions of OptaPlanner are annotated with the Public API tag. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/assembly-optimizer-migration-8_developing-solvers
Getting started with Ansible Playbooks | Getting started with Ansible Playbooks Red Hat Ansible Automation Platform 2.4 Getting started with ansible playbooks Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/getting_started_with_ansible_playbooks/index |
Chapter 2. Requirements | Chapter 2. Requirements 2.1. Red Hat Virtualization Manager Requirements 2.1.1. Hardware Requirements The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Red Hat certified hardware . Table 2.1. Red Hat Virtualization Manager Hardware Requirements Resource Minimum Recommended CPU A dual core x86_64 CPU. A quad core x86_64 CPU or multiple dual core x86_64 CPUs. Memory 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. 16 GB of system RAM. Hard Disk 25 GB of locally accessible, writable disk space. 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size. Network Interface 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 2.1.2. Browser Requirements The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers: Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier. Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier. Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier. Table 2.2. Browser Requirements Support Tier Operating System Family Browser Tier 1 Red Hat Enterprise Linux Mozilla Firefox Extended Support Release (ESR) version Any Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge Tier 2 Tier 3 Any Earlier versions of Google Chrome or Mozilla Firefox Any Other browsers 2.1.3. Client Requirements Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. You can access virtual machine consoles using the SPICE, VNC, or RDP (Windows only) protocols. You can install the QXLDOD graphical driver in the guest operating system to improve the functionality of SPICE. SPICE currently supports a maximum resolution of 2560x1600 pixels. Client Operating System SPICE Support Supported QXLDOD drivers are available on Red Hat Enterprise Linux 7.2 and later, and Windows 10. Note SPICE may work with Windows 8 or 8.1 using QXLDOD drivers, but it is neither certified nor tested. 2.1.4. Operating System Requirements The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 8.6. 
Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Find a certified solution . For more information on the requirements and limitations that apply to guests see Red Hat Enterprise Linux Technology Capabilities and Limits and Supported Limits for Red Hat Virtualization . 2.2.1. CPU Requirements All CPUs must have support for the Intel(R) 64 or AMD64 CPU extensions, and the AMD-VTM or Intel VT(R) hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere SandyBridge IvyBridge Haswell Broadwell Skylake Client Skylake Server Cascadelake Server IBM POWER8 POWER9 For each CPU model with security updates, the CPU Type lists a basic type and a secure type. For example: Intel Cascadelake Server Family Secure Intel Cascadelake Server Family The Secure CPU type contains the latest updates. For details, see BZ# 1731395 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. Procedure At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. For cluster levels 4.2 to 4.5, the maximum supported RAM per VM in Red Hat Virtualization Host is 6 TB. For cluster levels 4.6 to 4.7, the maximum supported RAM per VM in Red Hat Virtualization Host is 16 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based. 
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity restores. Using network storage might result in a performance downgrade. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, use the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 5 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB /var/tmp - 10 GB swap - 1 GB. See What is the recommended swap size for Red Hat platforms? for details. Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 64 GiB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 10 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Each host should have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. All PCIe switches and bridges between the PCIe device and the root port should support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 8 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card. Check vendor specification and datasheets to confirm that your hardware meets these requirements. 
The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Select a vGPU type and the number of instances that you would like to use with this virtual machine using the Manage vGPU dialog in the Administration Portal Host Devices tab of the virtual machine. vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking requirements 2.3.1. General requirements Red Hat Virtualization requires IPv6 to remain enabled on the physical or virtual machine running the Manager. Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Firewall Requirements for DNS, NTP, and IPMI Fencing The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Use DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... ) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. 2.3.3. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.3. 
Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager including backend configuration, and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager ( ovirt-imageio service) Required for communication with the ovirt-imageo service. Yes M8 6642 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to Open Virtual Network (OVN) database Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn . Because they both run on the same host, their communication is not visible to the network. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.4. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration. 
To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see RHV: How to customize the Host's firewall rules? . Note A diagram of these firewall requirements is available at Red Hat Virtualization: Firewall Requirements Diagram . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. Yes H11 54322 TCP Red Hat Virtualization Manager ovirt-imageio service Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ovirt-imageo service. Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default. 
No H14 123 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NTP Server NTP requests from ports above 1023 to port 123, and responses. This port is required and open by default. H15 4500 TCP, UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H16 500 UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H17 - AH, ESP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled . 2.3.6. Maximum Transmission Unit Requirements The recommended Maximum Transmission Units (MTU) setting for Hosts during deployment is 1500. It is possible to update this setting after the environment is set up to a different MTU. For more information on changing the MTU setting, see How to change the Hosted Engine VM network MTU . | [
"grep -E 'svm|vmx' /proc/cpuinfo | grep nx"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/rhv_requirements |
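In addition to the virtualization-flag check shown above, you can quickly confirm whether the host firmware has IOMMU enabled, which section 2.2.5 requires for device assignment. The following is a minimal sketch rather than a complete validation; the exact kernel messages differ between Intel (DMAR) and AMD (AMD-Vi) systems:
dmesg | grep -i -e DMAR -e IOMMU
lspci -v
If the dmesg output is empty, check that VT-d or AMD-Vi is enabled in the BIOS and that intel_iommu=on or amd_iommu=on is present on the kernel command line, then confirm that the PCIe devices you plan to assign appear in the lspci -v output.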
19.6.2. Online Documentation | 19.6.2. Online Documentation How to configure postfix with TLS? - A Red Hat Knowledgebase article that describes configuring postfix to use TLS. The Red Hat Knowledgebase article How to Configure a System to Manage Multiple Virtual Mailboxes Using Postfix and Dovecot describes managing multiple virtual users under one real-user account using Postfix as Mail Transporting Agent (MTA) and Dovecot as IMAP server. http://www.sendmail.org/ - Offers a thorough technical breakdown of Sendmail features, documentation and configuration examples. http://www.sendmail.com/ - Contains news, interviews and articles concerning Sendmail, including an expanded view of the many options available. http://www.postfix.org/ - The Postfix project home page contains a wealth of information about Postfix. The mailing list is a particularly good place to look for information. http://www.fetchmail.info/fetchmail-FAQ.html - A thorough FAQ about Fetchmail. http://www.procmail.org/ - The home page for Procmail with links to assorted mailing lists dedicated to Procmail as well as various FAQ documents. http://www.spamassassin.org/ - The official site of the SpamAssassin project. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-email-online-documentation |
Chapter 2. Planning a Distributed Compute Node (DCN) deployment | Chapter 2. Planning a Distributed Compute Node (DCN) deployment When you plan your DCN architecture, check that the technologies that you need are available and supported. 2.1. Considerations for storage on DCN architecture The following features are not currently supported for DCN architectures: Copying a volume snapshot between edge sites. You can work around this by creating an image from the volume and using glance to copy the image. After the image is copied, you can create a volume from it. Ceph Rados Gateway (RGW) at the edge. CephFS at the edge. Instance high availability (HA) at the edge sites. RBD mirroring between sites. Instance migration, live or cold, either between edge sites, or from the central location to edge sites. You can still migrate instances within a site boundary. To move an image between sites, you must snapshot the image, and use glance image-import . For more information see Confirming image snapshots can be created and copied between sites . Additionally, you must consider the following: You must upload images to the central location before copying them to edge sites; a copy of each image must exist in the Image service (glance) at the central location. You must use the RBD storage driver for the Image, Compute and Block Storage services. For each site, assign a unique availability zone, and use the same value for the NovaComputeAvailabilityZone and CinderStorageAvailabilityZone parameters. You can migrate an offline volume from an edge site to the central location, or vice versa. You cannot migrate volumes directly between edge sites. 2.2. Considerations for networking on DCN architecture The following features are not currently supported for DCN architectures: Octavia DHCP on DPDK nodes Conntrack for TC Flower Hardware Offload Conntrack for TC Flower Hardware Offload is available on DCN as a Technology Preview, and therefore using these solutions together is not fully supported by Red Hat. This feature should only be used with DCN for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details. The following ML2/OVS technologies are fully supported: OVS-DPDK without DHCP on the DPDK nodes SR-IOV TC flower hardware offload, without conntrack Neutron availability zones (AZs) with networker nodes at the edge, with one AZ per site Routed provider networks The following ML2/OVN networking technologies are fully supported: OVS-DPDK without DHCP on the DPDK nodes SR-IOV (without DHCP) TC flower hardware offload, without conntrack Routed provider networks OVN GW (networker node) with Neutron AZs supported Additionally, you must consider the following: Network latency: Balance the latency as measured in round-trip time (RTT), with the expected number of concurrent API operations to maintain acceptable performance. Maximum TCP/IP throughput is inversely proportional to RTT. You can mitigate some issues with high-latency connections with high bandwidth by tuning kernel TCP parameters.Contact Red Hat support if a cross-site communication exceeds 100 ms. Network drop outs: If the edge site temporarily loses connection to the central site, then no OpenStack control plane API or CLI operations can be executed at the impacted edge site for the duration of the outage. For example, Compute nodes at the edge site are consequently unable to create a snapshot of an instance, issue an auth token, or delete an image. 
General OpenStack control plane API and CLI operations remain functional during this outage, and can continue to serve any other edge sites that have a working connection. Image type: You must use raw images when deploying a DCN architecture with Ceph storage. Image sizing: Overcloud node images - overcloud node images are downloaded from the central undercloud node. These images are potentially large files that will be transferred across all necessary networks from the central site to the edge site during provisioning. Instance images: If there is no block storage at the edge, then the Image service images traverse the WAN during first use. The images are copied or cached locally to the target edge nodes for all subsequent use. There is no size limit for glance images. Transfer times vary with available bandwidth and network latency. If there is block storage at the edge, then the image is copied over the WAN asynchronously for faster boot times at the edge. Provider networks: This is the recommended networking approach for DCN deployments. If you use provider networks at remote sites, then you must consider that the Networking service (neutron) does not place any limits or checks on where you can attach available networks. For example, if you use a provider network only in edge site A, you must ensure that you do not try to attach to the provider network in edge site B. This is because there are no validation checks on the provider network when binding it to a Compute node. Site-specific networks: A limitation in DCN networking arises if you use networks that are specific to a certain site: When you deploy centralized neutron controllers with Compute nodes, there are no triggers in neutron to identify a certain Compute node as a remote node. Consequently, the Compute nodes receive a list of other Compute nodes and automatically form tunnels between each other; the tunnels are formed from edge to edge through the central site. If you use VXLAN or Geneve, every Compute node at every site forms a tunnel with every other Compute node and Controller node, whether or not they are local or remote. This is not an issue if you are using the same neutron networks everywhere. When you use VLANs, neutron expects that all Compute nodes have the same bridge mappings, and that all VLANs are available at every site. Additional sites: If you need to expand from a central site to additional remote sites, you can use the openstack CLI on Red Hat OpenStack Platform director to add new network segments and subnets. If edge servers are not pre-provisioned, you must configure DHCP relay for introspection and provisioning on routed segments. Routing must be configured either on the cloud or within the networking infrastructure that connects each edge site to the hub. You should implement a networking design that allocates an L3 subnet for each Red Hat OpenStack Platform cluster network (external, internal API, and so on), unique to each site. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/distributed_compute_node_and_storage_deployment/planning_a_distributed_compute_node_dcn_deployment |
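As noted under the network latency considerations, TCP throughput over a high-latency, high-bandwidth WAN link is constrained by the socket buffer sizes. The following sysctl settings are a minimal sketch only; the values are illustrative examples rather than Red Hat-recommended settings, and you should size them from your own measured bandwidth-delay product before applying them to edge or central nodes:
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
To make the change persistent, place the same keys in a file under /etc/sysctl.d/ and load it with sysctl --system, or manage them through a TuneD profile.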
Chapter 10. Network configuration | Chapter 10. Network configuration The following sections describe the basics of network configuration with the Assisted Installer. 10.1. Cluster networking There are various network types and addresses used by OpenShift and listed in the following table. Important IPv6 is not currently supported in the following configurations: Single stack Primary within dual stack Type DNS Description clusterNetwork The IP address pools from which pod IP addresses are allocated. serviceNetwork The IP address pool for services. machineNetwork The IP address blocks for machines forming the cluster. apiVIP api.<clustername.clusterdomain> The VIP to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address. apiVIPs api.<clustername.clusterdomain> The VIPs to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If using dual stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting. ingressVIP *.apps.<clustername.clusterdomain> The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address. ingressVIPs *.apps.<clustername.clusterdomain> The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting. Note OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept many IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings will replace apiVIP and IngressVIP , but you must set both the new and old settings when modifying the configuration by using the API. Currently, the Assisted Service can deploy OpenShift Container Platform clusters by using one of the following configurations: IPv4 Dual-stack (IPv4 + IPv6 with IPv4 as primary) Note OVN is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but not for OpenShift Container Platform 4.15 and later releases. 10.1.1. Limitations 10.1.1.1. SDN The SDN controller is not supported with single-node OpenShift. The SDN controller does not support dual-stack networking. The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes. 10.1.1.2. OVN-Kubernetes For more information, see About the OVN-Kubernetes network plugin . 10.1.2. Cluster network The cluster network is a network from which every pod deployed in the cluster gets its IP address. Given that the workload might live across many nodes forming the cluster, it is important for the network provider to be able to easily find an individual node based on the pod's IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix . The host prefix specifies a length of the subnet assigned to each individual node in the cluster. 
An example of how a cluster might assign addresses for the multi-node cluster: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Creating a 3-node cluster by using this snippet might create the following network topology: Pods scheduled in node #1 get IPs from 10.128.0.0/23 Pods scheduled in node #2 get IPs from 10.128.2.0/23 Pods scheduled in node #3 get IPs from 10.128.4.0/23 Explaining OVN-Kubernetes internals is out of scope for this document, but the pattern previously described provides a way to route Pod-to-Pod traffic between different nodes without keeping a big list of mapping between Pods and their corresponding nodes. 10.1.3. Machine network The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs. For iSCSI boot volumes, the hosts are connected over two machine networks: one designated for the OpenShift Container Platform installation and the other for iSCSI traffic. During the installation process, ensure that you specify the OpenShift Container Platform network. Using the iSCSI network will result in an error for the host. 10.1.4. Single-node OpenShift compared to multi-node cluster Depending on whether you are deploying single-node OpenShift or a multi-node cluster, different values are mandatory. The following table explains this in more detail. Parameter Single-node OpenShift Multi-node cluster with DHCP mode Multi-node cluster without DHCP mode clusterNetwork Required Required Required serviceNetwork Required Required Required machineNetwork Auto-assign possible (*) Auto-assign possible (*) Auto-assign possible (*) apiVIP Forbidden Forbidden Required apiVIPs Forbidden Forbidden Required in 4.12 and later releases ingressVIP Forbidden Forbidden Required ingressVIPs Forbidden Forbidden Required in 4.12 and later releases (*) Auto assignment of the machine network CIDR happens if there is only a single host network. Otherwise you need to specify it explicitly. 10.1.5. Air-gapped environments The workflow for deploying a cluster without Internet access has some prerequisites, which are out of scope of this document. You can consult the Zero Touch Provisioning the hard way Git repository for some insights. 10.2. VIP DHCP allocation The VIP DHCP allocation is a feature allowing users to skip the requirement of manually providing virtual IPs for API and Ingress by leveraging the ability of a service to automatically assign those IP addresses from the DHCP server. If you enable the feature, instead of using api_vips and ingress_vips from the cluster configuration, the service will send a lease allocation request and based on the reply it will use VIPs accordingly. The service will allocate the IP addresses from the Machine Network. Please note this is not an OpenShift Container Platform feature and it has been implemented in the Assisted Service to make the configuration easier. Important VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. SDN is not supported from OpenShift Container Platform version 4.15 and later. Therefore, support for VIP DHCP allocation is also ending from OpenShift Container Platform 4.15 and later. 10.2.1. 
Example payload to enable autoallocation { "vip_dhcp_allocation": true, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ], "machine_networks": [ { "cidr": "192.168.127.0/24" } ] } 10.2.2. Example payload to disable autoallocation { "api_vips": [ { "ip": "192.168.127.100" } ], "ingress_vips": [ { "ip": "192.168.127.101" } ], "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ] } 10.3. Additional resources Bare metal IPI documentation provides additional explanation of the syntax for the VIP addresses. 10.4. Understanding differences between user- and cluster-managed networking User managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include: Customers with an external load balancer who do not want to use keepalived and VRRP for handling VIP addresses. Deployments with cluster nodes distributed across many distinct L2 network segments. 10.4.1. Validations There are various network validations happening in the Assisted Installer before it allows the installation to start. When you enable User Managed Networking, the following validations change: The L3 connectivity check (ICMP) is performed instead of the L2 check (ARP). The MTU validation verifies the maximum transmission unit (MTU) value for all interfaces and not only for the machine network. 10.5. Static network configuration You may use static network configurations when generating or updating the discovery ISO. 10.5.1. Prerequisites You are familiar with NMState . 10.5.2. NMState configuration The NMState file in YAML format specifies the desired network configuration for the host. It has the logical names of the interfaces that will be replaced with the actual name of the interface at discovery time. 10.5.2.1. Example of NMState configuration dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254 10.5.3. MAC interface mapping MAC interface map is an attribute that maps logical interfaces defined in the NMState configuration with the actual interfaces present on the host. The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should only contain an entry for parent interfaces. 10.5.3.1. Example of MAC interface mapping mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ] 10.5.4. Additional NMState configuration examples The following examples are only meant to show a partial configuration. They are not meant for use as-is, and you should always adjust to the environment where they will be used. If used incorrectly, they can leave your machines with no network connectivity. 10.5.4.1.
Tagged VLAN interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 reorder-headers: true 10.5.4.2. Network bond interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: miimon: "140" port: - eth0 - eth1 name: bond0 state: up type: bond 10.6. Applying a static network configuration with the API You can apply a static network configuration by using the Assisted Installer API. Important A static IP configuration is not supported in the following scenarios: OpenShift Container Platform installations on Oracle Cloud Infrastructure. OpenShift Container Platform installations on iSCSI boot volumes. Prerequisites You have created an infrastructure environment using the API or have created a cluster using the web console. You have your infrastructure environment ID exported in your shell as USDINFRA_ENV_ID . You have credentials to use when accessing the API and have exported a token as USDAPI_TOKEN in your shell. You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml . Procedure Create a temporary file /tmp/request-body.txt with the API request: jq -n --arg NMSTATE_YAML1 "USD(cat server-a.yaml)" --arg NMSTATE_YAML2 "USD(cat server-b.yaml)" \ '{ "static_network_config": [ { "network_yaml": USDNMSTATE_YAML1, "mac_interface_map": [{"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"}, {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"}] }, { "network_yaml": USDNMSTATE_YAML2, "mac_interface_map": [{"mac_address": "02:00:00:9f:85:eb", "logical_nic_name": "eth1"}, {"mac_address": "02:00:00:c8:be:9b", "logical_nic_name": "eth0"}] } ] }' >> /tmp/request-body.txt Refresh the API token: USD source refresh-token Send the request to the Assisted Service API endpoint: USD curl -H "Content-Type: application/json" \ -X PATCH -d @/tmp/request-body.txt \ -H "Authorization: Bearer USD{API_TOKEN}" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID 10.7. Additional resources Applying a static network configuration with the web console 10.8. Converting to dual-stack networking Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets. 10.8.1. Prerequisites You are familiar with OVN-K8s documentation 10.8.2. Example payload for single-node OpenShift { "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } 10.8.3. 
Example payload for an OpenShift Container Platform cluster consisting of many nodes { "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "api_vips": [ { "ip": "192.168.127.100" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334" } ], "ingress_vips": [ { "ip": "192.168.127.101" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335" } ], "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } 10.8.4. Limitations The api_vips IP address and ingress_vips IP address settings must be of the primary IP address family when using dual-stack networking, which must be IPv4 addresses. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values. 10.9. Additional resources Understanding OpenShift networking About the OpenShift SDN network plugin OVN-Kubernetes - CNI network provider Dual-stack Service configuration scenarios Installing a user-provisioned bare metal cluster with network customizations . Cluster Network Operator configuration object | [
"clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"{ \"vip_dhcp_allocation\": true, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ], \"machine_networks\": [ { \"cidr\": \"192.168.127.0/24\" } ] }",
"{ \"api_vips\": [ { \"ip\": \"192.168.127.100\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" } ], \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ] }",
"dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254",
"mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ]",
"interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 reorder-headers: true",
"interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: miimon: \"140\" port: - eth0 - eth1 name: bond0 state: up type: bond",
"jq -n --arg NMSTATE_YAML1 \"USD(cat server-a.yaml)\" --arg NMSTATE_YAML2 \"USD(cat server-b.yaml)\" '{ \"static_network_config\": [ { \"network_yaml\": USDNMSTATE_YAML1, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:2c:23:a5\", \"logical_nic_name\": \"eth0\"}, {\"mac_address\": \"02:00:00:68:73:dc\", \"logical_nic_name\": \"eth1\"}] }, { \"network_yaml\": USDNMSTATE_YAML2, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:9f:85:eb\", \"logical_nic_name\": \"eth1\"}, {\"mac_address\": \"02:00:00:c8:be:9b\", \"logical_nic_name\": \"eth0\"}] } ] }' >> /tmp/request-body.txt",
"source refresh-token",
"curl -H \"Content-Type: application/json\" -X PATCH -d @/tmp/request-body.txt -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID",
"{ \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] }",
"{ \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"api_vips\": [ { \"ip\": \"192.168.127.100\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7334\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7335\" } ], \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] }"
] | https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2025/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_network-configuration |
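A quick way to sanity-check the clusterNetwork values described in section 10.1.2 is to work out how many nodes and pods they allow. This is plain arithmetic on the CIDR sizes, not an Assisted Installer limit: with a cidr of 10.128.0.0/14 and a hostPrefix of 23, there are 2^(23-14) = 512 per-node /23 subnets, and each /23 subnet provides 2^(32-23) = 512 pod addresses, a few of which are reserved. The defaults therefore comfortably cover clusters of several hundred nodes. If you shrink the cluster network or raise the host prefix, redo this calculation to confirm that it still fits your planned node count.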
Chapter 4. Intel Gaudi AI Accelerator integration | Chapter 4. Intel Gaudi AI Accelerator integration To accelerate your high-performance deep learning models, you can integrate Intel Gaudi AI accelerators into OpenShift AI. This integration enables your data scientists to use Gaudi libraries and software associated with Intel Gaudi AI accelerators through custom-configured workbench instances. Intel Gaudi AI accelerators offer optimized performance for deep learning workloads, with the latest Gaudi 3 devices providing significant improvements in training speed and energy efficiency. These accelerators are suitable for enterprises running machine learning and AI applications on OpenShift AI. Before you can enable Intel Gaudi AI accelerators in OpenShift AI, you must complete the following steps: Install the latest version of the Intel Gaudi AI Accelerator Operator from OperatorHub. Create and configure a custom workbench image for Intel Gaudi AI accelerators. A prebuilt workbench image for Gaudi accelerators is not included in OpenShift AI. Manually define and configure an accelerator profile for each Intel Gaudi AI device in your environment. OpenShift AI supports Intel Gaudi devices up to Intel Gaudi 3. The Intel Gaudi 3 accelerators, in particular, offer the following benefits: Improved training throughput: Reduce the time required to train large models by using advanced tensor processing cores and increased memory bandwidth. Energy efficiency: Lower power consumption while maintaining high performance, reducing operational costs for large-scale deployments. Scalable architecture: Scale across multiple nodes for distributed training configurations. Your OpenShift platform must support EC2 DL1 instances to use Intel Gaudi AI accelerators in an Amazon EC2 DL1 instance. You can use Intel Gaudi AI accelerators in workbench instances or model serving after you enable the accelerators, create a custom workbench image, and configure the accelerator profile. To identify the Intel Gaudi AI accelerators present in your deployment, use the lspci utility. For more information, see lspci(8) - Linux man page . Important The presence of Intel Gaudi AI accelerators in your deployment, as indicated by the lspci utility, does not guarantee that the devices are ready to use. You must ensure that all installation and configuration steps are completed successfully. Additional resources lspci(8) - Linux man page Amazon EC2 DL1 Instances Intel Gaudi AI Operator OpenShift installation What version of the Kubernetes API is included with each OpenShift 4.x release? 4.1. Enabling Intel Gaudi AI accelerators Before you can use Intel Gaudi AI accelerators in OpenShift AI, you must install the required dependencies, deploy the Intel Gaudi AI Accelerator Operator, and configure the environment. Prerequisites You have logged in to OpenShift. You have the cluster-admin role in OpenShift. You have installed your Intel Gaudi accelerator and confirmed that it is detected in your environment. Your OpenShift environment supports EC2 DL1 instances if you are running on Amazon Web Services (AWS). You have installed the OpenShift command-line interface (CLI). Procedure Install the latest version of the Intel Gaudi AI Accelerator Operator, as described in Intel Gaudi AI Operator OpenShift installation . By default, OpenShift sets a per-pod PID limit of 4096. 
If your workload requires more processing power, such as when you use multiple Gaudi accelerators or vLLM with Ray, you must manually increase the per-pod PID limit to avoid "Resource temporarily unavailable" errors. These errors occur due to PID exhaustion. Red Hat recommends setting this limit to 32768, although values over 20000 are sufficient. Run the following command to label the node: Optional: To prevent workload distribution on the affected node, you can mark the node as unschedulable and then drain it in preparation for maintenance. For more information, see Understanding how to evacuate pods on nodes . Create a custom-kubelet-pidslimit.yaml KubeletConfig resource file: Populate the file with the following YAML code. Set the PodPidsLimit value to 32768: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet-pidslimit spec: kubeletConfig: PodPidsLimit: 32768 machineConfigPoolSelector: matchLabels: custom-kubelet: set-pod-pid-limit-kubelet Apply the configuration: This operation causes the node to reboot. For more information, see Understanding node rebooting . Optional: If you previously marked the node as unschedulable, you can allow scheduling again after the node reboots. Create a custom workbench image for Intel Gaudi AI accelerators, as described in Creating custom workbench images . After installing the Intel Gaudi AI Accelerator Operator, create an accelerator profile, as described in Working with accelerator profiles . Verification From the Administrator perspective, go to the Operators → Installed Operators page. Confirm that the following Operators appear: Intel Gaudi AI Accelerator Node Feature Discovery (NFD) Kernel Module Management (KMM) | [
"label node <node_name> custom-kubelet=set-pod-pid-limit-kubelet",
"create -f custom-kubelet-pidslimit.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet-pidslimit spec: kubeletConfig: PodPidsLimit: 32768 machineConfigPoolSelector: matchLabels: custom-kubelet: set-pod-pid-limit-kubelet",
"apply -f custom-kubelet-pidslimit.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_accelerators/intel-gaudi-ai-accelerator-integration_accelerators |
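A short operational sketch can round out the PID-limit procedure in the Intel Gaudi chapter above. It is a minimal example, not part of the original procedure: the node name <node_name> is a placeholder, and it assumes the kubelet configuration rendered by the Machine Config Operator is available at /etc/kubernetes/kubelet.conf on the host; adjust both for your environment.

# Optionally cordon and drain the node before applying the KubeletConfig
oc adm cordon <node_name>
oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data

# After the node reboots, confirm that the new PID limit was rendered into the kubelet configuration
oc debug node/<node_name> -- chroot /host grep -i podPidsLimit /etc/kubernetes/kubelet.conf

# Allow scheduling again once the node reports Ready
oc adm uncordon <node_name>

The cordon, drain, and uncordon commands mirror the optional maintenance guidance in the procedure; the grep is only a convenience check and does not replace the Operator-level verification described above.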
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.12/proc-providing-feedback-on-redhat-documentation |
Chapter 23. Vaults in IdM | Chapter 23. Vaults in IdM This chapter describes vaults in Identity Management (IdM). It introduces the following topics: The concept of the vault . The different roles associated with a vault . The different types of vaults available in IdM based on the level of security and access control . The different types of vaults available in IdM based on ownership . The concept of vault containers . The basic commands for managing vaults in IdM . Installing the key recovery authority (KRA), which is a prerequisite for using vaults in IdM . 23.1. Vaults and their benefits A vault is a useful feature for those Identity Management (IdM) users who want to keep all their sensitive data stored securely but conveniently in one place. There are various types of vaults and you should choose which vault to use based on your requirements. A vault is a secure location in (IdM) for storing, retrieving, sharing, and recovering a secret. A secret is security-sensitive data, usually authentication credentials, that only a limited group of people or entities can access. For example, secrets include: Passwords PINs Private SSH keys A vault is comparable to a password manager. Just like a password manager, a vault typically requires a user to generate and remember one primary password to unlock and access any information stored in the vault. However, a user can also decide to have a standard vault. A standard vault does not require the user to enter any password to access the secrets stored in the vault. Note The purpose of vaults in IdM is to store authentication credentials that allow you to authenticate to external, non-IdM-related services. Other important characteristics of the IdM vaults are: Vaults are only accessible to the vault owner and those IdM users that the vault owner selects to be the vault members. In addition, the IdM administrator has access to the vault. If a user does not have sufficient privileges to create a vault, an IdM administrator can create the vault and set the user as its owner. Users and services can access the secrets stored in a vault from any machine enrolled in the IdM domain. One vault can only contain one secret, for example, one file. However, the file itself can contain multiple secrets such as passwords, keytabs or certificates. Note Vault is only available from the IdM command line (CLI), not from the IdM Web UI. 23.2. Vault owners, members, and administrators Identity Management (IdM) distinguishes the following vault user types: Vault owner A vault owner is a user or service with basic management privileges on the vault. For example, a vault owner can modify the properties of the vault or add new vault members. Each vault must have at least one owner. A vault can also have multiple owners. Vault member A vault member is a user or service that can access a vault created by another user or service. Vault administrator Vault administrators have unrestricted access to all vaults and are allowed to perform all vault operations. Note Symmetric and asymmetric vaults are protected with a password or key and apply special access control rules (see Vault types ). The administrator must meet these rules to: Access secrets in symmetric and asymmetric vaults. Change or reset the vault password or key. A vault administrator is any user with the Vault Administrators privilege. In the context of the role-based access control (RBAC) in IdM, a privilege is a group of permissions that you can apply to a role. 
Vault User The vault user represents the user in whose container the vault is located. The Vault user information is displayed in the output of specific commands, such as ipa vault-show : For details on vault containers and user vaults, see Vault containers . Additional resources See Standard, symmetric and asymmetric vaults for details on vault types. 23.3. Standard, symmetric, and asymmetric vaults Based on the level of security and access control, IdM classifies vaults into the following types: Standard vaults Vault owners and vault members can archive and retrieve the secrets without having to use a password or key. Symmetric vaults Secrets in the vault are protected with a symmetric key. Vault owners and members can archive and retrieve the secrets, but they must provide the vault password. Asymmetric vaults Secrets in the vault are protected with an asymmetric key. Users archive the secret using a public key and retrieve it using a private key. Vault members can only archive secrets, while vault owners can do both, archive and retrieve secrets. 23.4. User, service, and shared vaults Based on ownership, IdM classifies vaults into several types. The table below contains information about each type, its owner and use. Table 23.1. IdM vaults based on ownership Type Description Owner Note User vault A private vault for a user A single user Any user can own one or more user vaults if allowed by IdM administrator Service vault A private vault for a service A single service Any service can own one or more user vaults if allowed by IdM administrator Shared vault A vault shared by multiple users and services The vault administrator who created the vault Users and services can own one or more user vaults if allowed by IdM administrator. The vault administrators other than the one that created the vault also have full access to the vault. 23.5. Vault containers A vault container is a collection of vaults. The table below lists the default vault containers that Identity Management (IdM) provides. Table 23.2. Default vault containers in IdM Type Description Purpose User container A private container for a user Stores user vaults for a particular user Service container A private container for a service Stores service vaults for a particular service Shared container A container for multiple users and services Stores vaults that can be shared by multiple users or services IdM creates user and service containers for each user or service automatically when the first private vault for the user or service is created. After the user or service is deleted, IdM removes the container and its contents. 23.6. Basic IdM vault commands You can use the basic commands outlined below to manage Identity Management (IdM) vaults. The table below contains a list of ipa vault-* commands with the explanation of their purpose. Note Before running any ipa vault-* command, install the Key Recovery Authority (KRA) certificate system component on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM . Table 23.3. Basic IdM vault commands with explanations Command Purpose ipa help vault Displays conceptual information about IdM vaults and sample vault commands. ipa vault-add --help , ipa vault-find --help Adding the --help option to a specific ipa vault-* command displays the options and detailed help available for that command. ipa vault-show user_vault --user idm_user When accessing a vault as a vault member, you must specify the vault owner. 
If you do not specify the vault owner, IdM informs you that it did not find the vault: ipa vault-show shared_vault --shared When accessing a shared vault, you must specify that the vault you want to access is a shared vault. Otherwise, IdM informs you it did not find the vault: 23.7. Installing the Key Recovery Authority in IdM Follow this procedure to enable vaults in Identity Management (IdM) by installing the Key Recovery Authority (KRA) Certificate System (CS) component on a specific IdM server. Prerequisites You are logged in as root on the IdM server. An IdM certificate authority is installed on the IdM server. You have the Directory Manager credentials. Procedure Install the KRA: Important You can install the first KRA of an IdM cluster on a hidden replica. However, installing additional KRAs requires temporarily activating the hidden replica before you install the KRA clone on a non-hidden replica. Then you can hide the originally hidden replica again. Note To make the vault service highly available and resilient, install the KRA on two IdM servers or more. Maintaining multiple KRA servers prevents data loss. Additional resources Demoting or promoting hidden replicas The hidden replica mode | [
"ipa vault-show my_vault Vault name: my_vault Type: standard Owner users: user Vault user: user",
"[admin@server ~]USD ipa vault-show user_vault ipa: ERROR: user_vault: vault not found",
"[admin@server ~]USD ipa vault-show shared_vault ipa: ERROR: shared_vault: vault not found",
"ipa-kra-install"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_ansible_to_install_and_manage_identity_management/vaults-in-idm_using-ansible-to-install-and-manage-idm |
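To illustrate the ipa vault-* commands summarized in the chapter above, the following is a minimal usage sketch. It assumes a KRA is already installed and the user holds a valid Kerberos ticket; the vault names and file paths (my_standard_vault, my_symmetric_vault, /tmp/...) are placeholders rather than values from the original documentation.

kinit idm_user

# Create a standard vault: no password or key is needed to archive or retrieve secrets
ipa vault-add my_standard_vault --type standard

# Archive a secret file into the vault, then retrieve it into a new file
ipa vault-archive my_standard_vault --in /tmp/secret.txt
ipa vault-retrieve my_standard_vault --out /tmp/secret.out

# A symmetric vault protects the secret with a vault password supplied at archive and retrieve time
ipa vault-add my_symmetric_vault --type symmetric --password-file /tmp/vault_password.txt

This only sketches the happy path; the access-control rules for symmetric and asymmetric vaults described above still apply.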
Chapter 10. Optimizing routing The OpenShift Container Platform HAProxy router can be scaled or configured to optimize performance. 10.1. Baseline Ingress Controller (router) performance The OpenShift Container Platform Ingress Controller, or router, is the ingress point for ingress traffic for applications and services that are configured using routes and ingresses. When evaluating the performance of a single HAProxy router in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular:
HTTP keep-alive/close mode
Route type
TLS session resumption client support
Number of concurrent connections per target route
Number of target routes
Back end server page size
Underlying infrastructure (network/SDN solution, CPU, and so on)
While performance in your specific environment will vary, Red Hat lab tests were performed on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second.
In HTTP keep-alive mode scenarios:
Encryption    LoadBalancerService    HostNetwork
none          21515                  29622
edge          16743                  22913
passthrough   36786                  53295
re-encrypt    21583                  25198
In HTTP close (no keep-alive) scenarios:
Encryption    LoadBalancerService    HostNetwork
none          5719                   8273
edge          2729                   4069
passthrough   4121                   5344
re-encrypt    2320                   2941
The default Ingress Controller configuration was used with the spec.tuningOptions.threadCount field set to 4. Two different endpoint publishing strategies were tested: Load Balancer Service and Host Network. TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB. When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router:
Number of applications    Application type
5-10                      static file/web server or caching proxy
100-1000                  applications generating dynamic content
In general, HAProxy can support routes for up to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content. Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier. For more information on Ingress sharding, see Configuring Ingress Controller sharding by using route labels and Configuring Ingress Controller sharding by using namespace labels. You can modify the Ingress Controller deployment using the information provided in Setting Ingress Controller thread count for threads and Ingress Controller configuration parameters for timeouts, and other tuning configurations in the Ingress Controller specification. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/scalability_and_performance/routing-optimization
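The thread-count tuning mentioned above can be adjusted on a running cluster; the following sketch shows one way to do it, assuming the default IngressController in the openshift-ingress-operator namespace and an example value of 8 threads (the value is illustrative, not a recommendation from the original text).

# Inspect the current tuning options on the default IngressController
oc -n openshift-ingress-operator get ingresscontroller/default -o yaml

# Raise the HAProxy thread count; higher values consume more CPU per router pod
oc -n openshift-ingress-operator patch ingresscontroller/default \
  --type=merge -p '{"spec":{"tuningOptions":{"threadCount":8}}}'

Validate any change against your own workload, because the baseline numbers above were measured with threadCount set to 4.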
Chapter 2. Node [v1] | Chapter 2. Node [v1] Description Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd). Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object NodeSpec describes the attributes that a node is created with. status object NodeStatus is information about the current status of a node. 2.1.1. .spec Description NodeSpec describes the attributes that a node is created with. Type object Property Type Description configSource object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 externalID string Deprecated. Not all kubelets will set this field. Remove field after 1.13. see: https://issues.k8s.io/61966 podCIDR string PodCIDR represents the pod IP range assigned to the node. podCIDRs array (string) podCIDRs represents the IP ranges assigned to the node for usage by Pods on that node. If this field is specified, the 0th entry must match the podCIDR field. It may contain at most 1 value for each of IPv4 and IPv6. providerID string ID of the node assigned by the cloud provider in the format: <ProviderName>://<ProviderSpecificNodeID> taints array If specified, the node's taints. taints[] object The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. unschedulable boolean Unschedulable controls node schedulability of new pods. By default, node is schedulable. More info: https://kubernetes.io/docs/concepts/nodes/node/#manual-node-administration 2.1.2. .spec.configSource Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.3. .spec.configSource.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. 
This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.4. .spec.taints Description If specified, the node's taints. Type array 2.1.5. .spec.taints[] Description The node this Taint is attached to has the "effect" on any pod that does not tolerate the Taint. Type object Required key effect Property Type Description effect string Required. The effect of the taint on pods that do not tolerate the taint. Valid effects are NoSchedule, PreferNoSchedule and NoExecute. Possible enum values: - "NoExecute" Evict any already-running pods that do not tolerate the taint. Currently enforced by NodeController. - "NoSchedule" Do not allow new pods to schedule onto the node unless they tolerate the taint, but allow all pods submitted to Kubelet without going through the scheduler to start, and allow all already-running pods to continue running. Enforced by the scheduler. - "PreferNoSchedule" Like TaintEffectNoSchedule, but the scheduler tries not to schedule new pods onto the node, rather than prohibiting new pods from scheduling onto the node entirely. Enforced by the scheduler. key string Required. The taint key to be applied to a node. timeAdded Time TimeAdded represents the time at which the taint was added. It is only written for NoExecute taints. value string The taint value corresponding to the taint key. 2.1.6. .status Description NodeStatus is information about the current status of a node. Type object Property Type Description addresses array List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP). addresses[] object NodeAddress contains information for the node's address. allocatable object (Quantity) Allocatable represents the resources of a node that are available for scheduling. Defaults to Capacity. capacity object (Quantity) Capacity represents the total resources of a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#capacity conditions array Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition conditions[] object NodeCondition contains condition information for a node. config object NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. daemonEndpoints object NodeDaemonEndpoints lists ports opened by daemons running on the Node. images array List of container images on this node images[] object Describe a container image nodeInfo object NodeSystemInfo is a set of ids/uuids to uniquely identify the node. phase string NodePhase is the recently observed lifecycle phase of the node. 
More info: https://kubernetes.io/docs/concepts/nodes/node/#phase The field is never populated, and now is deprecated. Possible enum values: - "Pending" means the node has been created/added by the system, but not configured. - "Running" means the node has been configured and has Kubernetes components running. - "Terminated" means the node has been removed from the cluster. runtimeHandlers array The available runtime handlers. runtimeHandlers[] object NodeRuntimeHandler is a set of runtime handler information. volumesAttached array List of volumes that are attached to the node. volumesAttached[] object AttachedVolume describes a volume attached to a node volumesInUse array (string) List of attachable volumes in use (mounted) by the node. 2.1.7. .status.addresses Description List of addresses reachable to the node. Queried from cloud provider, if available. More info: https://kubernetes.io/docs/concepts/nodes/node/#addresses Note: This field is declared as mergeable, but the merge key is not sufficiently unique, which can cause data corruption when it is merged. Callers should instead use a full-replacement patch. See https://pr.k8s.io/79391 for an example. Consumers should assume that addresses can change during the lifetime of a Node. However, there are some exceptions where this may not be possible, such as Pods that inherit a Node's address in its own status or consumers of the downward API (status.hostIP). Type array 2.1.8. .status.addresses[] Description NodeAddress contains information for the node's address. Type object Required type address Property Type Description address string The node address. type string Node address type, one of Hostname, ExternalIP or InternalIP. 2.1.9. .status.conditions Description Conditions is an array of current observed node conditions. More info: https://kubernetes.io/docs/concepts/nodes/node/#condition Type array 2.1.10. .status.conditions[] Description NodeCondition contains condition information for a node. Type object Required type status Property Type Description lastHeartbeatTime Time Last time we got an update on a given condition. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of node condition. 2.1.11. .status.config Description NodeConfigStatus describes the status of the config assigned by Node.Spec.ConfigSource. Type object Property Type Description active object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 assigned object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 error string Error describes any problems reconciling the Spec.ConfigSource to the Active config. Errors may occur, for example, attempting to checkpoint Spec.ConfigSource to the local Assigned record, attempting to checkpoint the payload associated with Spec.ConfigSource, attempting to load or validate the Assigned config, etc. Errors may occur at different points while syncing config. Earlier errors (e.g. download or checkpointing errors) will not result in a rollback to LastKnownGood, and may resolve across Kubelet retries. Later errors (e.g. 
loading or validating a checkpointed config) will result in a rollback to LastKnownGood. In the latter case, it is usually possible to resolve the error by fixing the config assigned in Spec.ConfigSource. You can find additional information for debugging by searching the error message in the Kubelet log. Error is a human-readable description of the error state; machines can check whether or not Error is empty, but should not rely on the stability of the Error text across Kubelet versions. lastKnownGood object NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 2.1.12. .status.config.active Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.13. .status.config.active.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.14. .status.config.assigned Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.15. .status.config.assigned.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. 
This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.16. .status.config.lastKnownGood Description NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil. This API is deprecated since 1.22 Type object Property Type Description configMap object ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration 2.1.17. .status.config.lastKnownGood.configMap Description ConfigMapNodeConfigSource contains the information to reference a ConfigMap as a config source for the Node. This API is deprecated since 1.22: https://git.k8s.io/enhancements/keps/sig-node/281-dynamic-kubelet-configuration Type object Required namespace name kubeletConfigKey Property Type Description kubeletConfigKey string KubeletConfigKey declares which key of the referenced ConfigMap corresponds to the KubeletConfiguration structure This field is required in all cases. name string Name is the metadata.name of the referenced ConfigMap. This field is required in all cases. namespace string Namespace is the metadata.namespace of the referenced ConfigMap. This field is required in all cases. resourceVersion string ResourceVersion is the metadata.ResourceVersion of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. uid string UID is the metadata.UID of the referenced ConfigMap. This field is forbidden in Node.Spec, and required in Node.Status. 2.1.18. .status.daemonEndpoints Description NodeDaemonEndpoints lists ports opened by daemons running on the Node. Type object Property Type Description kubeletEndpoint object DaemonEndpoint contains information about a single Daemon endpoint. 2.1.19. .status.daemonEndpoints.kubeletEndpoint Description DaemonEndpoint contains information about a single Daemon endpoint. Type object Required Port Property Type Description Port integer Port number of the given endpoint. 2.1.20. .status.images Description List of container images on this node Type array 2.1.21. .status.images[] Description Describe a container image Type object Property Type Description names array (string) Names by which this image is known. e.g. ["kubernetes.example/hyperkube:v1.0.7", "cloud-vendor.registry.example/cloud-vendor/hyperkube:v1.0.7"] sizeBytes integer The size of the image in bytes. 2.1.22. .status.nodeInfo Description NodeSystemInfo is a set of ids/uuids to uniquely identify the node. Type object Required machineID systemUUID bootID kernelVersion osImage containerRuntimeVersion kubeletVersion kubeProxyVersion operatingSystem architecture Property Type Description architecture string The Architecture reported by the node bootID string Boot ID reported by the node. containerRuntimeVersion string ContainerRuntime Version reported by the node through runtime remote API (e.g. containerd://1.4.2). kernelVersion string Kernel Version reported by the node from 'uname -r' (e.g. 3.16.0-0.bpo.4-amd64). kubeProxyVersion string KubeProxy Version reported by the node. kubeletVersion string Kubelet Version reported by the node. machineID string MachineID reported by the node. For unique machine identification in the cluster this field is preferred. 
Learn more from man(5) machine-id: http://man7.org/linux/man-pages/man5/machine-id.5.html operatingSystem string The Operating System reported by the node osImage string OS Image reported by the node from /etc/os-release (e.g. Debian GNU/Linux 7 (wheezy)). systemUUID string SystemUUID reported by the node. For unique machine identification MachineID is preferred. This field is specific to Red Hat hosts https://access.redhat.com/documentation/en-us/red_hat_subscription_management/1/html/rhsm/uuid 2.1.23. .status.runtimeHandlers Description The available runtime handlers. Type array 2.1.24. .status.runtimeHandlers[] Description NodeRuntimeHandler is a set of runtime handler information. Type object Property Type Description features object NodeRuntimeHandlerFeatures is a set of runtime features. name string Runtime handler name. Empty for the default runtime handler. 2.1.25. .status.runtimeHandlers[].features Description NodeRuntimeHandlerFeatures is a set of runtime features. Type object Property Type Description recursiveReadOnlyMounts boolean RecursiveReadOnlyMounts is set to true if the runtime handler supports RecursiveReadOnlyMounts. 2.1.26. .status.volumesAttached Description List of volumes that are attached to the node. Type array 2.1.27. .status.volumesAttached[] Description AttachedVolume describes a volume attached to a node Type object Required name devicePath Property Type Description devicePath string DevicePath represents the device path where the volume should be available name string Name of the attached volume 2.2. API endpoints The following API endpoints are available: /api/v1/nodes DELETE : delete collection of Node GET : list or watch objects of kind Node POST : create a Node /api/v1/watch/nodes GET : watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. /api/v1/nodes/{name} DELETE : delete a Node GET : read the specified Node PATCH : partially update the specified Node PUT : replace the specified Node /api/v1/watch/nodes/{name} GET : watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /api/v1/nodes/{name}/status GET : read status of the specified Node PATCH : partially update status of the specified Node PUT : replace status of the specified Node 2.2.1. /api/v1/nodes HTTP method DELETE Description delete collection of Node Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Node Table 2.3. HTTP responses HTTP code Reponse body 200 - OK NodeList schema 401 - Unauthorized Empty HTTP method POST Description create a Node Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body Node schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 202 - Accepted Node schema 401 - Unauthorized Empty 2.2.2. /api/v1/watch/nodes HTTP method GET Description watch individual changes to a list of Node. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /api/v1/nodes/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the Node HTTP method DELETE Description delete a Node Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Node Table 2.11. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Node Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Node Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body Node schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty 2.2.4. /api/v1/watch/nodes/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the Node HTTP method GET Description watch changes to an object of kind Node. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /api/v1/nodes/{name}/status Table 2.19. Global path parameters Parameter Type Description name string name of the Node HTTP method GET Description read status of the specified Node Table 2.20. HTTP responses HTTP code Reponse body 200 - OK Node schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Node Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Node Table 2.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.24. Body parameters Parameter Type Description body Node schema Table 2.25. HTTP responses HTTP code Reponse body 200 - OK Node schema 201 - Created Node schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/node_apis/node-v1 |
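As a usage sketch for the Node endpoints documented above, the following commands drive PATCH /api/v1/nodes/{name} through the oc client to toggle spec.unschedulable and to add a taint; the node name, taint key, and taint value are placeholders.

# Cordon the node by setting spec.unschedulable
oc patch node <node_name> --type=merge -p '{"spec":{"unschedulable":true}}'

# Add (or overwrite) the taints list with a single NoSchedule taint
oc patch node <node_name> --type=json \
  -p '[{"op":"add","path":"/spec/taints","value":[{"key":"example-key","value":"example-value","effect":"NoSchedule"}]}]'

# Read back one of the status fields described above
oc get node <node_name> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

Note that a JSON-patch "add" on /spec/taints replaces any existing taints list, so merge manually if the node already carries taints.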
Installing on Nutanix | Installing on Nutanix OpenShift Container Platform 4.12 Installing OpenShift Container Platform on Nutanix Red Hat OpenShift Documentation Team | [
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"ccoctl --help",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install create install-config --dir <installation_directory> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: nutanix: 7 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 metadata: creationTimestamp: null name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 10 ingressVIP: 10.40.142.8 11 prismCentral: endpoint: address: your.prismcentral.domainname 12 port: 9440 13 password: <password> 14 username: <username> 15 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 16 credentialsMode: Manual publish: External pullSecret: '{\"auths\": ...}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=nutanix --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"0000_30_machine-api-operator_00_credentials-request.yaml 1",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"total 64 -rw-r----- 1 <user> <user> 2335 Jul 8 12:22 cluster-config.yaml -rw-r----- 1 <user> <user> 161 Jul 8 12:22 cluster-dns-02-config.yml -rw-r----- 1 <user> <user> 864 Jul 8 12:22 cluster-infrastructure-02-config.yml -rw-r----- 1 <user> <user> 191 Jul 8 12:22 cluster-ingress-02-config.yml -rw-r----- 1 <user> <user> 9607 Jul 8 12:22 cluster-network-01-crd.yml -rw-r----- 1 <user> <user> 272 Jul 8 12:22 cluster-network-02-config.yml -rw-r----- 1 <user> <user> 142 Jul 8 12:22 cluster-proxy-01-config.yaml -rw-r----- 1 <user> <user> 171 Jul 8 12:22 cluster-scheduler-02-config.yml -rw-r----- 1 <user> <user> 200 Jul 8 12:22 cvo-overrides.yaml -rw-r----- 1 <user> <user> 118 Jul 8 12:22 kube-cloud-config.yaml -rw-r----- 1 <user> <user> 1304 Jul 8 12:22 kube-system-configmap-root-ca.yaml -rw-r----- 1 <user> <user> 4090 Jul 8 12:22 machine-config-server-tls-secret.yaml -rw-r----- 1 <user> <user> 3961 Jul 8 12:22 openshift-config-secret-pull-secret.yaml -rw------- 1 <user> <user> 283 Jul 8 12:24 openshift-machine-api-nutanix-credentials-credentials.yaml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"cp certs/lin/* /etc/pki/ca-trust/source/anchors",
"update-ca-trust extract",
"./openshift-install coreos print-stream-json",
"\"nutanix\": { \"release\": \"411.86.202210041459-0\", \"formats\": { \"qcow2\": { \"disk\": { \"location\": \"https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.11/411.86.202210041459-0/x86_64/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2\", \"sha256\": \"42e227cac6f11ac37ee8a2f9528bb3665146566890577fd55f9b950949e5a54b\"",
"platform: nutanix: clusterOSImage: http://example.com/images/rhcos-411.86.202210041459-0-nutanix.x86_64.qcow2",
"./openshift-install create install-config --dir <installation_directory> 1",
"platform: nutanix: clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2",
"pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'",
"additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----",
"imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 3 platform: nutanix: 4 cpus: 2 coresPerSocket: 2 memoryMiB: 8196 osDisk: diskSizeGiB: 120 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 platform: nutanix: 7 cpus: 4 coresPerSocket: 2 memoryMiB: 16384 osDisk: diskSizeGiB: 120 metadata: creationTimestamp: null name: test-cluster 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: nutanix: apiVIP: 10.40.142.7 10 ingressVIP: 10.40.142.8 11 prismCentral: endpoint: address: your.prismcentral.domainname 12 port: 9440 13 password: <password> 14 username: <username> 15 prismElements: - endpoint: address: your.prismelement.domainname port: 9440 uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712 subnetUUIDs: - c7938dc6-7659-453e-a688-e26020c68e43 clusterOSImage: http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 16 credentialsMode: Manual publish: External pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 17 fips: false 18 sshKey: ssh-ed25519 AAAA... 19 additionalTrustBundle: | 20 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 21 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"credentials: - type: basic_auth 1 data: prismCentral: 2 username: <username_for_prism_central> password: <password_for_prism_central> prismElements: 3 - name: <name_of_prism_element> username: <username_for_prism_element> password: <password_for_prism_element>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --cloud=nutanix --to=<path_to_directory_with_list_of_credentials_requests>/credrequests 1",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: annotations: include.release.openshift.io/self-managed-high-availability: \"true\" labels: controller-tools.k8s.io: \"1.0\" name: openshift-machine-api-nutanix namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: NutanixProviderSpec secretRef: name: nutanix-credentials namespace: openshift-machine-api",
"0000_30_machine-api-operator_00_credentials-request.yaml 1",
"ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1",
"openshift-install create manifests --dir <installation_directory> 1",
"cp <ccoctl_output_dir>/manifests/*credentials.yaml ./<installation_directory>/manifests",
"ls ./<installation_directory>/manifests",
"total 64 -rw-r----- 1 <user> <user> 2335 Jul 8 12:22 cluster-config.yaml -rw-r----- 1 <user> <user> 161 Jul 8 12:22 cluster-dns-02-config.yml -rw-r----- 1 <user> <user> 864 Jul 8 12:22 cluster-infrastructure-02-config.yml -rw-r----- 1 <user> <user> 191 Jul 8 12:22 cluster-ingress-02-config.yml -rw-r----- 1 <user> <user> 9607 Jul 8 12:22 cluster-network-01-crd.yml -rw-r----- 1 <user> <user> 272 Jul 8 12:22 cluster-network-02-config.yml -rw-r----- 1 <user> <user> 142 Jul 8 12:22 cluster-proxy-01-config.yaml -rw-r----- 1 <user> <user> 171 Jul 8 12:22 cluster-scheduler-02-config.yml -rw-r----- 1 <user> <user> 200 Jul 8 12:22 cvo-overrides.yaml -rw-r----- 1 <user> <user> 118 Jul 8 12:22 kube-cloud-config.yaml -rw-r----- 1 <user> <user> 1304 Jul 8 12:22 kube-system-configmap-root-ca.yaml -rw-r----- 1 <user> <user> 4090 Jul 8 12:22 machine-config-server-tls-secret.yaml -rw-r----- 1 <user> <user> 3961 Jul 8 12:22 openshift-config-secret-pull-secret.yaml -rw------- 1 <user> <user> 283 Jul 8 12:24 openshift-machine-api-nutanix-credentials-credentials.yaml",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc apply -f ./oc-mirror-workspace/results-<id>/",
"oc get imagecontentsourcepolicy",
"oc get catalogsource --all-namespaces",
"./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/installing_on_nutanix/index |
17.2. Establishing an Ethernet Connection | 17.2. Establishing an Ethernet Connection To establish an Ethernet connection, you need a network interface card (NIC), a network cable (usually a CAT5 cable), and a network to connect to. Different networks are configured to use different network speeds; make sure your NIC is compatible with the network to which you want to connect. To add an Ethernet connection, follow these steps: Click the Devices tab. Click the New button on the toolbar. Select Ethernet connection from the Device Type list, and click Forward . If you have already added the network interface card to the hardware list, select it from the Ethernet card list. Otherwise, select Other Ethernet Card to add the hardware device. Note The installation program detects supported Ethernet devices and prompts you to configure them. If you configured any Ethernet devices during the installation, they are displayed in the hardware list on the Hardware tab. If you selected Other Ethernet Card , the Select Ethernet Adapter window appears. Select the manufacturer and model of the Ethernet card. Select the device name. If this is the system's first Ethernet card, select eth0 as the device name; if this is the second Ethernet card, select eth1 (and so on). The Network Administration Tool also allows you to configure the resources for the NIC. Click Forward to continue. In the Configure Network Settings window shown in Figure 17.2, "Ethernet Settings" , choose between DHCP and a static IP address. If the device receives a different IP address each time the network is started, do not specify a hostname. Click Forward to continue. Click Apply on the Create Ethernet Device page. Figure 17.2. Ethernet Settings After configuring the Ethernet device, it appears in the device list as shown in Figure 17.3, "Ethernet Device" . Figure 17.3. Ethernet Device After adding the Ethernet device, you can edit its configuration by selecting the device from the device list and clicking Edit . For example, when the device is added, it is configured to start at boot time by default. To change this setting, select to edit the device, modify the Activate device when computer starts value, and save the changes. If you associate more than one device with an Ethernet card, the subsequent devices are device aliases . A device alias allows you to setup multiple virtual devices for one physical device, thus giving the one physical device more than one IP address. For example, you can configure an eth1 device and an eth1:1 device. For details, refer to Section 17.11, "Device Aliases" . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/s1-network-config-ethernet |
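For illustration, the settings described above end up as ifcfg files under /etc/sysconfig/network-scripts/. The following is a minimal sketch of a statically addressed eth0 device and an eth0:1 device alias; the addresses and netmask are placeholder values, and the exact set of keys the Network Administration Tool writes can vary:

# /etc/sysconfig/network-scripts/ifcfg-eth0 (hypothetical values)
DEVICE=eth0
BOOTPROTO=none
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0:1 (device alias providing a second IP address)
DEVICE=eth0:1
IPADDR=192.168.1.11
NETMASK=255.255.255.0
ONPARENT=yes

After editing the files by hand, the device can be brought up with ifup eth0 so that the alias and address changes take effect.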
Installing on any platform | Installing on any platform OpenShift Container Platform 4.18 Installing OpenShift Container Platform on any platform Red Hat OpenShift Documentation Team | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.18-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.18-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.18-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.18-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.18-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.18-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.18/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.18 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/installing_on_any_platform/index |
Schedule and quota APIs | Schedule and quota APIs OpenShift Container Platform 4.13 Reference guide for schedule and quota APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/schedule_and_quota_apis/index |
Chapter 7. Adding Storage for Red Hat Virtualization | Chapter 7. Adding Storage for Red Hat Virtualization Add storage as data domains in the new environment. A Red Hat Virtualization environment must have at least one data domain, but adding more is recommended. Add the storage you prepared earlier: NFS iSCSI Fibre Channel (FCP) Red Hat Gluster Storage Important If you are using iSCSI storage, new data domains must not use the same iSCSI target as the self-hosted engine storage domain. Warning Creating additional data domains in the same data center as the self-hosted engine storage domain is highly recommended. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you will not be able to add new storage domains or remove the corrupted storage domain; you will have to redeploy the self-hosted engine. 7.1. Adding NFS Storage This procedure shows you how to attach existing NFS storage to your Red Hat Virtualization environment as a data domain. If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list. Procedure In the Administration Portal, click Storage Domains . Click New Domain . Enter a Name for the storage domain. Accept the default values for the Data Center , Domain Function , Storage Type , Format , and Host lists. Enter the Export Path to be used for the storage domain. The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data . Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Click OK . The new NFS data domain has a status of Locked until the disk is prepared. The data domain is then automatically attached to the data center. 7.2. Adding iSCSI Storage This procedure shows you how to attach existing iSCSI storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the new storage domain. Select a Data Center from the drop-down list. Select Data as the Domain Function and iSCSI as the Storage Type . Select an active host as the Host . Important Communication to the storage domain is from the selected host and not directly from the Manager. Therefore, all hosts must have access to the storage device before the storage domain can be configured. The Manager can map iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the step. Click Discover Targets to enable target discovery options. 
When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment. Note LUNs used externally for the environment are also displayed. You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs. Important If you use the REST API method discoveriscsi to discover the iscsi targets, you can use an FQDN or an IP address, but you must use the iscsi details from the discovered targets results to log in using the REST API method iscsilogin . See discoveriscsi in the REST API Guide for more information. Enter the FQDN or IP address of the iSCSI host in the Address field. Enter the port with which to connect to the host when browsing for targets in the Port field. The default is 3260 . If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password . Note You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information. Click Discover . Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets. Important If more than one path access is required, you must discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported. Important When using the REST API iscsilogin method to log in, you must use the iscsi details from the discovered targets results in the discoveriscsi method. See iscsilogin in the REST API Guide for more information. Click the + button to the desired target. This expands the entry and displays all unused LUNs attached to the target. Select the check box for each LUN that you are using to create the storage domain. Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding. If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond . 7.3. Adding FCP Storage This procedure shows you how to attach existing FCP storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the storage domain. Select an FCP Data Center from the drop-down list. If you do not yet have an appropriate FCP data center, select (none) . 
Select the Domain Function and the Storage Type from the drop-down lists. The storage domain types that are not compatible with the chosen data center are not available. Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center's SPM host. Important All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured. The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . The new FCP data domain remains in a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center. 7.4. Adding Red Hat Gluster Storage To use Red Hat Gluster Storage with Red Hat Virtualization, see Configuring Red Hat Virtualization with Red Hat Gluster Storage . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see Red Hat Gluster Storage Version Compatibility and Support . | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/adding_storage_domains_to_rhv_she_cli_deploy |
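The REST API methods mentioned above, discoveriscsi and iscsilogin, can be sketched roughly as follows. This is an illustrative sketch only: the Manager FQDN, credentials, host ID, portal address, and target IQN are placeholders, and the exact element names and required fields should be confirmed against the REST API Guide referenced in this section.

# discover iSCSI targets through a specific host (rough sketch, placeholder values)
curl -k -u admin@internal:<password> -H "Content-Type: application/xml" -X POST \
  -d '<action><iscsi><address>iscsi.example.com</address><port>3260</port></iscsi></action>' \
  https://manager.example.com/ovirt-engine/api/hosts/<host_id>/discoveriscsi

# log the same host in to one of the discovered targets (rough sketch, placeholder values)
curl -k -u admin@internal:<password> -H "Content-Type: application/xml" -X POST \
  -d '<action><iscsi><address>iscsi.example.com</address><port>3260</port><target>iqn.2019-01.com.example:target1</target></iscsi></action>' \
  https://manager.example.com/ovirt-engine/api/hosts/<host_id>/iscsilogin

As the section notes, the login must be repeated for every path when multiple paths to the same LUNs are required.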
Chapter 6. Configuring metrics for the monitoring stack | Chapter 6. Configuring metrics for the monitoring stack As a cluster administrator, you can configure the OpenTelemetry Collector custom resource (CR) to perform the following tasks: Create a Prometheus ServiceMonitor CR for scraping the Collector's pipeline metrics and the enabled Prometheus exporters. Configure the Prometheus receiver to scrape metrics from the in-cluster monitoring stack. 6.1. Configuration for sending metrics to the monitoring stack One of two following custom resources (CR) configures the sending of metrics to the monitoring stack: OpenTelemetry Collector CR Prometheus PodMonitor CR A configured OpenTelemetry Collector CR can create a Prometheus ServiceMonitor CR for scraping the Collector's pipeline metrics and the enabled Prometheus exporters. Example of the OpenTelemetry Collector CR with the Prometheus exporter apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: ":8888" pipelines: metrics: exporters: [prometheus] 1 Configures the Operator to create the Prometheus ServiceMonitor CR to scrape the Collector's internal metrics endpoint and Prometheus exporter metric endpoints. The metrics will be stored in the OpenShift monitoring stack. Alternatively, a manually created Prometheus PodMonitor CR can provide fine control, for example removing duplicated labels added during Prometheus scraping. Example of the PodMonitor CR that configures the monitoring stack to scrape the Collector metrics apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: <cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job 1 The name of the OpenTelemetry Collector CR. 2 The name of the internal metrics port for the OpenTelemetry Collector. This port name is always metrics . 3 The name of the Prometheus exporter port for the OpenTelemetry Collector. 6.2. Configuration for receiving metrics from the monitoring stack A configured OpenTelemetry Collector custom resource (CR) can set up the Prometheus receiver to scrape metrics from the in-cluster monitoring stack. 
Example of the OpenTelemetry Collector CR for scraping metrics from the in-cluster monitoring stack apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespce: observability annotations: service.beta.openshift.io/inject-cabundle: "true" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__="<metric_name>"}' 4 metrics_path: '/federate' static_configs: - targets: - "prometheus-k8s.openshift-monitoring.svc.cluster.local:9091" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug] 1 Assigns the cluster-monitoring-view cluster role to the service account of the OpenTelemetry Collector so that it can access the metrics data. 2 Injects the OpenShift service CA for configuring the TLS in the Prometheus receiver. 3 Configures the Prometheus receiver to scrape the federate endpoint from the in-cluster monitoring stack. 4 Uses the Prometheus query language to select the metrics to be scraped. See the in-cluster monitoring documentation for more details and limitations of the federate endpoint. 5 Configures the debug exporter to print the metrics to the standard output. 6.3. Additional resources Querying metrics by using the federation endpoint for Prometheus | [
"apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector spec: mode: deployment observability: metrics: enableMetrics: true 1 config: exporters: prometheus: endpoint: 0.0.0.0:8889 resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped service: telemetry: metrics: address: \":8888\" pipelines: metrics: exporters: [prometheus]",
"apiVersion: monitoring.coreos.com/v1 kind: PodMonitor metadata: name: otel-collector spec: selector: matchLabels: app.kubernetes.io/name: <cr_name>-collector 1 podMetricsEndpoints: - port: metrics 2 - port: promexporter 3 relabelings: - action: labeldrop regex: pod - action: labeldrop regex: container - action: labeldrop regex: endpoint metricRelabelings: - action: labeldrop regex: instance - action: labeldrop regex: job",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-monitoring-view 1 subjects: - kind: ServiceAccount name: otel-collector namespace: observability --- kind: ConfigMap apiVersion: v1 metadata: name: cabundle namespce: observability annotations: service.beta.openshift.io/inject-cabundle: \"true\" 2 --- apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: volumeMounts: - name: cabundle-volume mountPath: /etc/pki/ca-trust/source/service-ca readOnly: true volumes: - name: cabundle-volume configMap: name: cabundle mode: deployment config: receivers: prometheus: 3 config: scrape_configs: - job_name: 'federate' scrape_interval: 15s scheme: https tls_config: ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token honor_labels: false params: 'match[]': - '{__name__=\"<metric_name>\"}' 4 metrics_path: '/federate' static_configs: - targets: - \"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\" exporters: debug: 5 verbosity: detailed service: pipelines: metrics: receivers: [prometheus] processors: [] exporters: [debug]"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/red_hat_build_of_opentelemetry/otel-configuring-metrics-for-monitoring-stack |
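As a quick way to confirm the metrics flow described above, the Collector's internal telemetry endpoint can be queried directly. This sketch assumes the OpenTelemetryCollector CR is named otel in the observability namespace, so the Operator typically creates a Deployment named otel-collector, and that the internal metrics address is the default :8888; adjust the names and port to your CR.

oc -n observability get servicemonitor,podmonitor
oc -n observability port-forward deployment/otel-collector 8888:8888 &
curl -s http://localhost:8888/metrics | grep '^otelcol_'

The first command verifies that the Operator, or your manually created PodMonitor, registered the monitoring resources; the curl output should list the otelcol_* pipeline metrics that Prometheus will scrape.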
Chapter 21. Maven settings and repositories for Red Hat Process Automation Manager | Chapter 21. Maven settings and repositories for Red Hat Process Automation Manager When you create a Red Hat Process Automation Manager project, Business Central uses the Maven repositories that are configured for Business Central. You can use the Maven global or user settings to direct all Red Hat Process Automation Manager projects to retrieve dependencies from the public Red Hat Process Automation Manager repository by modifying the Maven project object model (POM) file ( pom.xml ). You can also configure Business Central and KIE Server to use an external Maven repository or prepare a Maven mirror for offline use. For more information about Red Hat Process Automation Manager packaging and deployment options, see Packaging and deploying an Red Hat Process Automation Manager project . 21.1. Adding Maven dependencies for Red Hat Process Automation Manager To use the correct Maven dependencies in your Red Hat Process Automation Manager project, add the Red Hat Business Automation bill of materials (BOM) files to the project's pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? . Procedure Declare the Red Hat Business Automation BOM in the pom.xml file: <dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies> Declare dependencies required for your project in the <dependencies> tag. After you import the product BOM into your project, the versions of the user-facing product dependencies are defined so you do not need to specify the <version> sub-element of these <dependency> elements. However, you must use the <dependency> element to declare dependencies which you want to use in your project. For standalone projects that are not authored in Business Central, specify all dependencies required for your projects. In projects that you author in Business Central, the basic decision engine and process engine dependencies are provided automatically by Business Central. 
For a basic Red Hat Process Automation Manager project, declare the following dependencies, depending on the features that you want to use: Embedded process engine dependencies <!-- Public KIE API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <!-- Core dependencies for process engine --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow-builder</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-bpmn2</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-runtime-manager</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-persistence-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-query-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-audit</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <!-- Dependency needed for default WorkItemHandler implementations. --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-workitems-core</artifactId> </dependency> <!-- Logging dependency. You can use any logging framework compatible with slf4j. --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency> For a Red Hat Process Automation Manager project that uses CDI, you typically declare the following dependencies: CDI-enabled process engine dependencies <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-services-cdi</artifactId> </dependency> For a basic Red Hat Process Automation Manager project, declare the following dependencies: Embedded decision engine dependencies <dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. 
--> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency> To use KIE Server, declare the following dependencies: Client application KIE Server dependencies <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency> To create a remote client for Red Hat Process Automation Manager, declare the following dependency: Client dependency <dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency> When creating a JAR file that includes assets, such as rules and process definitions, specify the packaging type for your Maven project as kjar and use org.kie:kie-maven-plugin to process the kjar packaging type located under the <project> element. In the following example, USD{kie.version} is the Maven library version listed in What is the mapping between Red Hat Process Automation Manager and the Maven library version? : <packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build> 21.2. Configuring an external Maven repository for Business Central and KIE Server You can configure Business Central and KIE Server to use an external Maven repository, such as Nexus or Artifactory, instead of the built-in repository. This enables Business Central and KIE Server to access and download artifacts that are maintained in the external Maven repository. Important Artifacts in the repository do not receive automated security patches because Maven requires that artifacts be immutable. As a result, artifacts that are missing patches for known security flaws will remain in the repository to avoid breaking builds that depend on them. The version numbers of patched artifacts are incremented. For more information, see JBoss Enterprise Maven Repository . Note For information about configuring an external Maven repository for an authoring environment on Red Hat OpenShift Container Platform, see the following documents: Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 3 using templates Prerequisites Business Central and KIE Server are installed. For installation options, see Planning a Red Hat Process Automation Manager installation . Procedure Create a Maven settings.xml file with connection and access details for your external repository. For details about the settings.xml file, see the Maven Settings Reference . Save the file in a known location, for example, /opt/custom-config/settings.xml . In your Red Hat Process Automation Manager installation directory, navigate to the standalone-full.xml file. For example, if you use a Red Hat JBoss EAP installation for Red Hat Process Automation Manager go to USDEAP_HOME/standalone/configuration/standalone-full.xml . Open standalone-full.xml and under the <system-properties> tag, set the kie.maven.settings.custom property to the full path name of the settings.xml file. For example: <property name="kie.maven.settings.custom" value="/opt/custom-config/settings.xml"/> Start or restart Business Central and KIE Server. steps For each Business Central project that you want to export or push as a KJAR artifact to the external Maven repository, you must add the repository information in the project pom.xml file. 
For instructions, see Packaging and deploying an Red Hat Process Automation Manager project . 21.3. Preparing a Maven mirror repository for offline use If your Red Hat Process Automation Manager deployment does not have outgoing access to the public Internet, you must prepare a Maven repository with a mirror of all the necessary artifacts and make this repository available to your environment. Note You do not need to complete this procedure if your Red Hat Process Automation Manager deployment is connected to the Internet. Prerequisites A computer that has outgoing access to the public Internet is available. Procedure On the computer that has an outgoing connection to the public Internet, complete the following steps: Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download and extract the Red Hat Process Automation Manager 7.13.5 Offliner Content List ( rhpam-7.13.5-offliner.zip ) product deliverable file. Extract the contents of the rhpam-7.13.5-offliner.zip file into any directory. Change to the directory and enter the following command: This command creates the repository subdirectory and downloads the necessary artifacts into this subdirectory. This is the mirror repository. If a message reports that some downloads have failed, run the same command again. If downloads fail again, contact Red Hat support. If you developed services outside of Business Central and they have additional dependencies, add the dependencies to the mirror repository. If you developed the services as Maven projects, you can use the following steps to prepare these dependencies automatically. Complete the steps on the computer that has an outgoing connection to the public Internet. Create a backup of the local Maven cache directory ( ~/.m2/repository ) and then clear the directory. Build the source of your projects using the mvn clean install command. For every project, enter the following command to ensure that Maven downloads all runtime dependencies for all the artifacts generated by the project: Replace /path/to/project/pom.xml with the path of the pom.xml file of the project. Copy the contents of the local Maven cache directory ( ~/.m2/repository ) to the repository subdirectory that was created. Copy the contents of the repository subdirectory to a directory on the computer on which you deployed Red Hat Process Automation Manager. This directory becomes the offline Maven mirror repository. Create and configure a settings.xml file for your Red Hat Process Automation Manager deployment as described in Section 21.2, "Configuring an external Maven repository for Business Central and KIE Server" . Make the following changes in the settings.xml file: Under the <profile> tag, if a <repositories> or <pluginRepositores> tag is missing, add the missing tags. Under <repositories> add the following content: <repository> <id>offline-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> Replace /path/to/repo with the full path to the local Maven mirror repository directory. 
Under <pluginRepositories> add the following content: <repository> <id>offline-plugin-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> Replace /path/to/repo with the full path to the local Maven mirror repository directory. Set the kie.maven.offline.force property for Business Central to true . For instructions about setting properties for Business Central, see Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . | [
"<dependencyManagement> <dependencies> <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- Your dependencies --> </dependencies>",
"<!-- Public KIE API --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <!-- Core dependencies for process engine --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-flow-builder</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-bpmn2</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-runtime-manager</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-persistence-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-query-jpa</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-audit</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <!-- Dependency needed for default WorkItemHandler implementations. --> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-workitems-core</artifactId> </dependency> <!-- Logging dependency. You can use any logging framework compatible with slf4j. --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency>",
"<dependency> <groupId>org.kie</groupId> <artifactId>kie-api</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-kie-services</artifactId> </dependency> <dependency> <groupId>org.jbpm</groupId> <artifactId>jbpm-services-cdi</artifactId> </dependency>",
"<dependency> <groupId>org.drools</groupId> <artifactId>drools-compiler</artifactId> </dependency> <!-- Dependency for persistence support. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-persistence-jpa</artifactId> </dependency> <!-- Dependencies for decision tables, templates, and scorecards. For other assets, declare org.drools:business-central-models-* dependencies. --> <dependency> <groupId>org.drools</groupId> <artifactId>drools-decisiontables</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-templates</artifactId> </dependency> <dependency> <groupId>org.drools</groupId> <artifactId>drools-scorecards</artifactId> </dependency> <!-- Dependency for loading KJARs from a Maven repository using KieScanner. --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> </dependency>",
"<dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> </dependency>",
"<dependency> <groupId>org.uberfire</groupId> <artifactId>uberfire-rest-client</artifactId> </dependency>",
"<packaging>kjar</packaging> <build> <plugins> <plugin> <groupId>org.kie</groupId> <artifactId>kie-maven-plugin</artifactId> <version>USD{kie.version}</version> <extensions>true</extensions> </plugin> </plugins> </build>",
"<property name=\"kie.maven.settings.custom\" value=\"/opt/custom-config/settings.xml\"/>",
"./offline-repo-builder.sh offliner.txt",
"mvn -e -DskipTests dependency:go-offline -f /path/to/project/pom.xml --batch-mode -Djava.net.preferIPv4Stack=true",
"<repository> <id>offline-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository>",
"<repository> <id>offline-plugin-repository</id> <url>file:///path/to/repo</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository>"
] | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/maven-repo-using-con_install-on-eap |
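For additional context, the repository and plugin repository entries shown above typically live in a Maven settings.xml profile. The following is only a sketch and is not taken from the product documentation: the profile id offline-profile is an illustrative assumption, and file:///path/to/repo must point at the mirror directory built with offline-repo-builder.sh. Note that Maven expects <pluginRepository> elements inside <pluginRepositories>.

<settings>
  <profiles>
    <profile>
      <id>offline-profile</id>
      <repositories>
        <repository>
          <id>offline-repository</id>
          <url>file:///path/to/repo</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </repository>
      </repositories>
      <pluginRepositories>
        <pluginRepository>
          <id>offline-plugin-repository</id>
          <url>file:///path/to/repo</url>
          <releases><enabled>true</enabled></releases>
          <snapshots><enabled>false</enabled></snapshots>
        </pluginRepository>
      </pluginRepositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>offline-profile</activeProfile>
  </activeProfiles>
</settings>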
Chapter 9. Networking | Chapter 9. Networking 9.1. NetworkManager 9.1.1. Legacy network scripts support Network scripts are deprecated in Red Hat Enterprise Linux 8 and are no longer provided by default. The basic installation provides a new version of the ifup and ifdown scripts which call NetworkManager through the nmcli tool. In Red Hat Enterprise Linux 8, to run the ifup and the ifdown scripts, NetworkManager must be running. Note Custom commands in /sbin/ifup-local , ifdown-pre-local and ifdown-local scripts are not executed. If any of these scripts are required, the installation of the deprecated network scripts in the system is still possible with the following command: The ifup and the ifdown scripts link to the installed legacy network scripts. Calling the legacy network scripts shows a warning about their deprecation. 9.1.2. NetworkManager supports SR-IOV virtual functions In Red Hat Enterprise Linux 8, NetworkManager allows configuring the number of virtual functions (VF) for interfaces that support single-root I/O virtualization (SR-IOV). Additionally, NetworkManager allows configuring some attributes of the VFs, such as the MAC address, VLAN, the spoof checking setting and allowed bitrates. Note that all properties related to SR-IOV are available in the sriov connection setting. For more details, see the nm-settings(5) man page on your system. 9.1.3. NetworkManager supports a wildcard interface name match for connections Previously, it was possible to restrict a connection to a given interface using only an exact match on the interface name. With this update, connections have a new match.interface-name property which supports wildcards. This update enables users to choose the interface for a connection in a more flexible way using a wildcard pattern. 9.1.4. NetworkManager supports configuring ethtool offload features With this enhancement, NetworkManager supports configuring ethtool offload features, and users no longer need to use init scripts or a NetworkManager dispatcher script. As a result, users can now configure the offload feature as a part of the connection profile using one of the following methods: By using the nmcli utility By editing keyfiles in the /etc/NetworkManager/system-connections/ directory By editing the /etc/sysconfig/network-scripts/ifcfg-* files Note that this feature is currently not supported in graphical interfaces and in the nmtui utility. For further details, see Configuring an ethtool offload feature by using nmcli . 9.1.5. NetworkManager now uses the internal DHCP plug-in by default NetworkManager supports the internal and dhclient DHCP plug-ins. By default, NetworkManager in Red Hat Enterprise Linux (RHEL) 7 uses the dhclient and RHEL 8 the internal plug-in. In certain situations, the plug-ins behave differently. For example, dhclient can use additional settings specified in the /etc/dhcp/ directory. If you upgrade from RHEL 7 to RHEL 8 and NetworkManager behaves differently, add the following setting to the [main] section in the /etc/NetworkManager/NetworkManager.conf file to use the dhclient plug-in: 9.1.6. The NetworkManager-config-server package is not installed by default in RHEL 8 The NetworkManager-config-server package is only installed by default if you select either the Server or Server with GUI base environment during the setup. If you selected a different environment, use the yum install NetworkManager-config-server command to install the package. 9.2. Packet filtering 9.2.1. 
nftables replaces iptables as the default network packet filtering framework The nftables framework provides packet classification facilities and it is the designated successor to the iptables , ip6tables , arptables , ebtables , and ipset tools. It offers numerous improvements in convenience, features, and performance over packet-filtering tools, most notably: lookup tables instead of linear processing a single framework for both the IPv4 and IPv6 protocols rules all applied atomically instead of fetching, updating, and storing a complete rule set support for debugging and tracing in the rule set ( nftrace ) and monitoring trace events (in the nft tool) more consistent and compact syntax, no protocol-specific extensions a Netlink API for third-party applications Similarly to iptables , nftables use tables for storing chains. The chains contain individual rules for performing actions. The nft tool replaces all tools from the packet-filtering frameworks. The libnftables library can be used for low-level interaction with nftables Netlink API over the libmnl library. The iptables , ip6tables , ebtables and arptables tools are replaced by nftables-based drop-in replacements with the same name. While external behavior is identical to their legacy counterparts, internally they use nftables with legacy netfilter kernel modules through a compatibility interface where required. Effect of the modules on the nftables rule set can be observed using the nft list ruleset command. Since these tools add tables, chains, and rules to the nftables rule set, be aware that nftables rule-set operations, such as the nft flush ruleset command, might affect rule sets installed using the formerly separate legacy commands. To quickly identify which variant of the tool is present, version information has been updated to include the back-end name. In RHEL 8, the nftables-based iptables tool prints the following version string: For comparison, the following version information is printed if legacy iptables tool is present: 9.2.2. Arptables FORWARD is removed from filter tables in RHEL 8 The arptables FORWARD chain functionality has been removed in Red Hat Enterprise Linux (RHEL) 8. You can now use the FORWARD chain of the ebtables tool adding the rules into it. 9.2.3. Output of iptables-ebtables is not 100% compatible with ebtables In RHEL 8, the ebtables command is provided by the iptables-ebtables package, which contains an nftables -based reimplementation of the tool. This tool has a different code base, and its output deviates in aspects, which are either negligible or deliberate design choices. Consequently, when migrating your scripts parsing some ebtables output, adjust the scripts to reflect the following: MAC address formatting has been changed to be fixed in length. Where necessary, individual byte values contain a leading zero to maintain the format of two characters per octet. Formatting of IPv6 prefixes has been changed to conform with RFC 4291. The trailing part after the slash character no longer contains a netmask in the IPv6 address format but a prefix length. This change applies to valid (left-contiguous) masks only, while others are still printed in the old formatting. 9.2.4. New tools to convert iptables to nftables This update adds the iptables-translate and ip6tables-translate tools to convert the existing iptables or ip6tables rules into the equivalent ones for nftables . Note that some extensions lack translation support. 
If such an extension exists, the tool prints the untranslated rule prefixed with the # sign. For example: Additionally, users can use the iptables-restore-translate and ip6tables-restore-translate tools to translate a dump of rules. Note that, beforehand, users can use the iptables-save or ip6tables-save commands to print a dump of the current rules. For example: 9.3. Changes in wpa_supplicant 9.3.1. journalctl can now read the wpa_supplicant log In Red Hat Enterprise Linux (RHEL) 8, the wpa_supplicant package is built with CONFIG_DEBUG_SYSLOG enabled. This allows reading the wpa_supplicant log using the journalctl utility instead of checking the contents of the /var/log/wpa_supplicant.log file. 9.3.2. The compile-time support for wireless extensions in wpa_supplicant is disabled The wpa_supplicant package does not support wireless extensions. A user who tries to use wext as a command-line argument, or to use it on old adapters that only support wireless extensions, will not be able to run the wpa_supplicant daemon. 9.4. A new data chunk type, I-DATA, added to SCTP This update adds a new data chunk type, I-DATA, and stream schedulers to the Stream Control Transmission Protocol (SCTP). Previously, SCTP sent user messages in the same order as they were sent by a user. Consequently, a large SCTP user message blocked all other messages in any stream until completely sent. When using I-DATA chunks, the Transmission Sequence Number (TSN) field is not overloaded. As a result, SCTP can now schedule the streams in different ways, and I-DATA allows interleaving of user messages (RFC 8260). Note that both peers must support the I-DATA chunk type. 9.5. Notable TCP features in RHEL 8 Red Hat Enterprise Linux 8 is distributed with TCP networking stack version 4.18, which provides higher performance, better scalability, and more stability. Performance is boosted especially for busy TCP servers with a high ingress connection rate. Additionally, two new TCP congestion algorithms, BBR and NV, are available, offering lower latency and better throughput than cubic in most scenarios. 9.5.1. TCP BBR support in RHEL 8 A new TCP congestion control algorithm, Bottleneck Bandwidth and Round-trip time (BBR), is now supported in Red Hat Enterprise Linux (RHEL) 8. BBR attempts to determine the bandwidth of the bottleneck link and the Round-trip time (RTT). Most congestion control algorithms, including CUBIC, the default Linux TCP congestion control algorithm, are based on packet loss and have problems on high-throughput links. BBR does not react to loss events directly; it adjusts the TCP pacing rate to match the available bandwidth. For more information about this, see the Red Hat Knowledgebase solution How to configure TCP BBR congestion control algorithm. 9.6. VLAN-related changes 9.6.1. IPVLAN virtual network drivers are now supported In Red Hat Enterprise Linux 8.0, the kernel includes support for IPVLAN virtual network drivers. With this update, IPVLAN virtual Network Interface Cards (NICs) enable network connectivity for multiple containers while exposing a single MAC address to the local network. This allows a single host to run many containers, overcoming the possible limitation on the number of MAC addresses supported by the peer networking equipment. 9.6.2. Certain network adapters require a firmware update to fully support 802.1ad The firmware of certain network adapters does not fully support the 802.1ad standard, which is also called Q-in-Q or stacked virtual local area networks (VLANs).
Contact your hardware vendor for details on how to verify that your network adapter uses firmware that supports the 802.1ad standard and how to update the firmware. As a result, with the correct firmware, configuring stacked VLANs on RHEL 8.0 works as expected. 9.7. Network interface name changes In Red Hat Enterprise Linux 8, the same consistent network device naming scheme is used by default as in RHEL 7. However, certain kernel drivers, such as e1000e, nfp, qede, sfc, tg3, and bnxt_en, changed their consistent names on a fresh installation of RHEL 8. However, the names are preserved on upgrade from RHEL 7. 9.8. The ipv6, netmask, gateway, and hostname kernel parameters have been removed The ipv6, netmask, gateway, and hostname kernel parameters for configuring the network on the kernel command line are no longer available since RHEL 8.3. Instead, use the consolidated ip parameter that accepts different formats, such as the following: For further details about the individual fields and other formats this parameter accepts, see the description of the ip parameter in the dracut.cmdline(7) man page on your system. 9.9. The -ok option of the tc command removed The -ok option of the tc command has been removed in Red Hat Enterprise Linux 8. As a workaround, users can implement code that communicates with the kernel directly via netlink. Response messages received indicate the completion and status of sent requests. An alternative for less time-critical applications is to call tc for each command separately. This can be done with a custom script which simulates the tc -batch behavior by printing OK for each successful tc invocation. 9.10. The PTP capabilities output format of the ethtool utility has changed Starting with RHEL 8.4, the ethtool utility uses the netlink interface instead of the ioctl() system call to communicate with the kernel. Consequently, when you use the ethtool -T <network_controller> command, the format of Precision Time Protocol (PTP) values changes. Previously, with the ioctl() interface, ethtool translated the capability bit names by using an ethtool-internal string table, and the ethtool -T <network_controller> command displayed, for example: With the netlink interface, ethtool receives the strings from the kernel. These strings do not include the internal SOF_TIMESTAMPING_* names. Therefore, ethtool -T <network_controller> now displays, for example: If you use the PTP capabilities output of ethtool in scripts or applications, update them accordingly. | [
"# yum install network-scripts",
"[main] dhcp=dhclient",
"iptables --version iptables v1.8.0 (nf_tables)",
"iptables --version iptables v1.8.0 (legacy)",
"| % iptables-translate -A INPUT -j CHECKSUM --checksum-fill | nft # -A INPUT -j CHECKSUM --checksum-fill",
"| % sudo iptables-save >/tmp/iptables.dump | % iptables-restore-translate -f /tmp/iptables.dump | # Translated by iptables-restore-translate v1.8.0 on Wed Oct 17 17:00:13 2018 | add table ip nat |",
"ip= IP_address : peer : gateway_IP_address : net_mask : host_name : interface_name : configuration_method",
"Time stamping parameters for <network_controller> : Capabilities: hardware-transmit (SOF_TIMESTAMPING_TX_HARDWARE) software-transmit (SOF_TIMESTAMPING_TX_SOFTWARE)",
"Time stamping parameters for <network_controller> : Capabilities: hardware-transmit software-transmit"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/networking_considerations-in-adopting-rhel-8 |
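As a brief illustration of the nft syntax referred to above, the following sketch adds a table, a chain, and a single rule, then lists the resulting rule set. It assumes a host where an extra inet table named example_filter does not conflict with existing rules; the table and chain names are illustrative only.

# nft add table inet example_filter
# nft add chain inet example_filter input '{ type filter hook input priority 0; policy accept; }'
# nft add rule inet example_filter input tcp dport 22 accept
# nft list ruleset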
6.6. Networking | 6.6. Networking 389-ds-base component, BZ# 1008013 Under certain conditions, when the server is processing multiple outgoing replication or windows sync agreements using the TLS or SSL protocol, and processing incoming client requests that use TLS or SSL, and incoming BIND requests where the password used is hashed using SSHA512, the server becomes unresponsive to new incoming client requests. A restart of the dirsrv service is required. As the server is unresponsive, restarting can require terminating the ns-slapd process by running the kill -9 command. kernel component In a cluster environment, multicast traffic from the guest to a host can be unreliable. To work around this problem, enable multicast_querier for the bridge. The setting is located in the /sys/class/net/<bridge_name>/bridge/multicast_querier file. Note that if the setting is not available, the problem should not occur. kernel component A missing part of the bcma driver causes the brcmsmac driver not to load automatically when the bcma driver scans for devices. This causes the kernel not to load the brcmsmac module automatically on boot. Symptoms can be confirmed by running the lspci -v command for the device and noting the driver to be bcma, not brcmsmac. To load the driver manually, run modprobe brcmsmac on the command line. 389-ds-base component Under certain conditions, when the server is processing multiple outgoing replication or windows sync agreements using the TLS or SSL protocol, and processing incoming client requests that use TLS or SSL and Simple Paged Results, the server becomes unresponsive to new incoming client requests. The dirsrv service will stop responding to new incoming client requests. A restart of the dirsrv service is required to restore service. kernel component, BZ# 1003475 When some Fibre Channel over Ethernet (FCoE) switch ports connected to the bfa host bus adapter go offline and then return in the online state, the bfa port may not re-establish the connection with the switch. This is due to a failure of the bfa driver's retry logic when interacting with certain switches. To work around this problem, reset the bfa link. This can be done either by running: or by running: anaconda component, BZ# 984129 For HP systems running in HP FlexFabric mode, the designated iSCSI function can only be used for iSCSI offload-related operations and will not be able to perform any other Layer 2 networking tasks, for example, DHCP. In the case of iSCSI boot from SAN, the same SAN MAC address is exposed to both the corresponding ifconfig record and the iSCSI Boot Firmware Table (iBFT); therefore, Anaconda will skip the network selection prompt and will attempt to acquire the IP address as specified by iBFT. If DHCP is desired, Anaconda will attempt to acquire DHCP using this iSCSI function, which will fail, and Anaconda will then try to acquire DHCP indefinitely. To work around this problem, if DHCP is desired, the user must use the asknetwork installation parameter and provide a "dummy" static IP address to the corresponding network interface of the iSCSI function. This prevents Anaconda from entering an infinite loop and allows it to request the iSCSI offload function to perform DHCP acquisition instead.
iscsi-initiator-utils component, BZ# 825185 If the corresponding network interface has not been brought up by dracut or the tools from the iscsi-initiator-utils package, this prevents the correct MAC address from matching the offload interface, and host bus adapter (HBA) mode will not work without manual intervention to bring the corresponding network interface up. To work around this problem, the user must select the corresponding Layer 2 network interface when anaconda prompts the user to choose "which network interface to install through". This will inherently bring up the offload interface for the installation. kernel component When an igb link us up, the following ethtool fields display incorrect values as follows: Supported ports: [ ] - for example, an empty bracket can be displayed. Supported pause frame use: No - however, pause frame is supported. Supports auto-negotiation: No - auto-negotiation is supported. Advertised pause frame use: No - advertised pause frame is turned on. Advertised auto-negotiation: No - advertised auto-negotiation is turned on. Speed: Unknown! - the speed is known and can be verified using the dmesg tool. linuxptp component End-to-End (E2E) slaves that communicated with an E2E master once can synchronize to Peer-to-Peer (P2P) masters and vice versa. The slaves cannot update their path delay value because E2E ports reject peer delay requests from P2P ports. However, E2E ports accept SYNC messages from P2P ports and the slaves keep updating clock frequency based on undesired offset values that are calculated by using the old path delay value. Therefore, a time gap will occur if the master port is started with an incorrect delay mechanism. The "delay request on P2P" or "pdelay_req on E2E port" message can appear. To work around these problems, use a single delay mechanism for one PTP communication path. Also, because E2E and P2P mismatch can trigger a time gap of slave clock, pay attention to the configuration when starting or restarting a node on a running domain. samba4 component, BZ# 878168 If configured, the Active Directory (AD) DNS server returns IPv4 and IPv6 addresses of an AD server. If the FreeIPA server cannot connect to the AD server with an IPv6 address, running the ipa trust-add command will fail even if it would be possible to use IPv4. To work around this problem, add the IPv4 address of the AD server to the /etc/hosts file. In this case, the FreeIPA server will use only the IPv4 address and executing ipa trust-add will be successful. kernel component Destroying the root port before any NPIV ports can cause unexpected system behavior, including a full system crash. Note that one instance where the root port is destroyed before the NPIV ports is when the system is shut down. To work around this problem, destroy NPIV ports before destroying the root port that the NPIV ports were created on. This means that for each created NPIV port, the user should write to the sysfs vport_delete interface to delete that NPIV port. This should be done before the root port is destroyed. Users are advised to script the NPIV port deletion and configure the system such that the script is executed before the fcoe service is stopped, in the shutdown sequence. kernel component A Linux LIO FCoE target causes the bfa driver to reset all FCoE targets which might lead to data corruption on LUN. To avoid these problems, do not use the bfa driver with a Linux FCoE target. 
kernel component Typically, on platforms with no Intelligent Platform Management Interface (IPMI) hardware, the user can see the following message on the boot console and in the dmesg log: This message can be safely ignored, unless the system really does have IPMI hardware. In that case, the message indicates that the IPMI hardware could not be initialized. In order to support Advanced Configuration and Power Interface (ACPI) opregion access to IPMI functionality early in the boot, the IPMI driver has been statically linked with the kernel image. This means that the IPMI driver is "loaded" whether or not there is any hardware. The IPMI driver will try to initialize the IPMI hardware, but if there is no IPMI hardware present on the booting platform, the driver will print error messages on the console and in the dmesg log. Some of these error messages do not identify themselves as having been issued by the IPMI driver, so they can appear to be serious when they are harmless. fcoe-utils component After an ixgbe Fibre Channel over Ethernet (FCoE) session is created, a server reboot can cause some or all of the FCoE sessions not to be created automatically. To work around this problem, perform the following steps (assuming that eth0 is the missing NIC for the FCoE session): libibverbs component The InfiniBand UD transport test utility could become unresponsive when the ibv_ud_pingpong command was used with a packet size of 2048 or greater. UD is limited to no more than the smallest MTU of any point in the path between point A and B, which is between 0 and 4096 given that the largest MTU supported (but not the smallest nor required) is 4096. If the underlying Ethernet is jumbo frame capable, and with a 4096 IB MTU on an RoCE device, the max packet size that can be used with UD is 4012 bytes. bind-dyndb-ldap component IPA creates a new DNS zone in two separate steps. When the new zone is created, it is invalid for a short period of time. A/AAAA records for the name server belonging to the new zone are created after this delay. Sometimes, BIND attempts to load this invalid zone and fails. In such a case, reload BIND by running either rndc reload or service named restart. bind-dyndb-ldap component, BZ# 1142176 The bind-dyndb-ldap library incorrectly compares the current time and the expiration time of the Kerberos ticket used for authentication to an LDAP server. As a consequence, the Kerberos ticket is not renewed under certain circumstances, which causes the connection to the LDAP server to fail. The connection failure often happens after a BIND service reload is triggered by the logrotate utility, and you need to run the pkill -9 named command to terminate BIND after a deadlock occurs. To work around this problem, set the validity period of the Kerberos ticket to be at least 10 minutes shorter than the logrotate period. bind-dyndb-ldap component, BZ# 1142152 The BIND service incorrectly handles errors returned by dynamic databases (from the dyndb API). As a consequence, BIND enters a deadlock situation on shutdown under certain circumstances. No workaround is available at the moment. If the deadlock occurs, terminate BIND by running the pkill -9 named command and restart the service manually. kernel component The latest version of the sfc NIC driver causes lower UDP and TX performance with large amounts of fragmented UDP packets. This problem can be avoided by setting a constant interrupt moderation period (not adaptive moderation) on both sides, sending and receiving.
kernel component Some network interface cards (NICs) might fail to get an IPv4 address assigned after the system is booted. The default time to wait for the link to come up is 5 seconds. To work around this issue, increase this wait time by specifying the LINKDELAY directive in the interface configuration file. For example, add the following line to the /etc/sysconfig/network-scripts/ifcfg- <interface> file: In addition, check STP settings on all network switches in the path of the DHCP server as the default STP forward delay is 15 seconds. samba component Current Samba versions shipped with Red Hat Enterprise Linux 6 are not able to fully control the user and group database when using the ldapsam_compat back end. This back end was never designed to run a production LDAP and Samba environment for a long period of time. The ldapsam_compat back end was created as a tool to ease migration from historical Samba releases (version 2.2.x) to Samba version 3 and greater using the new ldapsam back end and the new LDAP schema. The ldapsam_compat back end lack various important LDAP attributes and object classes in order to fully provide full user and group management. In particular, it cannot allocate user and group IDs. In the Red Hat Enterprise Linux Reference Guide , it is pointed out that this back end is likely to be deprecated in future releases. Refer to Samba's documentation for instructions on how to migrate existing setups to the new LDAP schema. When you are not able to upgrade to the new LDAP schema (though upgrading is strongly recommended and is the preferred solution), you may work around this issue by keeping a dedicated machine running an older version of Samba (v2.2.x) for the purpose of user account management. Alternatively, you can create user accounts with standard LDIF files. The important part is the assignment of user and group IDs. In that case, the old Samba 2.2 algorithmic mapping from Windows RIDs to Unix IDs is the following: user RID = UID * 2 + 1000 , while for groups it is: group RID = GID * 2 + 1001 . With these workarounds, users can continue using the ldapsam_compat back end with their existing LDAP setup even when all the above restrictions apply. kernel component Because Red Hat Enterprise Linux 6 defaults to using Strict Reverse Path filtering, packets are dropped by default when the route for outbound traffic differs from the route of incoming traffic. This is in line with current recommended practice in RFC3704. For more information about this issue please refer to /usr/share/doc/kernel-doc- <version> /Documentation/networking/ip-sysctl.txt and https://access.redhat.com/site/solutions/53031 . | [
"]# echo 1 > /sys/class/fc_host/host/issue_lip",
"]# modprobe -r bfa && modprobe bfa",
"Could not set up I/O space",
"ifconfig eth0 down ifconfig eth0 up sleep 5 dcbtool sc eth0 dcb on sleep 5 dcbtool sc eth0 pfc e:1 a:1 w:1 dcbtool sc eth0 app:fcoe e:1 a:1 w:1 service fcoe restart",
"LINKDELAY=10"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/networking_issues |
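For the bridge multicast workaround mentioned above, a minimal sketch follows. It assumes the bridge is named br0; substitute the actual bridge name, and note that if the setting does not exist on your kernel, the issue should not occur.

# cat /sys/class/net/br0/bridge/multicast_querier
# echo 1 > /sys/class/net/br0/bridge/multicast_querier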
Securing OpenShift Pipelines | Securing OpenShift Pipelines Red Hat OpenShift Pipelines 1.15 Security features of OpenShift Pipelines Red Hat OpenShift Documentation Team | [
"oc edit TektonConfig config",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: addon: {} chain: artifacts.taskrun.format: tekton config: {}",
"chains.tekton.dev/transparency-upload: \"true\"",
"cosign generate-key-pair k8s://openshift-pipelines/signing-secrets",
"skopeo generate-sigstore-key --output-prefix <mykey> 1",
"base64 -w 0 <mykey>.pub > b64.pub",
"base64 -w 0 <mykey>.private > b64.private",
"echo -n '<passphrase>' | base64 -w 0 > b64.passphrase 1",
"oc create secret generic signing-secrets -n openshift-pipelines",
"oc edit secret -n openshift-pipelines signing-secrets",
"apiVersion: v1 data: cosign.key: <Encoded <mykey>.private> 1 cosign.password: <Encoded passphrase> 2 cosign.pub: <Encoded <mykey>.pub> 3 immutable: true kind: Secret metadata: name: signing-secrets type: Opaque",
"Error from server (AlreadyExists): secrets \"signing-secrets\" already exists",
"oc delete secret signing-secrets -n openshift-pipelines",
"export NAMESPACE=<namespace> 1 export SERVICE_ACCOUNT_NAME=<service_account> 2",
"oc create secret registry-credentials --from-file=.dockerconfigjson \\ 1 --type=kubernetes.io/dockerconfigjson -n USDNAMESPACE",
"oc patch serviceaccount USDSERVICE_ACCOUNT_NAME -p \"{\\\"imagePullSecrets\\\": [{\\\"name\\\": \\\"registry-credentials\\\"}]}\" -n USDNAMESPACE",
"oc create serviceaccount <service_account_name>",
"apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: build-push-task-run-2 spec: taskRunTemplate: serviceAccountName: build-bot 1 taskRef: name: build-push",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: chain: artifacts.oci.storage: \"\" artifacts.taskrun.format: tekton artifacts.taskrun.storage: tekton",
"oc delete po -n openshift-pipelines -l app=tekton-chains-controller",
"oc create -f https://raw.githubusercontent.com/tektoncd/chains/main/examples/taskruns/task-output-image.yaml 1",
"taskrun.tekton.dev/build-push-run-output-image-qbjvh created",
"tkn tr describe --last",
"[...truncated output...] NAME STATUS ∙ create-dir-builtimage-9467f Completed ∙ git-source-sourcerepo-p2sk8 Completed ∙ build-and-push Completed ∙ echo Completed ∙ image-digest-exporter-xlkn7 Completed",
"tkn tr describe --last -o jsonpath=\"{.metadata.annotations.chains\\.tekton\\.dev/signature-taskrun-USDTASKRUN_UID}\" | base64 -d > sig",
"export TASKRUN_UID=USD(tkn tr describe --last -o jsonpath='{.metadata.uid}')",
"cosign verify-blob-attestation --insecure-ignore-tlog --key path/to/cosign.pub --signature sig --type slsaprovenance --check-claims=false /dev/null 1",
"Verified OK",
"cosign generate-key-pair k8s://openshift-pipelines/signing-secrets",
"oc create secret generic <docker_config_secret_name> \\ 1 --from-file <path_to_config.json> 2",
"oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"artifacts.taskrun.format\": \"in-toto\"}}' oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"artifacts.taskrun.storage\": \"oci\"}}' oc patch configmap chains-config -n openshift-pipelines -p='{\"data\":{\"transparency.enabled\": \"true\"}}'",
"oc apply -f examples/kaniko/kaniko.yaml 1",
"export REGISTRY=<url_of_registry> 1 export DOCKERCONFIG_SECRET_NAME=<name_of_the_secret_in_docker_config_json> 2",
"tkn task start --param IMAGE=USDREGISTRY/kaniko-chains --use-param-defaults --workspace name=source,emptyDir=\"\" --workspace name=dockerconfig,secret=USDDOCKERCONFIG_SECRET_NAME kaniko-chains",
"oc get tr <task_run_name> \\ 1 -o json | jq -r .metadata.annotations { \"chains.tekton.dev/signed\": \"true\", }",
"cosign verify --key cosign.pub USDREGISTRY/kaniko-chains cosign verify-attestation --key cosign.pub USDREGISTRY/kaniko-chains",
"rekor-cli search --sha <image_digest> 1 <uuid_1> 2 <uuid_2> 3",
"rekor-cli get --uuid <uuid> --format json | jq -r .Attestation | base64 --decode | jq",
"The format to extract vulnerability summary (adjust the jq command for different JSON structures). jq -rce '{vulnerabilities:{ critical: (.result.summary.CRITICAL), high: (.result.summary.IMPORTANT), medium: (.result.summary.MODERATE), low: (.result.summary.LOW) }}' scan_output.json | tee USD(results.SCAN_OUTPUT.path)",
"apiVersion: tekton.dev/v1 kind: Task metadata: name: vulnerability-scan 1 annotations: task.output.location: results 2 task.results.format: application/json task.results.key: SCAN_OUTPUT 3 spec: results: - description: CVE result format 4 name: SCAN_OUTPUT steps: - name: roxctl 5 image: quay.io/roxctl-tool-image 6 env: - name: ENV_VAR_NAME_1 7 valueFrom: secretKeyRef: key: secret_key_1 name: secret_name_1 env: - name: ENV_VAR_NAME_2 valueFrom: secretKeyRef: key: secret_key_2 name: secret_name_2 script: | 8 #!/bin/sh # Sample shell script echo \"ENV_VAR_NAME_1: \" USDENV_VAR_NAME_1 echo \"ENV_VAR_NAME_2: \" USDENV_VAR_NAME_2 jq --version (adjust the jq command for different JSON structures) curl -k -L -H \"Authorization: Bearer USDENV_VAR_NAME_1\" https://USDENV_VAR_NAME_2/api/cli/download/roxctl-linux --output ./roxctl chmod +x ./roxctl echo \"roxctl version\" ./roxctl version echo \"image from pipeline: \" # Replace the following line with your dynamic image logic DYNAMIC_IMAGE=USD(get_dynamic_image_logic_here) echo \"Dynamic image: USDDYNAMIC_IMAGE\" ./roxctl image scan --insecure-skip-tls-verify -e USDENV_VAR_NAME_2 --image USDDYNAMIC_IMAGE --output json > roxctl_output.json more roxctl_output.json jq -rce \\ 9 '{vulnerabilities:{ critical: (.result.summary.CRITICAL), high: (.result.summary.IMPORTANT), medium: (.result.summary.MODERATE), low: (.result.summary.LOW) }}' scan_output.json | tee USD(results.SCAN_OUTPUT.path)",
"spec: results: - description: The common vulnerabilities and exposures (CVE) result name: SCAN_OUTPUT value: USD(tasks.vulnerability-scan.results.SCAN_OUTPUT)",
"apiVersion: tekton.dev/v1 kind: Task metadata: name: sbom-task 1 annotations: task.output.location: results 2 task.results.format: application/text task.results.key: LINK_TO_SBOM 3 task.results.type: external-link 4 spec: results: - description: Contains the SBOM link 5 name: LINK_TO_SBOM steps: - name: print-sbom-results image: quay.io/image 6 script: | 7 #!/bin/sh syft version syft quay.io/<username>/quarkus-demo:v2 --output cyclonedx-json=sbom-image.json echo 'BEGIN SBOM' cat sbom-image.json echo 'END SBOM' echo 'quay.io/user/workloads/<namespace>/node-express/node-express:build-8e536-1692702836' | tee USD(results.LINK_TO_SBOM.path) 8",
"spec: tasks: - name: sbom-task taskRef: name: sbom-task 1 results: - name: IMAGE_URL 2 description: url value: <oci_image_registry_url> 3",
"cosign download sbom quay.io/<workspace>/user-workload@sha256",
"cosign download sbom quay.io/<workspace>/user-workload@sha256 > sbom.txt",
"{ \"bomFormat\": \"CycloneDX\", \"specVersion\": \"1.4\", \"serialNumber\": \"urn:uuid:89146fc4-342f-496b-9cc9-07a6a1554220\", \"version\": 1, \"metadata\": { }, \"components\": [ { \"bom-ref\": \"pkg:pypi/[email protected]?package-id=d6ad7ed5aac04a8\", \"type\": \"library\", \"author\": \"Armin Ronacher <[email protected]>\", \"name\": \"Flask\", \"version\": \"2.1.0\", \"licenses\": [ { \"license\": { \"id\": \"BSD-3-Clause\" } } ], \"cpe\": \"cpe:2.3:a:armin-ronacher:python-Flask:2.1.0:*:*:*:*:*:*:*\", \"purl\": \"pkg:pypi/[email protected]\", \"properties\": [ { \"name\": \"syft:package:foundBy\", \"value\": \"python-package-cataloger\"",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints allowedCapabilities: - SETFCAP fsGroup: type: MustRunAs",
"oc edit TektonConfig config",
"apiVersion: operator.tekton.dev/v1alpha1 kind: TektonConfig metadata: name: config spec: platforms: openshift: scc: default: \"restricted-v2\" 1 maxAllowed: \"privileged\" 2",
"apiVersion: v1 kind: Namespace metadata: name: test-namespace annotations: operator.tekton.dev/scc: nonroot",
"apiVersion: security.openshift.io/v1 kind: SecurityContextConstraints metadata: annotations: kubernetes.io/description: my-scc is a close replica of anyuid scc. pipelines-scc has fsGroup - RunAsAny. name: my-scc allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true allowPrivilegedContainer: false allowedCapabilities: null defaultAddCapabilities: null fsGroup: type: RunAsAny groups: - system:cluster-admins priority: 10 readOnlyRootFilesystem: false requiredDropCapabilities: - MKNOD runAsUser: type: RunAsAny seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret",
"oc create -f my-scc.yaml",
"oc create serviceaccount fsgroup-runasany",
"oc adm policy add-scc-to-user my-scc -z fsgroup-runasany",
"oc adm policy add-scc-to-user privileged -z fsgroup-runasany",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: <pipeline-run-name> spec: pipelineRef: name: <pipeline-cluster-task-name> taskRunTemplate: serviceAccountName: 'fsgroup-runasany'",
"apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: <task-run-name> spec: taskRef: name: <cluster-task-name> taskRunTemplate: serviceAccountName: 'fsgroup-runasany'",
"service.beta.openshift.io/serving-cert-secret-name=<secret_name>",
"oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: <hostname> to: kind: Service name: frontend 2 tls: termination: reencrypt 3 key: [as in edge termination] certificate: [as in edge termination] caCertificate: [as in edge termination] destinationCACertificate: |- 4 -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/01_binding.yaml",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/02_template.yaml",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/03_trigger.yaml",
"oc label namespace <ns-name> operator.tekton.dev/enable-annotation=enabled",
"oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/03_triggers/04_event_listener.yaml",
"oc create route reencrypt --service=<svc-name> --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=<hostname>",
"apiVersion: v1 kind: Secret metadata: name: git-secret-basic annotations: tekton.dev/git-0: github.com tekton.dev/git-1: gitlab.com type: kubernetes.io/basic-auth stringData: username: <username> 1 password: <password> 2",
"apiVersion: v1 kind: Secret metadata: name: git-secret-ssh annotations: tekton.dev/git-0: https://github.com type: kubernetes.io/ssh-auth stringData: ssh-privatekey: 1",
"apiVersion: v1 kind: Secret metadata: name: docker-secret-basic annotations: tekton.dev/docker-0: quay.io tekton.dev/docker-1: my-registry.example.com type: kubernetes.io/basic-auth stringData: username: <username> 1 password: <password> 2",
"oc create secret generic docker-secret-config --from-file=config.json=/home/user/.docker/config.json --type=kubernetes.io/dockerconfigjson",
"oc create secret docker-registry docker-secret-config --docker-email=<email> \\ 1 --docker-username=<username> \\ 2 --docker-password=<password> \\ 3 --docker-server=my-registry.example.com:5000 4",
"apiVersion: v1 kind: Secret metadata: name: basic-user-pass 1 annotations: tekton.dev/git-0: https://github.com type: kubernetes.io/basic-auth stringData: username: <username> 2 password: <password> 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: build-bot 1 secrets: - name: basic-user-pass 2",
"apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: build-push-task-run-2 1 spec: taskRunTemplate: serviceAccountName: build-bot 2 taskRef: name: build-push 3",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: demo-pipeline 1 namespace: default spec: taskRunTemplate: serviceAccountName: build-bot 2 pipelineRef: name: demo-pipeline 3",
"oc apply --filename secret.yaml,serviceaccount.yaml,run.yaml",
"apiVersion: v1 kind: Secret metadata: name: ssh-key 1 annotations: tekton.dev/git-0: github.com type: kubernetes.io/ssh-auth stringData: ssh-privatekey: 2 known_hosts: 3",
"apiVersion: v1 kind: ServiceAccount metadata: name: build-bot 1 secrets: - name: ssh-key 2",
"apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: build-push-task-run-2 1 spec: taskRunTemplate: serviceAccountName: build-bot 2 taskRef: name: build-push 3",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: demo-pipeline 1 namespace: default spec: taskRunTemplate: serviceAccountName: build-bot 2 pipelineRef: name: demo-pipeline 3",
"oc apply --filename secret.yaml,serviceaccount.yaml,run.yaml",
"oc create secret generic my-registry-credentials \\ 1 --from-file=config.json=/home/user/credentials/config.json 2",
"apiVersion: v1 kind: ServiceAccount metadata: name: container-bot 1 secrets: - name: my-registry-credentials 2",
"apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: build-container-task-run-2 1 spec: taskRunTemplate: serviceAccountName: container-bot 2 taskRef: name: build-container 3",
"apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: demo-pipeline 1 namespace: default spec: taskRunTemplate: serviceAccountName: container-bot 2 pipelineRef: name: demo-pipeline 3",
"oc apply --filename serviceaccount.yaml,run.yaml",
"apiVersion: tekton.dev/v1 kind: Task metadata: name: example-git-task spec: steps: - name: example-git-step script: ln -s USDHOME/.ssh /root/.ssh",
"oc create secret generic my-github-ssh-credentials \\ 1 --from-file=id_ed25519=/home/user/.ssh/id_ed25519 \\ 2 --from-file=known_hosts=/home/user/.ssh/known_hosts 3",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: git-clone spec: workspaces: - name: ssh-directory description: | A .ssh directory with private key, known_hosts, config, etc.",
"tkn task start <task_name> --workspace name=<workspace_name>,secret=<secret_name> 1 #",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: git-clone spec: workspaces: - name: output description: The git repo will be cloned onto the volume backing this Workspace. - name: ssh-directory description: | A .ssh directory with private key, known_hosts, config, etc. Copied to the user's home before git commands are executed. Used to authenticate with the git remote when performing the clone. Binding a Secret to this Workspace is strongly recommended over other volume types params: - name: url description: Repository URL to clone from. type: string - name: revision description: Revision to checkout. (branch, tag, sha, ref, etc...) type: string default: \"\" - name: gitInitImage description: The image providing the git-init binary that this Task runs. type: string default: \"gcr.io/tekton-releases/github.com/tektoncd/pipeline/cmd/git-init:v0.37.0\" results: - name: commit description: The precise commit SHA that was fetched by this Task. - name: url description: The precise URL that was fetched by this Task. steps: - name: clone image: \"USD(params.gitInitImage)\" script: | #!/usr/bin/env sh set -eu # This is necessary for recent version of git git config --global --add safe.directory '*' cp -R \"USD(workspaces.ssh-directory.path)\" \"USD{HOME}\"/.ssh 1 chmod 700 \"USD{HOME}\"/.ssh chmod -R 400 \"USD{HOME}\"/.ssh/* CHECKOUT_DIR=\"USD(workspaces.output.path)/\" /ko-app/git-init -url=\"USD(params.url)\" -revision=\"USD(params.revision)\" -path=\"USD{CHECKOUT_DIR}\" cd \"USD{CHECKOUT_DIR}\" RESULT_SHA=\"USD(git rev-parse HEAD)\" EXIT_CODE=\"USD?\" if [ \"USD{EXIT_CODE}\" != 0 ] ; then exit \"USD{EXIT_CODE}\" fi printf \"%s\" \"USD{RESULT_SHA}\" > \"USD(results.commit.path)\" printf \"%s\" \"USD(params.url)\" > \"USD(results.url.path)\"",
"tkn task start git-clone --param [email protected]:example-github-user/buildkit-tekton --workspace name=output,emptyDir=\"\" --workspace name=ssh-directory,secret=my-github-ssh-credentials --use-param-defaults --showlog",
"oc create secret generic my-registry-credentials \\ 1 --from-file=config.json=/home/user/credentials/config.json 2",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: skopeo-copy spec: workspaces: - name: dockerconfig description: Includes a docker `config.json`",
"tkn task start <task_name> --workspace name=<workspace_name>,secret=<secret_name> 1 #",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: skopeo-copy spec: workspaces: - name: dockerconfig 1 description: Includes a docker `config.json` steps: - name: clone image: quay.io/skopeo/stable:v1.8.0 env: - name: DOCKER_CONFIG value: USD(workspaces.dockerconfig.path) 2 script: | #!/usr/bin/env sh set -eu skopeo copy docker://docker.io/library/ubuntu:latest docker://quay.io/example_repository/ubuntu-copy:latest",
"tkn task start skopeo-copy --workspace name=dockerconfig,secret=my-registry-credentials --use-param-defaults --showlog",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: git-clone-build spec: workspaces: 1 - name: ssh-directory description: | A .ssh directory with private key, known_hosts, config, etc. steps: - name: clone workspaces: 2 - name: ssh-directory - name: build 3",
"oc get task buildah -n openshift-pipelines -o yaml | yq '. |= (del .metadata |= with_entries(select(.key == \"name\" )))' | yq '.kind=\"Task\"' | yq '.metadata.name=\"buildah-as-user\"' | oc create -f -",
"oc edit task buildah-as-user",
"apiVersion: tekton.dev/v1 kind: Task metadata: annotations: io.kubernetes.cri-o.userns-mode: 'auto:size=65536;map-to-root=true' io.openshift.builder: 'true' name: assemble-containerimage namespace: pipeline-namespace spec: description: This cluster task builds an image. stepTemplate: env: - name: HOME value: /tekton/home image: USD(params.builder-image) imagePullPolicy: IfNotPresent name: '' resources: limits: cpu: '1' memory: 4Gi requests: cpu: 100m memory: 2Gi securityContext: capabilities: add: - SETFCAP runAsNonRoot: true runAsUser: 1000 1 workingDir: USD(workspaces.working-directory.path)",
"apiVersion: v1 kind: ServiceAccount metadata: name: pipelines-sa-userid-1000 1 --- kind: SecurityContextConstraints metadata: annotations: name: pipelines-scc-userid-1000 2 allowHostDirVolumePlugin: false allowHostIPC: false allowHostNetwork: false allowHostPID: false allowHostPorts: false allowPrivilegeEscalation: true 3 allowPrivilegedContainer: false allowedCapabilities: null apiVersion: security.openshift.io/v1 defaultAddCapabilities: null fsGroup: type: MustRunAs groups: - system:cluster-admins priority: 10 readOnlyRootFilesystem: false requiredDropCapabilities: - MKNOD - KILL runAsUser: 4 type: MustRunAs uid: 1000 seLinuxContext: type: MustRunAs supplementalGroups: type: RunAsAny users: [] volumes: - configMap - downwardAPI - emptyDir - persistentVolumeClaim - projected - secret --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: pipelines-scc-userid-1000-clusterrole 5 rules: - apiGroups: - security.openshift.io resourceNames: - pipelines-scc-userid-1000 resources: - securitycontextconstraints verbs: - use --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: pipelines-scc-userid-1000-rolebinding 6 roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: pipelines-scc-userid-1000-clusterrole subjects: - kind: ServiceAccount name: pipelines-sa-userid-1000",
"oc get task buildah -n openshift-pipelines -o yaml | yq '. |= (del .metadata |= with_entries(select(.key == \"name\" )))' | yq '.kind=\"Task\"' | yq '.metadata.name=\"buildah-as-user\"' | oc create -f -",
"oc edit task buildah-as-user",
"apiVersion: tekton.dev/v1 kind: Task metadata: name: buildah-as-user spec: description: >- Buildah task builds source into a container image and then pushes it to a container registry. Buildah Task builds source into a container image using Project Atomic's Buildah build tool.It uses Buildah's support for building from Dockerfiles, using its buildah bud command.This command executes the directives in the Dockerfile to assemble a container image, then pushes that image to a container registry. params: - name: IMAGE description: Reference of the image buildah will produce. - name: BUILDER_IMAGE description: The location of the buildah builder image. default: registry.redhat.io/rhel8/buildah@sha256:99cae35f40c7ec050fed3765b2b27e0b8bbea2aa2da7c16408e2ca13c60ff8ee - name: STORAGE_DRIVER description: Set buildah storage driver default: vfs - name: DOCKERFILE description: Path to the Dockerfile to build. default: ./Dockerfile - name: CONTEXT description: Path to the directory to use as context. default: . - name: TLSVERIFY description: Verify the TLS on the registry endpoint (for push/pull to a non-TLS registry) default: \"true\" - name: FORMAT description: The format of the built container, oci or docker default: \"oci\" - name: BUILD_EXTRA_ARGS description: Extra parameters passed for the build command when building images. default: \"\" - description: Extra parameters passed for the push command when pushing images. name: PUSH_EXTRA_ARGS type: string default: \"\" - description: Skip pushing the built image name: SKIP_PUSH type: string default: \"false\" results: - description: Digest of the image just built. name: IMAGE_DIGEST type: string workspaces: - name: source steps: - name: build securityContext: runAsUser: 1000 1 image: USD(params.BUILDER_IMAGE) workingDir: USD(workspaces.source.path) script: | echo \"Running as USER ID `id`\" 2 buildah --storage-driver=USD(params.STORAGE_DRIVER) bud USD(params.BUILD_EXTRA_ARGS) --format=USD(params.FORMAT) --tls-verify=USD(params.TLSVERIFY) --no-cache -f USD(params.DOCKERFILE) -t USD(params.IMAGE) USD(params.CONTEXT) [[ \"USD(params.SKIP_PUSH)\" == \"true\" ]] && echo \"Push skipped\" && exit 0 buildah --storage-driver=USD(params.STORAGE_DRIVER) push USD(params.PUSH_EXTRA_ARGS) --tls-verify=USD(params.TLSVERIFY) --digestfile USD(workspaces.source.path)/image-digest USD(params.IMAGE) docker://USD(params.IMAGE) cat USD(workspaces.source.path)/image-digest | tee /tekton/results/IMAGE_DIGEST volumeMounts: - name: varlibcontainers mountPath: /home/build/.local/share/containers 3 volumes: - name: varlibcontainers emptyDir: {}",
"apiVersion: v1 data: Dockerfile: | ARG BASE_IMG=registry.access.redhat.com/ubi9/ubi FROM USDBASE_IMG AS buildah-runner RUN dnf -y update && dnf -y install git && dnf clean all CMD git kind: ConfigMap metadata: name: dockerfile 1 --- apiVersion: tekton.dev/v1 kind: TaskRun metadata: name: buildah-as-user-1000 spec: taskRunTemplate: serviceAccountName: pipelines-sa-userid-1000 2 params: - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/test/buildahuser taskRef: kind: Task name: buildah-as-user workspaces: - configMap: name: dockerfile 3 name: source",
"apiVersion: tekton.dev/v1 kind: Pipeline metadata: name: pipeline-buildah-as-user-1000 spec: params: - name: IMAGE - name: URL workspaces: - name: shared-workspace - name: sslcertdir optional: true tasks: - name: fetch-repository 1 taskRef: resolver: cluster params: - name: kind value: task - name: name value: git-clone - name: namespace value: openshift-pipelines workspaces: - name: output workspace: shared-workspace params: - name: URL value: USD(params.URL) - name: SUBDIRECTORY value: \"\" - name: DELETE_EXISTING value: \"true\" - name: buildah taskRef: name: buildah-as-user 2 runAfter: - fetch-repository workspaces: - name: source workspace: shared-workspace - name: sslcertdir workspace: sslcertdir params: - name: IMAGE value: USD(params.IMAGE) --- apiVersion: tekton.dev/v1 kind: PipelineRun metadata: name: pipelinerun-buildah-as-user-1000 spec: taskRunSpecs: - pipelineTaskName: buildah taskServiceAccountName: pipelines-sa-userid-1000 3 params: - name: URL value: https://github.com/openshift/pipelines-vote-api - name: IMAGE value: image-registry.openshift-image-registry.svc:5000/test/buildahuser pipelineRef: name: pipeline-buildah-as-user-1000 workspaces: - name: shared-workspace 4 volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 100Mi"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html-single/securing_openshift_pipelines/index |
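As a quick check related to the Tekton Chains examples above, the signed annotation on a task run can also be read directly with a JSONPath query. This is only a sketch; <task_run_name> is a placeholder, and the dots in the annotation key must be escaped.

oc get taskrun <task_run_name> -o jsonpath='{.metadata.annotations.chains\.tekton\.dev/signed}'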
2.4. SELinux States and Modes | 2.4. SELinux States and Modes SELinux can be in either the enabled or disabled state. When disabled, only DAC rules are used. When enabled, SELinux can run in one of the following modes: Enforcing: SELinux policy is enforced. SELinux denies access based on SELinux policy rules. Permissive: SELinux policy is not enforced. SELinux does not deny access, but denials are logged for actions that would have been denied if running in enforcing mode. Use the setenforce utility to change between enforcing and permissive mode. Changes made with setenforce do not persist across reboots. To change to enforcing mode, as the Linux root user, run the setenforce 1 command. To change to permissive mode, run the setenforce 0 command. Use the getenforce utility to view the current SELinux mode: Persistent changes to SELinux states and modes are covered in Section 5.4, "Permanent Changes in SELinux States and Modes". | [
"~]# getenforce Enforcing",
"~]# setenforce 0 ~]# getenforce Permissive",
"~]# setenforce 1 ~]# getenforce Enforcing"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-introduction-selinux_modes |
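For the persistent configuration referenced above, the mode is normally set in the /etc/selinux/config file; the following is a sketch of the relevant lines, assuming the default targeted policy. Switching between disabled and an enabled state requires a reboot and may trigger a filesystem relabel.

# /etc/selinux/config
SELINUX=enforcing       # enforcing | permissive | disabled
SELINUXTYPE=targeted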
Chapter 13. Updating Drivers During Installation on IBM Power Systems Servers | Chapter 13. Updating Drivers During Installation on IBM Power Systems Servers In most cases, Red Hat Enterprise Linux already includes drivers for the devices that make up your system. However, if your system contains hardware that has been released very recently, drivers for this hardware might not yet be included. Sometimes, a driver update that provides support for a new device might be available from Red Hat or your hardware vendor on a driver disc that contains rpm packages . Typically, the driver disc is available for download as an ISO image file . Often, you do not need the new hardware during the installation process. For example, if you use a DVD to install to a local hard drive, the installation will succeed even if drivers for your network card are not available. In situations like this, complete the installation and add support for the piece of hardware afterward - refer to Section 35.1.1, "Driver Update rpm Packages" for details of adding this support. In other situations, you might want to add drivers for a device during the installation process to support a particular configuration. For example, you might want to install drivers for a network device or a storage adapter card to give the installer access to the storage devices that your system uses. You can use a driver disc to add this support during installation in one of two ways: place the ISO image file of the driver disc in a location accessible to the installer: on a local hard drive a USB flash drive create a driver disc by extracting the image file onto: a CD a DVD Refer to the instructions for making installation discs in Section 2.1, "Making an Installation DVD" for more information on burning ISO image files to CD or DVD. If Red Hat, your hardware vendor, or a trusted third party told you that you will require a driver update during the installation process, choose a method to supply the update from the methods described in this chapter and test it before beginning the installation. Conversely, do not perform a driver update during installation unless you are certain that your system requires it. Although installing an unnecessary driver update will not cause harm, the presence of a driver on a system for which it was not intended can complicate support. 13.1. Limitations of Driver Updates During Installation Unfortunately, some situations persist in which you cannot use a driver update to provide drivers during installation: Devices already in use You cannot use a driver update to replace drivers that the installation program has already loaded. Instead, you must complete the installation with the drivers that the installation program loaded and update to the new drivers after installation, or, if you need the new drivers for the installation process, consider performing an initial RAM disk driver update - refer to Section 13.2.3, "Preparing an Initial RAM Disk Update" . Devices with an equivalent device available Because all devices of the same type are initialized together, you cannot update drivers for a device if the installation program has loaded drivers for a similar device. For example, consider a system that has two different network adapters, one of which has a driver update available. The installation program will initialize both adapters at the same time, and therefore, you will not be able to use this driver update. 
Again, complete the installation with the drivers loaded by the installation program and update to the new drivers after installation, or use an initial RAM disk driver update. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/chap-Updating_drivers_during_installation_on_IBM_Power_Systems_servers |
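A minimal sketch of the USB flash drive option described above, run as root from any existing Linux system; the image name driver_update.iso, the mount point /mnt, and the device node /dev/sdb1 are assumptions for illustration — confirm the correct device before copying:
# Identify the USB flash drive partition (assumption: /dev/sdb1)
blkid
# Mount it, copy the driver disc ISO image file onto it, and unmount cleanly
mount /dev/sdb1 /mnt
cp driver_update.iso /mnt/
umount /mnt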
Networking Guide | Networking Guide Red Hat OpenStack Platform 16.0 An advanced guide to Red Hat OpenStack Platform Networking OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_guide/index |
Chapter 9. Using config maps with applications | Chapter 9. Using config maps with applications Config maps allow you to decouple configuration artifacts from image content to keep containerized applications portable. The following sections define config maps and how to create and use them. 9.1. Understanding config maps Many applications require configuration by using some combination of configuration files, command line arguments, and environment variables. In OpenShift Container Platform, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Container Platform node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. Additional resources Creating and using config maps 9.2. Use cases: Consuming config maps in pods The following sections describe some uses cases when consuming ConfigMap objects in pods. 9.2.1. Populating environment variables in containers by using config maps You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names. 
As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. As optional, the pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this pod is run, the pod logs will include the following output: Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 9.2.2. Setting command-line arguments for container commands with config maps You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax USD(VAR_NAME) . As an example, consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command using the USD(VAR_NAME) syntax. Sample pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables. 
When this pod is run, the output from the echo command run in the test-container container is as follows: 9.2.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple different options for injecting content into a volume by using config maps. The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be: | [
"kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4",
"apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"SPECIAL_LEVEL_KEY=very log_level=INFO",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never",
"very charm",
"apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never",
"very",
"apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never",
"very"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/building_applications/config-maps |
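As a CLI supplement to the config map chapter above: the special-config and env-config objects used in those examples can also be created directly with oc instead of applying YAML. A sketch, using the default project from the examples:
# Create special-config with the two keys consumed by the sample pods
oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm -n default
# Create env-config, consumed through the envFrom stanza
oc create configmap env-config --from-literal=log_level=INFO -n default
# Verify the result
oc get configmap special-config -n default -o yaml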
function::env_var | function::env_var Name function::env_var - Fetch environment variable from current process Synopsis Arguments name Name of the environment variable to fetch General Syntax env_var:string(name:string) Description Returns the contents of the specified environment variable for the current process. If the variable isn't set, an empty string is returned. | [
"function env_var:string(name:string)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-env-var |
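A usage sketch for the tapset function documented above, run as a SystemTap one-liner; it assumes systemtap is installed with the privileges and debug information it normally requires, which this reference page does not cover. As described above, an empty string is printed if the variable cannot be read in the probe context:
# Print the PATH environment variable of the current process (here, stap itself) and exit
stap -e 'probe begin { printf("PATH=%s\n", env_var("PATH")); exit() }'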
Chapter 23. Proxy [config.openshift.io/v1] | Chapter 23. Proxy [config.openshift.io/v1] Description Proxy holds cluster-wide information on how to configure default proxies for the cluster. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 23.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Spec holds user-settable values for the proxy configuration status object status holds observed values from the cluster. They may not be overridden. 23.1.1. .spec Description Spec holds user-settable values for the proxy configuration Type object Property Type Description httpProxy string httpProxy is the URL of the proxy for HTTP requests. Empty means unset and will not result in an env var. httpsProxy string httpsProxy is the URL of the proxy for HTTPS requests. Empty means unset and will not result in an env var. noProxy string noProxy is a comma-separated list of hostnames and/or CIDRs and/or IPs for which the proxy should not be used. Empty means unset and will not result in an env var. readinessEndpoints array (string) readinessEndpoints is a list of endpoints used to verify readiness of the proxy. trustedCA object trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: | -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- 23.1.2. .spec.trustedCA Description trustedCA is a reference to a ConfigMap containing a CA certificate bundle. The trustedCA field should only be consumed by a proxy validator. The validator is responsible for reading the certificate bundle from the required key "ca-bundle.crt", merging it with the system default trust bundle, and writing the merged trust bundle to a ConfigMap named "trusted-ca-bundle" in the "openshift-config-managed" namespace. 
Clients that expect to make proxy connections must use the trusted-ca-bundle for all HTTPS requests to the proxy, and may use the trusted-ca-bundle for non-proxy HTTPS requests as well. The namespace for the ConfigMap referenced by trustedCA is "openshift-config". Here is an example ConfigMap (in yaml): apiVersion: v1 kind: ConfigMap metadata: name: user-ca-bundle namespace: openshift-config data: ca-bundle.crt: \| -----BEGIN CERTIFICATE----- Custom CA certificate bundle. -----END CERTIFICATE----- Type object Required name Property Type Description name string name is the metadata.name of the referenced config map 23.1.3. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description httpProxy string httpProxy is the URL of the proxy for HTTP requests. httpsProxy string httpsProxy is the URL of the proxy for HTTPS requests. noProxy string noProxy is a comma-separated list of hostnames and/or CIDRs for which the proxy should not be used. 23.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/proxies DELETE : delete collection of Proxy GET : list objects of kind Proxy POST : create a Proxy /apis/config.openshift.io/v1/proxies/{name} DELETE : delete a Proxy GET : read the specified Proxy PATCH : partially update the specified Proxy PUT : replace the specified Proxy /apis/config.openshift.io/v1/proxies/{name}/status GET : read status of the specified Proxy PATCH : partially update status of the specified Proxy PUT : replace status of the specified Proxy 23.2.1. /apis/config.openshift.io/v1/proxies HTTP method DELETE Description delete collection of Proxy Table 23.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Proxy Table 23.2. HTTP responses HTTP code Reponse body 200 - OK ProxyList schema 401 - Unauthorized Empty HTTP method POST Description create a Proxy Table 23.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.4. Body parameters Parameter Type Description body Proxy schema Table 23.5. HTTP responses HTTP code Reponse body 200 - OK Proxy schema 201 - Created Proxy schema 202 - Accepted Proxy schema 401 - Unauthorized Empty 23.2.2. /apis/config.openshift.io/v1/proxies/{name} Table 23.6. 
Global path parameters Parameter Type Description name string name of the Proxy HTTP method DELETE Description delete a Proxy Table 23.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 23.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Proxy Table 23.9. HTTP responses HTTP code Reponse body 200 - OK Proxy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Proxy Table 23.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.11. HTTP responses HTTP code Reponse body 200 - OK Proxy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Proxy Table 23.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.13. 
Body parameters Parameter Type Description body Proxy schema Table 23.14. HTTP responses HTTP code Reponse body 200 - OK Proxy schema 201 - Created Proxy schema 401 - Unauthorized Empty 23.2.3. /apis/config.openshift.io/v1/proxies/{name}/status Table 23.15. Global path parameters Parameter Type Description name string name of the Proxy HTTP method GET Description read status of the specified Proxy Table 23.16. HTTP responses HTTP code Reponse body 200 - OK Proxy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Proxy Table 23.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.18. HTTP responses HTTP code Reponse body 200 - OK Proxy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Proxy Table 23.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 23.20. Body parameters Parameter Type Description body Proxy schema Table 23.21. 
HTTP responses HTTP code Response body 200 - OK Proxy schema 201 - Created Proxy schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/config_apis/proxy-config-openshift-io-v1
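As a practical companion to the API reference above: the cluster-wide proxy is a singleton object named cluster, and in day-to-day use it is read and modified with oc rather than with raw HTTP calls against these endpoints. A sketch with placeholder proxy values:
# Inspect the current spec and observed status
oc get proxy/cluster -o yaml
# Set example proxy values (merge patch; the URLs and noProxy entries are placeholders)
oc patch proxy/cluster --type=merge -p '{"spec":{"httpProxy":"http://proxy.example.com:3128","httpsProxy":"http://proxy.example.com:3128","noProxy":".cluster.local,.svc,localhost,127.0.0.1"}}'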
16.3. USB Devices | 16.3. USB Devices This section gives the commands required for handling USB devices. 16.3.1. Assigning USB Devices to Guest Virtual Machines Most devices such as web cameras, card readers, disk drives, keyboards, mice are connected to a computer using a USB port and cable. There are two ways to pass such devices to a guest virtual machine: Using USB passthrough - this requires the device to be physically connected to the host physical machine that is hosting the guest virtual machine. SPICE is not needed in this case. USB devices on the host can be passed to the guest in the command line or virt-manager . See Section 19.3.2, "Attaching USB Devices to a Guest Virtual Machine" for virt manager directions. Note that the virt-manager directions are not suitable for hot plugging or hot unplugging devices. If you want to hot plug/or hot unplug a USB device, see Procedure 20.4, "Hot plugging USB devices for use by the guest virtual machine" . Using USB re-direction - USB re-direction is best used in cases where there is a host physical machine that is running in a data center. The user connects to his/her guest virtual machine from a local machine or thin client. On this local machine there is a SPICE client. The user can attach any USB device to the thin client and the SPICE client will redirect the device to the host physical machine on the data center so it can be used by the guest virtual machine that is running on the thin client. For instructions via the virt-manager see Section 19.3.3, "USB Redirection" . 16.3.2. Setting a Limit on USB Device Redirection To filter out certain devices from redirection, pass the filter property to -device usb-redir . The filter property takes a string consisting of filter rules, the format for a rule is: Use the value -1 to designate it to accept any value for a particular field. You may use multiple rules on the same command line using | as a separator. Note that if a device matches none of the passed in rules, redirecting it will not be allowed! Example 16.1. An example of limiting redirection with a guest virtual machine Prepare a guest virtual machine. Add the following code excerpt to the guest virtual machine's' domain XML file: Start the guest virtual machine and confirm the setting changes by running the following: Plug a USB device into a host physical machine, and use virt-manager to connect to the guest virtual machine. Click USB device selection in the menu, which will produce the following message: "Some USB devices are blocked by host policy". Click OK to confirm and continue. The filter takes effect. To make sure that the filter captures properly check the USB device vendor and product, then make the following changes in the host physical machine's domain XML to allow for USB redirection. Restart the guest virtual machine, then use virt-viewer to connect to the guest virtual machine. The USB device will now redirect traffic to the guest virtual machine. | [
"<class>:<vendor>:<product>:<version>:<allow>",
"<redirdev bus='usb' type='spicevmc'> <alias name='redir0'/> <address type='usb' bus='0' port='3'/> </redirdev> <redirfilter> <usbdev class='0x08' vendor='0x1234' product='0xBEEF' version='2.0' allow='yes'/> <usbdev class='-1' vendor='-1' product='-1' version='-1' allow='no'/> </redirfilter>",
"ps -ef | grep USDguest_name",
"-device usb-redir,chardev=charredir0,id=redir0, / filter=0x08:0x1234:0xBEEF:0x0200:1|-1:-1:-1:-1:0,bus=usb.0,port=3",
"<redirfilter> <usbdev class='0x08' vendor='0x0951' product='0x1625' version='2.0' allow='yes'/> <usbdev allow='no'/> </redirfilter>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Guest_virtual_machine_device_configuration-USB_devices |
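The USB passthrough path mentioned above (see Procedure 20.4 for hot plugging) comes down to a small host-device XML definition plus virsh attach-device. A sketch reusing the vendor and product IDs from the example filter; the guest name guest1 and file name usb-device.xml are assumptions:
# Confirm the vendor:product pair on the host (the example above uses 0951:1625)
lsusb
# usb-device.xml -- host USB device definition for that vendor/product pair
cat > usb-device.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x0951'/>
    <product id='0x1625'/>
  </source>
</hostdev>
EOF
# Hot plug the device into the running guest, and detach it again when finished
virsh attach-device guest1 usb-device.xml --live
virsh detach-device guest1 usb-device.xml --live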
9.3. Configuration Tools | 9.3. Configuration Tools Red Hat Enterprise Linux provides a number of tools to assist administrators in configuring the system. This section outlines the available tools and provides examples of how they can be used to solve network related performance problems in Red Hat Enterprise Linux 7. However, it is important to keep in mind that network performance problems are sometimes the result of hardware malfunction or faulty infrastructure. Red Hat highly recommends verifying that your hardware and infrastructure are working as expected before using these tools to tune the network stack. Further, some network performance problems are better resolved by altering the application than by reconfiguring your network subsystem. It is generally a good idea to configure your application to perform frequent posix calls, even if this means queuing data in the application space, as this allows data to be stored flexibly and swapped in or out of memory as required. 9.3.1. Tuned Profiles for Network Performance The Tuned service provides a number of different profiles to improve performance in a number of specific use cases. The following profiles can be useful for improving networking performance. latency-performance network-latency network-throughput For more information about these profiles, see Section A.5, "tuned-adm" . 9.3.2. Configuring the Hardware Buffer If a large number of packets are being dropped by the hardware buffer, there are a number of potential solutions. Slow the input traffic Filter incoming traffic, reduce the number of joined multicast groups, or reduce the amount of broadcast traffic to decrease the rate at which the queue fills. For details of how to filter incoming traffic, see the Red Hat Enterprise Linux 7 Security Guide . For details about multicast groups, see the Red Hat Enterprise Linux 7 Clustering documentation. For details about broadcast traffic, see the Red Hat Enterprise Linux 7 System Administrator's Guide , or documentation related to the device you want to configure. Resize the hardware buffer queue Reduce the number of packets being dropped by increasing the size of the queue so that the it does not overflow as easily. You can modify the rx/tx parameters of the network device with the ethtool command: Change the drain rate of the queue Device weight refers to the number of packets a device can receive at one time (in a single scheduled processor access). You can increase the rate at which a queue is drained by increasing its device weight, which is controlled by the dev_weight parameter. This parameter can be temporarily altered by changing the contents of the /proc/sys/net/core/dev_weight file, or permanently altered with sysctl , which is provided by the procps-ng package. Altering the drain rate of a queue is usually the simplest way to mitigate poor network performance. However, increasing the number of packets that a device can receive at one time uses additional processor time, during which no other processes can be scheduled, so this can cause other performance problems. 9.3.3. Configuring Interrupt Queues If analysis reveals high latency, your system may benefit from poll-based rather than interrupt-based packet receipt. 9.3.3.1. Configuring Busy Polling Busy polling helps reduce latency in the network receive path by allowing socket layer code to poll the receive queue of a network device, and disabling network interrupts. This removes delays caused by the interrupt and the resultant context switch. 
However, it also increases CPU utilization. Busy polling also prevents the CPU from sleeping, which can incur additional power consumption. Busy polling is disabled by default. To enable busy polling on specific sockets, do the following. Set sysctl.net.core.busy_poll to a value other than 0 . This parameter controls the number of microseconds to wait for packets on the device queue for socket poll and selects. Red Hat recommends a value of 50 . Add the SO_BUSY_POLL socket option to the socket. To enable busy polling globally, you must also set sysctl.net.core.busy_read to a value other than 0 . This parameter controls the number of microseconds to wait for packets on the device queue for socket reads. It also sets the default value of the SO_BUSY_POLL option. Red Hat recommends a value of 50 for a small number of sockets, and a value of 100 for large numbers of sockets. For extremely large numbers of sockets (more than several hundred), use epoll instead. Busy polling behavior is supported by the following drivers. These drivers are also supported on Red Hat Enterprise Linux 7. bnx2x be2net ixgbe mlx4 myri10ge As of Red Hat Enterprise Linux 7.1, you can also run the following command to check whether a specific device supports busy polling. If this returns busy-poll: on [fixed] , busy polling is available on the device. 9.3.4. Configuring Socket Receive Queues If analysis suggests that packets are being dropped because the drain rate of a socket queue is too slow, there are several ways to alleviate the performance issues that result. Decrease the speed of incoming traffic Decrease the rate at which the queue fills by filtering or dropping packets before they reach the queue, or by lowering the weight of the device. Increase the depth of the application's socket queue If a socket queue that receives a limited amount of traffic in bursts, increasing the depth of the socket queue to match the size of the bursts of traffic may prevent packets from being dropped. 9.3.4.1. Decrease the Speed of Incoming Traffic Filter incoming traffic or lower the network interface card's device weight to slow incoming traffic. For details of how to filter incoming traffic, see the Red Hat Enterprise Linux 7 Security Guide . Device weight refers to the number of packets a device can receive at one time (in a single scheduled processor access). Device weight is controlled by the dev_weight parameter. This parameter can be temporarily altered by changing the contents of the /proc/sys/net/core/dev_weight file, or permanently altered with sysctl , which is provided by the procps-ng package. 9.3.4.2. Increasing Queue Depth Increasing the depth of an application socket queue is typically the easiest way to improve the drain rate of a socket queue, but it is unlikely to be a long-term solution. To increase the depth of a queue, increase the size of the socket receive buffer by making either of the following changes: Increase the value of /proc/sys/net/core/rmem_default This parameter controls the default size of the receive buffer used by sockets. This value must be smaller than or equal to the value of /proc/sys/net/core/rmem_max . Use setsockopt to configure a larger SO_RCVBUF value This parameter controls the maximum size in bytes of a socket's receive buffer. Use the getsockopt system call to determine the current value of the buffer. For further information, see the socket (7) manual page. 9.3.5. 
Configuring Receive-Side Scaling (RSS) Receive-Side Scaling (RSS), also known as multi-queue receive, distributes network receive processing across several hardware-based receive queues, allowing inbound network traffic to be processed by multiple CPUs. RSS can be used to relieve bottlenecks in receive interrupt processing caused by overloading a single CPU, and to reduce network latency. To determine whether your network interface card supports RSS, check whether multiple interrupt request queues are associated with the interface in /proc/interrupts . For example, if you are interested in the p1p1 interface: The preceding output shows that the NIC driver created 6 receive queues for the p1p1 interface ( p1p1-0 through p1p1-5 ). It also shows how many interrupts were processed by each queue, and which CPU serviced the interrupt. In this case, there are 6 queues because by default, this particular NIC driver creates one queue per CPU, and this system has 6 CPUs. This is a fairly common pattern among NIC drivers. Alternatively, you can check the output of ls -1 /sys/devices/*/*/ device_pci_address /msi_irqs after the network driver is loaded. For example, if you are interested in a device with a PCI address of 0000:01:00.0 , you can list the interrupt request queues of that device with the following command: RSS is enabled by default. The number of queues (or the CPUs that should process network activity) for RSS are configured in the appropriate network device driver. For the bnx2x driver, it is configured in num_queues . For the sfc driver, it is configured in the rss_cpus parameter. Regardless, it is typically configured in /sys/class/net/ device /queues/ rx-queue / , where device is the name of the network device (such as eth1 ) and rx-queue is the name of the appropriate receive queue. When configuring RSS, Red Hat recommends limiting the number of queues to one per physical CPU core. Hyper-threads are often represented as separate cores in analysis tools, but configuring queues for all cores including logical cores such as hyper-threads has not proven beneficial to network performance. When enabled, RSS distributes network processing equally between available CPUs based on the amount of processing each CPU has queued. However, you can use the ethtool --show-rxfh-indir and --set-rxfh-indir parameters to modify how network activity is distributed, and weight certain types of network activity as more important than others. The irqbalance daemon can be used in conjunction with RSS to reduce the likelihood of cross-node memory transfers and cache line bouncing. This lowers the latency of processing network packets. 9.3.6. Configuring Receive Packet Steering (RPS) Receive Packet Steering (RPS) is similar to RSS in that it is used to direct packets to specific CPUs for processing. However, RPS is implemented at the software level, and helps to prevent the hardware queue of a single network interface card from becoming a bottleneck in network traffic. RPS has several advantages over hardware-based RSS: RPS can be used with any network interface card. It is easy to add software filters to RPS to deal with new protocols. RPS does not increase the hardware interrupt rate of the network device. However, it does introduce inter-processor interrupts. 
RPS is configured per network device and receive queue, in the /sys/class/net/ device /queues/ rx-queue /rps_cpus file, where device is the name of the network device (such as eth0 ) and rx-queue is the name of the appropriate receive queue (such as rx-0 ). The default value of the rps_cpus file is 0 . This disables RPS, so the CPU that handles the network interrupt also processes the packet. To enable RPS, configure the appropriate rps_cpus file with the CPUs that should process packets from the specified network device and receive queue. The rps_cpus files use comma-delimited CPU bitmaps. Therefore, to allow a CPU to handle interrupts for the receive queue on an interface, set the value of their positions in the bitmap to 1. For example, to handle interrupts with CPUs 0, 1, 2, and 3, set the value of rps_cpus to f , which is the hexadecimal value for 15. In binary representation, 15 is 00001111 (1+2+4+8). For network devices with single transmit queues, best performance can be achieved by configuring RPS to use CPUs in the same memory domain. On non-NUMA systems, this means that all available CPUs can be used. If the network interrupt rate is extremely high, excluding the CPU that handles network interrupts may also improve performance. For network devices with multiple queues, there is typically no benefit to configuring both RPS and RSS, as RSS is configured to map a CPU to each receive queue by default. However, RPS may still be beneficial if there are fewer hardware queues than CPUs, and RPS is configured to use CPUs in the same memory domain. 9.3.7. Configuring Receive Flow Steering (RFS) Receive Flow Steering (RFS) extends RPS behavior to increase the CPU cache hit rate and thereby reduce network latency. Where RPS forwards packets based solely on queue length, RFS uses the RPS back end to calculate the most appropriate CPU, then forwards packets based on the location of the application consuming the packet. This increases CPU cache efficiency. RFS is disabled by default. To enable RFS, you must edit two files: /proc/sys/net/core/rps_sock_flow_entries Set the value of this file to the maximum expected number of concurrently active connections. We recommend a value of 32768 for moderate server loads. All values entered are rounded up to the nearest power of 2 in practice. /sys/class/net/ device /queues/ rx-queue /rps_flow_cnt Replace device with the name of the network device you wish to configure (for example, eth0 ), and rx-queue with the receive queue you wish to configure (for example, rx-0 ). Set the value of this file to the value of rps_sock_flow_entries divided by N , where N is the number of receive queues on a device. For example, if rps_flow_entries is set to 32768 and there are 16 configured receive queues, rps_flow_cnt should be set to 2048 . For single-queue devices, the value of rps_flow_cnt is the same as the value of rps_sock_flow_entries . Data received from a single sender is not sent to more than one CPU. If the amount of data received from a single sender is greater than a single CPU can handle, configure a larger frame size to reduce the number of interrupts and therefore the amount of processing work for the CPU. Alternatively, consider NIC offload options or faster CPUs. Consider using numactl or taskset in conjunction with RFS to pin applications to specific cores, sockets, or NUMA nodes. This can help prevent packets from being processed out of order. 9.3.8. Configuring Accelerated RFS Accelerated RFS boosts the speed of RFS by adding hardware assistance. 
Like RFS, packets are forwarded based on the location of the application consuming the packet. Unlike traditional RFS, however, packets are sent directly to a CPU that is local to the thread consuming the data: either the CPU that is executing the application, or a CPU local to that CPU in the cache hierarchy. Accelerated RFS is only available if the following conditions are met: Accelerated RFS must be supported by the network interface card. Accelerated RFS is supported by cards that export the ndo_rx_flow_steer() netdevice function. ntuple filtering must be enabled. Once these conditions are met, CPU to queue mapping is deduced automatically based on traditional RFS configuration. That is, CPU to queue mapping is deduced based on the IRQ affinities configured by the driver for each receive queue. Refer to Section 9.3.7, "Configuring Receive Flow Steering (RFS)" for details on configuring traditional RFS. Red Hat recommends using accelerated RFS wherever using RFS is appropriate and the network interface card supports hardware acceleration. | [
"ethtool --set-ring devname value",
"ethtool -k device | grep \"busy-poll\"",
"egrep 'CPU|p1p1' /proc/interrupts CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 89: 40187 0 0 0 0 0 IR-PCI-MSI-edge p1p1-0 90: 0 790 0 0 0 0 IR-PCI-MSI-edge p1p1-1 91: 0 0 959 0 0 0 IR-PCI-MSI-edge p1p1-2 92: 0 0 0 3310 0 0 IR-PCI-MSI-edge p1p1-3 93: 0 0 0 0 622 0 IR-PCI-MSI-edge p1p1-4 94: 0 0 0 0 0 2475 IR-PCI-MSI-edge p1p1-5",
"ls -1 /sys/devices/*/*/0000:01:00.0/msi_irqs 101 102 103 104 105 106 107 108 109"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/sect-red_hat_enterprise_linux-performance_tuning_guide-networking-configuration_tools |
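Pulling the tunables above together, a sketch of applying them to a single interface as root; the interface name eth0, the ring size 4096, and the CPU mask f (CPUs 0-3) are illustrative assumptions, while 50, 32768, and 2048 are the values recommended in the text (2048 assumes 16 receive queues). None of this persists across a reboot unless written to sysctl and udev or tuned configuration:
# 9.3.2 - enlarge the NIC hardware receive/transmit rings
ethtool -G eth0 rx 4096 tx 4096
# 9.3.3.1 - enable busy polling
sysctl -w net.core.busy_poll=50
sysctl -w net.core.busy_read=50
# 9.3.6 - RPS: let CPUs 0-3 process packets for receive queue rx-0
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
# 9.3.7 - RFS: global flow table, then the per-queue flow count
sysctl -w net.core.rps_sock_flow_entries=32768
echo 2048 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt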
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/making-open-source-more-inclusive |
Chapter 3. Updating the overcloud | Chapter 3. Updating the overcloud After you update the undercloud, you can update the overcloud by running the overcloud and container image preparation commands, and updating your nodes. The control plane API is fully available during a minor update. Prerequisites You have updated the undercloud node to the latest version. For more information, see Chapter 2, Updating the undercloud . If you use a local set of core templates in your stack user home directory, ensure that you update the templates and use the recommended workflow in Understanding heat templates in the Customizing your Red Hat OpenStack Platform deployment guide. You must update the local copy before you upgrade the overcloud. Add the GlanceApiInternal service to your Controller role: This is the service for the internal instance of the Image service (glance) API to provide location data to administrators and other services that require it, such as the Block Storage service (cinder) and the Compute service (nova). Procedure To update the overcloud, you must complete the following procedures: Section 3.1, "Running the overcloud update preparation" Section 3.2, "Running the container image preparation" Section 3.3, "Optional: Updating the ovn-controller container on all overcloud servers" Section 3.4, "Updating the container image names of Pacemaker-controlled services" Section 3.5, "Updating all Controller nodes" Section 3.7, "Updating all Compute nodes" Section 3.8, "Updating all HCI Compute nodes" Section 3.9, "Updating all DistributedComputeHCI nodes" Section 3.10, "Updating all Ceph Storage nodes" Section 3.11, "Updating the Red Hat Ceph Storage cluster" Section 3.13, "Performing online database updates" Section 3.14, "Re-enabling fencing in the overcloud" 3.1. Running the overcloud update preparation To prepare the overcloud for the update process, you must run the openstack overcloud update prepare command, which updates the overcloud plan to Red Hat OpenStack Platform (RHOSP) 17.1 and prepares the nodes for the update. Prerequisites If you use a Ceph subscription and have configured director to use the overcloud-minimal image for Ceph storage nodes, you must ensure that in the roles_data.yaml role definition file, the rhsm_enforce parameter is set to False . If you rendered custom NIC templates, you must regenerate the templates with the updated version of the openstack-tripleo-heat-templates collection to avoid incompatibility with the overcloud version. For more information about custom NIC templates, see Defining custom network interface templates in the Customizing your Red Hat OpenStack Platform deployment guide. Note For distributed compute node (edge) architectures with OVN deployments, you must complete this procedure for each stack with Compute, DistributedCompute, or DistributedComputeHCI nodes before proceeding with section Updating the ovn-controller container on all overcloud servers . Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update preparation command: USD openstack overcloud update prepare \ --templates \ --stack <stack_name> \ -r <roles_data_file> \ -n <network_data_file> \ -e <environment_file> \ -e <environment_file> \ ... Include the following options relevant to your environment: If the name of your overcloud stack is different to the default name overcloud , include the --stack option in the update preparation command and replace <stack_name> with the name of your stack. 
If you use your own custom roles, use the -r option to include the custom roles ( <roles_data_file> ) file. If you use custom networks, use the -n option to include your composable network in the ( <network_data_file> ) file. If you deploy a high availability cluster, include the --ntp-server option in the update preparation command, or include the NtpServer parameter and value in your environment file. Include any custom configuration environment files with the -e option. Wait until the update preparation process completes. 3.2. Running the container image preparation Before you can update the overcloud, you must prepare all container image configurations that are required for your environment and pull the latest RHOSP 17.1 container images to your undercloud. To complete the container image preparation, you must run the openstack overcloud external-update run command against tasks that have the container_image_prepare tag. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the openstack overcloud external-update run command against tasks that have the container_image_prepare tag: USD openstack overcloud external-update run --stack <stack_name> --tags container_image_prepare If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. 3.3. Optional: Updating the ovn-controller container on all overcloud servers If you deployed your overcloud with the Modular Layer 2 Open Virtual Network mechanism driver (ML2/OVN), update the ovn-controller container to the latest RHOSP 17.1 version. The update occurs on every overcloud server that runs the ovn-controller container. The following procedure updates the ovn-controller containers on servers that are assigned the Compute role before it updates the ovn-northd service on servers that are assigned the Controller role. For distributed compute node (edge) architectures, you must complete this procedure for each stack with Compute, DistributedCompute, or DistributedComputeHCI nodes before proceeding with section Updating all Controller nodes . If you accidentally updated the ovn-northd service before following this procedure, you might not be able to connect to your virtual machines or create new virtual machines or virtual networks. The following procedure restores connectivity. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the openstack overcloud external-update run command against the tasks that have the ovn tag: USD openstack overcloud external-update run --stack <stack_name> --tags ovn If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the ovn-controller container update completes. 3.4. Updating the container image names of Pacemaker-controlled services If you update your system from Red Hat Openstack Platform (RHOSP) 17 to RHOSP 17.1, you must update the container image names of the Pacemaker-controlled services. You must perform this update to migrate to the new image naming schema of the Pacemaker-controlled services. If you update your system from a version of RHOSP 17.1 to the latest version of RHOSP 17.1, you do not need to update the container image names of the pacemaker-controlled services. 
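Before running the image-name update procedure below, it can help to record which images the Pacemaker-controlled services are currently using so the change is easy to verify afterwards. A sketch to run on one Controller node; the grep filter is illustrative, not an exhaustive list of HA services:
# List HA service containers and their image references
sudo podman ps --format '{{.Names}} {{.Image}}' | grep -E 'galera|rabbitmq|redis|haproxy|ovn'
# Cluster-wide view of the same bundle resources
sudo pcs status --full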
Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: USD source ~/stackrc Run the openstack overcloud external-update run command with the ha_image_update tag: USD openstack overcloud external-update run --stack <stack_name> --tags ha_image_update If the name of your undercloud stack is different from the default stack name undercloud, set your stack name with the --stack option and replace <stack_name> with the name of your stack. 3.5. Updating all Controller nodes Update all the Controller nodes to the latest RHOSP 17.1 version. Run the openstack overcloud update run command and include the --limit Controller option to restrict operations to the Controller nodes only. The control plane API is fully available during the minor update. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update command: USD openstack overcloud update run --stack <stack_name> --limit Controller If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the Controller node update completes. 3.6. Updating composable roles with non-Pacemaker services Update composable roles with non-Pacemaker services to the latest RHOSP 17.1 version. Update the nodes in each composable role one at a time. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update command: USD openstack overcloud update run --stack <stack_name> --limit <non_pcs_role_0> USD openstack overcloud update run --stack <stack_name> --limit <non_pcs_role_1> USD openstack overcloud update run --stack <stack_name> --limit <non_pcs_role_2> If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Replace <non_pcs_role_0> , <non_pcs_role_1> , and <non_pcs_role_2> with the names of your composable roles with non-Pacemaker services. Wait until the update completes. 3.7. Updating all Compute nodes Update all Compute nodes to the latest RHOSP 17.1 version. To update Compute nodes, run the openstack overcloud update run command and include the --limit Compute option to restrict operations to the Compute nodes only. Parallelization considerations When you update a large number of Compute nodes, to improve performance, you can run multiple update tasks in the background and configure each task to update a separate group of 20 nodes. For example, if you have 80 Compute nodes in your deployment, you can run the following commands to update the Compute nodes in parallel: USD openstack overcloud update run -y --limit 'Compute[0:19]' > update-compute-0-19.log 2>&1 & USD openstack overcloud update run -y --limit 'Compute[20:39]' > update-compute-20-39.log 2>&1 & USD openstack overcloud update run -y --limit 'Compute[40:59]' > update-compute-40-59.log 2>&1 & USD openstack overcloud update run -y --limit 'Compute[60:79]' > update-compute-60-79.log 2>&1 & This method of partitioning the nodes space is random and you do not have control over which nodes are updated. The selection of nodes is based on the inventory file that you generate when you run the tripleo-ansible-inventory command. 
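The inventory mentioned above can be generated and inspected ahead of time. A sketch run as the stack user on the undercloud; the option names are the ones commonly shown in director documentation — treat them as assumptions and confirm with tripleo-ansible-inventory --help on your version:
# Write a static Ansible inventory for the overcloud stack, then review the Compute hosts in it
tripleo-ansible-inventory --stack overcloud --static-yaml-inventory ~/overcloud-inventory.yaml
less ~/overcloud-inventory.yaml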
To update specific Compute nodes, list the nodes that you want to update in a batch separated by a comma: Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update command: USD openstack overcloud update run --stack <stack_name> --limit Compute If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the Compute node update completes. 3.8. Updating all HCI Compute nodes Update the Hyperconverged Infrastructure (HCI) Compute nodes to the latest RHOSP 17.1 version. Prerequisites On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean : USD sudo cephadm -- shell ceph status If the Ceph cluster is healthy, it returns a status of HEALTH_OK . If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR . For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update command: USD openstack overcloud update run --stack <stack_name> --limit ComputeHCI If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the node update completes. 3.9. Updating all DistributedComputeHCI nodes Update roles specific to distributed compute node architecture. When you upgrade distributed compute nodes, update DistributedComputeHCI nodes first, and then update DistributedComputeHCIScaleOut nodes. Prerequisites On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean : USD sudo cephadm -- shell ceph status If the Ceph cluster is healthy, it returns a status of HEALTH_OK . If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR . For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or Red Hat Ceph Storage 6 Troubleshooting Guide . Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update command: If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the DistributedComputeHCI node update completes. Use the same process to update DistributedComputeHCIScaleOut nodes. 3.10. Updating all Ceph Storage nodes Update the Red Hat Ceph Storage nodes to the latest RHOSP 17.1 version. Important RHOSP 17.1 is supported on RHEL 9.2. However, hosts that are mapped to the Ceph Storage role update to the latest major RHEL release. For more information, see Red Hat Ceph Storage: Supported configurations . Prerequisites On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean : USD sudo cephadm -- shell ceph status If the Ceph cluster is healthy, it returns a status of HEALTH_OK . If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR . 
For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide . Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Run the update command: USD openstack overcloud update run --stack <stack_name> --limit CephStorage If the name of your overcloud stack is different from the default stack name overcloud , set your stack name with the --stack option and replace <stack_name> with the name of your stack. Wait until the node update completes. 3.11. Updating the Red Hat Ceph Storage cluster Update the director-deployed Red Hat Ceph Storage cluster to the latest version that is compatible with Red Hat OpenStack Platform (RHOSP) 17.1 by using the cephadm command. Update your Red Hat Ceph Storage cluster if one of the following scenarios applies to your environment: If you upgraded from RHOSP 16.2 to RHOSP 17.1, you run Red Hat Ceph Storage 5, and you are updating to a newer version of Red Hat Ceph Storage 5. If you newly deployed RHOSP 17.1, you run Red Hat Ceph Storage 6, and you are updating to a newer version of Red Hat Ceph Storage 6. Prerequisites Complete the container image preparation in Section 3.2, "Running the container image preparation" . Procedure Log in to a Controller node. Check the health of the cluster: USD sudo cephadm shell -- ceph health Note If the Ceph Storage cluster is healthy, the command returns a result of HEALTH_OK . If the command returns a different result, review the status of the cluster and contact Red Hat support before continuing the update. For more information, see Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage Upgrade Guide or Upgrade a Red Hat Ceph Storage cluster using cephadm in the Red Hat Ceph Storage 6 Upgrade Guide . Optional: Check which images should be included in the Ceph Storage cluster update: USD openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print USD2}' Update the cluster to the latest Red Hat Ceph Storage version: USD sudo cephadm shell -- ceph orch upgrade start --image <image_name>: <version> Replace <image_name> with the name of the Ceph Storage cluster image. Replace <version> with the target version to which you are updating the Ceph Storage cluster. Wait until the Ceph Storage container update completes. To monitor the status of the update, run the following command: sudo cephadm shell -- ceph orch upgrade status 3.12. Upgrading to Red Hat Ceph Storage 7 Red Hat Ceph Storage 6 is deployed by default with Red Hat OpenStack Platform 17.1. When deployment is complete, Red Hat Ceph Storage can be upgraded to release 7. For information on this process, and procedures to complete the upgrade, see the section Director-deployed Red Hat Ceph Storage environments in the Upgrading Red Hat Ceph Storage 6 to 7 chapter of Framework for upgrades (16.2 to 17.1) . 3.13. Performing online database updates Some overcloud components require an online update or migration of their databases tables. To perform online database updates, run the openstack overcloud external-update run command against tasks that have the online_upgrade tag. Online database updates apply to the following components: OpenStack Block Storage (cinder) OpenStack Compute (nova) Procedure Log in to the undercloud host as the stack user. 
Source the stackrc undercloud credentials file: Run the openstack overcloud external-update run command against tasks that use the online_upgrade tag: USD openstack overcloud external-update run --stack <stack_name> --tags online_upgrade 3.14. Re-enabling fencing in the overcloud Before you updated the overcloud, you disabled fencing as described in Disabling fencing in the overcloud . After you update the overcloud, re-enable fencing to protect your data if a node fails. Procedure Log in to the undercloud host as the stack user. Source the stackrc undercloud credentials file: Log in to a Controller node and run the Pacemaker command to re-enable fencing: USD ssh tripleo-admin@<controller_ip> "sudo pcs property set stonith-enabled=true" Replace <controller_ip> with the IP address of a Controller node. You can find the IP addresses of your Controller nodes with the openstack server list command. If you use SBD fencing, reset the watchdog timer device interval to the original value that it had before you disabled fencing: Replace <interval> with the original value of the watchdog timer device, for example, 10 . In the fencing.yaml environment file, set the EnableFencing parameter to true . Additional Resources Fencing Controller nodes with STONITH | [
"OS::TripleO::Services::GlanceApiInternal",
"source ~/stackrc",
"openstack overcloud update prepare --templates --stack <stack_name> -r <roles_data_file> -n <network_data_file> -e <environment_file> -e <environment_file>",
"source ~/stackrc",
"openstack overcloud external-update run --stack <stack_name> --tags container_image_prepare",
"source ~/stackrc",
"openstack overcloud external-update run --stack <stack_name> --tags ovn",
"source ~/stackrc",
"openstack overcloud external-update run --stack <stack_name> --tags ha_image_update",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit Controller",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit <non_pcs_role_0> openstack overcloud update run --stack <stack_name> --limit <non_pcs_role_1> openstack overcloud update run --stack <stack_name> --limit <non_pcs_role_2>",
"openstack overcloud update run -y --limit 'Compute[0:19]' > update-compute-0-19.log 2>&1 & openstack overcloud update run -y --limit 'Compute[20:39]' > update-compute-20-39.log 2>&1 & openstack overcloud update run -y --limit 'Compute[40:59]' > update-compute-40-59.log 2>&1 & openstack overcloud update run -y --limit 'Compute[60:79]' > update-compute-60-79.log 2>&1 &",
"openstack overcloud update run --limit <Compute0>,<Compute1>,<Compute2>,<Compute3>",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit Compute",
"sudo cephadm -- shell ceph status",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit ComputeHCI",
"sudo cephadm -- shell ceph status",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit DistributedComputeHCI",
"sudo cephadm -- shell ceph status",
"source ~/stackrc",
"openstack overcloud update run --stack <stack_name> --limit CephStorage",
"sudo cephadm shell -- ceph health",
"openstack tripleo container image list -f value | awk -F '//' '/ceph/ {print USD2}'",
"sudo cephadm shell -- ceph orch upgrade start --image <image_name>: <version>",
"sudo cephadm shell -- ceph orch upgrade status",
"source ~/stackrc",
"openstack overcloud external-update run --stack <stack_name> --tags online_upgrade",
"source ~/stackrc",
"ssh tripleo-admin@<controller_ip> \"sudo pcs property set stonith-enabled=true\"",
"pcs property set stonith-watchdog-timeout=<interval>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/performing_a_minor_update_of_red_hat_openstack_platform/assembly_updating-the-overcloud_keeping-updated |
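A minimal shell sketch for watching the Red Hat Ceph Storage upgrade described in section 3.11 until it finishes. This is not part of the official procedure: it assumes that cephadm is available on the node where you started the upgrade and that ceph orch upgrade status reports an in_progress field, which can vary between Ceph releases.
# Poll the Ceph orchestrator every 60 seconds while the upgrade is still in progress.
while sudo cephadm shell -- ceph orch upgrade status | grep -q '"in_progress": true'; do
    echo "Ceph upgrade still in progress, waiting..."
    sleep 60
done
echo "Ceph orchestrator no longer reports an upgrade in progress."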
Chapter 9. Publishing Certificates and CRLs | Chapter 9. Publishing Certificates and CRLs Red Hat Certificate System includes a customizable publishing framework for the Certificate Manager, enabling certificate authorities to publish certificates, certificate revocation lists (CRLs), and other certificate-related objects to any of the supported repositories: an LDAP-compliant directory, a flat file, and an online validation authority. This chapter explains how to configure a Certificate Manager to publish certificates and CRLs to a file, to a directory, and to the Online Certificate Status Manager. The general process to configure publishing is as follows: Configure publishing to a file, LDAP directory, or OCSP responder. There can be a single publisher or multiple publishers, depending on how many locations will be used. The locations can be split by certificates and CRLs or narrower definitions, such as certificate type. Rules determine which type to publish and to what location by being associated with the publisher. Set rules to determine what certificates are published to the locations. Any rule which a certificate or CRL matches is activated, so the same certificate can be published to a file and to an LDAP directory by matching a file-based rule and matching a directory-based rule. Rules can be set for each object type: CA certificates, CRLs, user certificates, and cross-pair certificates. Disable all rules that will not be used. Configure CRLs. CRLs must be configured before they can be published. See Chapter 7, Revoking Certificates and Issuing CRLs . Enable publishing after setting up publishers, mappers, and rules. Once publishing is enabled, the server starts publishing immediately. If the publishers, mappers, and rules are not completely configured, publishing may not work correctly or at all. 9.1. About Publishing The Certificate System is capable of publishing certificates to a file or an LDAP directory and of publishing CRLs to a file, an LDAP directory, or to an OCSP responder. For additional flexibility, specific types of certificates or CRLs can be published to a single format or all three. For example, CA certificates can be published only to a directory and not to a file, and user certificates can be published to both a file and a directory. Note An OCSP responder only provides information about CRLs; certificates are not published to an OCSP responder. Different publishing locations can be set for certificates files and CRL files, as well as different publishing locations for different types of certificates files or different types of CRL files. Similarly, different types of certificates and different types of CRLs can be published to different places in a directory. For example, certificates for users from the West Coast division of a company can be published in one branch of the directory, while certificates for users in the East Coast division can be published to another branch in the directory. When publishing is enabled, every time a certificate or a CRL is issued, updated, or revoked, the publishing system is invoked. The certificate or CRL is evaluated by the rules to see if it matches the type and predicate set in the rule. The type specifies if the object is a CRL, CA certificate, or any other certificate. The predicate sets more criteria for the type of object being evaluated. For example, it can specify user certificates, or it can specify West Coast user certificates. 
To use predicates, a value needs to be entered in the predicate field of the publishing rule, and a corresponding value (although formatted somewhat differently) needs to be contained in the certificate or certificate request to match. The value in the certificate or certificate request may be derived from information in the certificate, such as the type of certificate, or may be derived from a hidden value that is placed in the request form. If no predicate is set, all certificates of that type are considered to match. For example, all CRLs match the rule if CRL is set as the type. Every rule that is matched publishes the certificate or CRL according to the method and location specified in that rule. A given certificate or CRL can match no rules, one rule, more than one rule, or all rules. The publishing system attempts to match every certificate and CRL issued against all rules. When a rule is matched, the certificate or CRL is published according to the method and location specified in the publisher associated with that rule. For example, if a rule matches all certificates issued to users, and the rule has a publisher that publishes to a file in the location /etc/CS/certificates , the certificate is published as a file to that location. If another rule matches all certificates issued to users, and the rule has a publisher that publishes to the LDAP attribute userCertificate;binary attribute, the certificate is published to the directory specified when LDAP publishing was enabled in this attribute in the user's entry. For rules that specify to publish to a file, a new file is created when either a certificate or a CRL is issued in the stipulated directory. For rules that specify to publish to an LDAP directory, the certificate or CRL is published to the entry specified in the directory, in the attribute specified. The CA overwrites the values for any published certificate or CRL attribute with any subsequent certificate or CRL. Simply put, any existing certificate or CRL that is already published is replaced by the certificate or CRL. For rules that specify to publish to an Online Certificate Status Manager, a CRL is published to this manager. Certificates are not published to an Online Certificate Status Manager. For LDAP publishing, the location of the user's entry needs to be determined. Mappers are used to determine the entry to which to publish. The mappers can contain an exact DN for the entry, some variable that associates information that can be gotten from the certificate to create the DN, or enough information to search the directory for a unique attribute or set of attributes in the entry to ascertain the correct DN for the entry. When a certificate is revoked, the server uses the publishing rules to locate and delete the corresponding certificate from the LDAP directory or from the filesystem. When a certificate expires, the server can remove that certificate from the configured directory. The server does not do this automatically; the server must be configured to run the appropriate job. For details, see Chapter 13, Setting Automated Jobs . Setting up publishing involves configuring publishers, mappers, and rules. 9.1.1. Publishers Publishers specify the location to which certificates and CRLs are published. When publishing to a file, publishers specify the filesystem publishing directory. When publishing to an LDAP directory, publishers specify the attribute in the directory that stores the certificate or CRL; a mapper is used to determine the DN of the entry. 
For every DN, a different formula is set for deriving that DN. The location of the LDAP directory is specified when LDAP publishing is enabled. When publishing a CRL to an OCSP responder, publishers specify the hostname and URI of the Online Certificate Status Manager. 9.1.2. Mappers Mappers are only used in LDAP publishing. Mappers construct the DN for an entry based on information from the certificate or the certificate request. The server has information from the subject name of the certificate and the certificate request and needs to know how to use this information to create a DN for that entry. The mapper provides a formula for converting the information available either to a DN or to some unique information that can be searched in the directory to obtain a DN for the entry. 9.1.3. Rules Rules for file, LDAP, and OCSP publishing tell the server whether and how a certificate or CRL is to be published. A rule first defines what is to be published, a certificate or CRL matching certain characteristics, by setting a type and predicate for the rule. A rule then specifies the publishing method and location by being associated with a publisher and, for LDAP publishing, with a mapper. Rules can be as simple or complex as necessary for the PKI deployment and are flexible enough to accommodate different scenarios. 9.1.4. Publishing to Files The server can publish certificates and CRLs to flat files, which can then be imported into any repository, such as a relational database. When the server is configured to publish certificates and CRLs to file, the published files are DER-encoded binary blobs, base-64 encoded text blobs, or both. For each certificate the server issues, it creates a file that contains the certificate in either DER-encoded or base-64 encoded format. Each file is named either cert- serial_number .der or cert- serial_number .b64 . The serial_number is the serial number of the certificate contained in the file. For example, the filename for a DER-encoded certificate with the serial number 1234 is cert-1234.der . Every time the server generates a CRL, it creates a file that contains the new CRL in either DER-encoded or base-64 encoded format. Each file is named either issuing_point_name-this_update .der or issuing_point_name-this_update .b64 , depending on the format. The issuing_point_name identifies the CRL issuing point which published the CRL, and this_update specifies the value derived from the time-dependent update value for the CRL contained in the file. For example, the filename for a DER-encoded CRL with the value This Update: Friday January 28 15:36:00 PST 2020 , is MasterCRL-20200128-153600.der . 9.1.5. OCSP Publishing There are two forms of Certificate System OCSP services, an internal service for the Certificate Manager and the Online Certificate Status Manager. The internal service checks the internal database of the Certificate Manager to report on the status of a certificate. The internal service is not set for publishing; it uses the certificates stored in its internal database to determine the status of a certificate. The Online Certificate Status Manager checks CRLs sent to it by Certificate Manager. A publisher is set for each location a CRL is sent and one rule for each type of CRL sent. For detailed information on both OCSP services, see Section 7.6, "Using the Online Certificate Status Protocol (OCSP) Responder" . 9.1.6. 
LDAP Publishing In LDAP publishing , the server publishes the certificates, CRLs, and other certificate-related objects to a directory using LDAP or LDAPS. The branch of the directory to which it publishes is called the publishing directory . For each certificate the server issues, it creates a blob that contains the certificate in its DER-encoded format in the specified attribute of the user's entry. The certificate is published as a DER encoded binary blob. Every time the server generates a CRL, it creates a blob that contains the new CRL in its DER-encoded format in the specified attribute of the entry for the CA. The server can publish certificates and CRLs to an LDAP-compliant directory using the LDAP protocol or LDAP over SSL (LDAPS) protocol, and applications can retrieve the certificates and CRLs over HTTP. Support for retrieving certificates and CRLs over HTTP enables some browsers to import the latest CRL automatically from the directory that receives regular updates from the server. The browser can then use the CRL to check all certificates automatically to ensure that they have not been revoked. For LDAP publishing to work, the user entry must be present in the LDAP directory. If the server and publishing directory become out of sync for some reason, privileged users (administrators and agents) can also manually initiate the publishing process. For instructions, see Section 9.12.2, "Manually Updating the CRL in the Directory" . | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Publishing |
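After LDAP publishing runs, one way to confirm that a certificate reached the intended entry is to query the publishing attribute directly with an LDAP client. The following sketch is illustrative only: the directory URL, bind DN, and entry DN are hypothetical, and the attribute name (here userCertificate;binary) depends on how the publisher is configured.
# Query a hypothetical user entry for the certificate attribute that the publisher writes.
ldapsearch -x -H ldap://directory.example.com:389 \
  -D "cn=Directory Manager" -W \
  -b "uid=jdoe,ou=People,dc=example,dc=com" \
  "(objectClass=*)" "userCertificate;binary"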
Chapter 7. Reviewing your Ansible configuration with automation content navigator | Chapter 7. Reviewing your Ansible configuration with automation content navigator As a content creator, you can review your Ansible configuration with automation content navigator and interactively delve into settings. 7.1. Reviewing your Ansible configuration from automation content navigator You can review your Ansible configuration with the automation content navigator text-based user interface in interactive mode and delve into the settings. Automation content navigator pulls in the results from an accessible Ansible configuration file, or returns the defaults if no configuration file is present. Prerequisites You have authenticated to the Red Hat registry if you need to access additional automation execution environments. See Red Hat Container Registry Authentication for details. Procedure Start automation content navigator USD ansible-navigator Optional: type ansible-navigator config from the command line to access the Ansible configuration settings. Review the Ansible configuration. :config Some values reflect settings from within the automation execution environments needed for the automation execution environments to function. These display as non-default settings you cannot set in your Ansible configuration file. Type the number corresponding to the setting you want to delve into, or type :<number> for numbers greater than 9. ANSIBLE COW ACCEPTLIST (current: ['bud-frogs', 'bunny', 'cheese']) (default: 0│--- 1│current: 2│- bud-frogs 3│- bunny 4│- cheese 5│default: 6│- bud-frogs 7│- bunny 8│- cheese 9│- daemon The output shows the current setting as well as the default . Note the source in this example is env since the setting comes from the automation execution environments. Verification Review the configuration output. Additional resources ansible-config . Introduction to Ansible configuration . | [
"ansible-navigator",
":config",
"ANSIBLE COW ACCEPTLIST (current: ['bud-frogs', 'bunny', 'cheese']) (default: 0│--- 1│current: 2│- bud-frogs 3│- bunny 4│- cheese 5│default: 6│- bud-frogs 7│- bunny 8│- cheese 9│- daemon"
] | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_content_navigator/assembly-review-config-navigator_ansible-navigator |
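If you want the configuration review to run against the same automation execution environment every time, you can pin it in an ansible-navigator settings file instead of passing command-line options. The following is a minimal sketch only: the file name ansible-navigator.yml in the project directory, the exact keys, and the image reference are assumptions that you should check against your installed ansible-navigator version.
# ansible-navigator.yml -- minimal settings sketch; replace the image with one available in your registry
ansible-navigator:
  execution-environment:
    enabled: true
    image: registry.redhat.io/ansible-automation-platform-25/ee-supported-rhel8:latest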
Server Guide | Server Guide Red Hat build of Keycloak 24.0 Red Hat Customer Content Services | [
"bin/kc.[sh|bat] start --db-url-host=mykeycloakdb",
"export KC_DB_URL_HOST=mykeycloakdb",
"db-url-host=mykeycloakdb",
"bin/kc.[sh|bat] start --help",
"db-url-host=USD{MY_DB_HOST}",
"db-url-host=USD{MY_DB_HOST:mydb}",
"bin/kc.[sh|bat] --config-file=/path/to/myconfig.conf start",
"keytool -importpass -alias kc.db-password -keystore keystore.p12 -storepass keystorepass -storetype PKCS12 -v",
"bin/kc.[sh|bat] start --config-keystore=/path/to/keystore.p12 --config-keystore-password=storepass --config-keystore-type=PKCS12",
"bin/kc.[sh|bat] start-dev",
"bin/kc.[sh|bat] start",
"bin/kc.[sh|bat] build <build-options>",
"bin/kc.[sh|bat] build --help",
"bin/kc.[sh|bat] build --db=postgres",
"bin/kc.[sh|bat] start --optimized <configuration-options>",
"bin/kc.[sh|bat] build --db=postgres",
"db-url-host=keycloak-postgres db-username=keycloak db-password=change_me hostname=mykeycloak.acme.com https-certificate-file",
"bin/kc.[sh|bat] start --optimized",
"export JAVA_OPTS_APPEND=\"-Djava.net.preferIPv4Stack=true\"",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder Enable health and metrics support ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true Configure a database vendor ENV KC_DB=postgres WORKDIR /opt/keycloak for demonstration purposes only, please make sure to use proper certificates in production instead RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname \"CN=server\" -alias server -ext \"SAN:c=DNS:localhost,IP:127.0.0.1\" -keystore conf/server.keystore RUN /opt/keycloak/bin/kc.sh build FROM registry.redhat.io/rhbk/keycloak-rhel9:24 COPY --from=builder /opt/keycloak/ /opt/keycloak/ change these values to point to a running postgres instance ENV KC_DB=postgres ENV KC_DB_URL=<DBURL> ENV KC_DB_USERNAME=<DBUSERNAME> ENV KC_DB_PASSWORD=<DBPASSWORD> ENV KC_HOSTNAME=localhost ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]",
"A example build step that downloads a JAR file from a URL and adds it to the providers directory FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder Add the provider JAR file to the providers directory ADD --chown=keycloak:keycloak --chmod=644 <MY_PROVIDER_JAR_URL> /opt/keycloak/providers/myprovider.jar Context: RUN the build command RUN /opt/keycloak/bin/kc.sh build",
"FROM registry.access.redhat.com/ubi9 AS ubi-micro-build COPY mycertificate.crt /etc/pki/ca-trust/source/anchors/mycertificate.crt RUN update-ca-trust FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /etc/pki /etc/pki",
"FROM registry.access.redhat.com/ubi9 AS ubi-micro-build RUN mkdir -p /mnt/rootfs RUN dnf install --installroot /mnt/rootfs <package names go here> --releasever 9 --setopt install_weak_deps=false --nodocs -y && dnf --installroot /mnt/rootfs clean all && rpm --root /mnt/rootfs -e --nodeps setup FROM registry.redhat.io/rhbk/keycloak-rhel9 COPY --from=ubi-micro-build /mnt/rootfs /",
"build . -t mykeycloak",
"run --name mykeycloak -p 8443:8443 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me mykeycloak start --optimized",
"run --name mykeycloak -p 3000:8443 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me mykeycloak start --optimized --hostname-port=3000",
"run --name mykeycloak -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:24 start-dev",
"run --name mykeycloak -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me registry.redhat.io/rhbk/keycloak-rhel9:24 start --db=postgres --features=token-exchange --db-url=<JDBC-URL> --db-username=<DB-USER> --db-password=<DB-PASSWORD> --https-key-store-file=<file> --https-key-store-password=<password>",
"setting the admin username -e KEYCLOAK_ADMIN=<admin-user-name> setting the initial password -e KEYCLOAK_ADMIN_PASSWORD=change_me",
"run --name keycloak_unoptimized -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me -v /path/to/realm/data:/opt/keycloak/data/import registry.redhat.io/rhbk/keycloak-rhel9:24 start-dev --import-realm",
"run --name mykeycloak -p 8080:8080 -m 1g -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=change_me -e JAVA_OPTS_KC_HEAP=\"-XX:MaxHeapFreeRatio=30 -XX:MaxRAMPercentage=65\" registry.redhat.io/rhbk/keycloak-rhel9:24 start-dev",
"bin/kc.[sh|bat] start --https-certificate-file=/path/to/certfile.pem --https-certificate-key-file=/path/to/keyfile.pem",
"bin/kc.[sh|bat] start --https-key-store-file=/path/to/existing-keystore-file",
"bin/kc.[sh|bat] start --https-key-store-password=<value>",
"bin/kc.[sh|bat] start --https-protocols=<protocol>[,<protocol>]",
"bin/kc.[sh|bat] start --https-port=<port>",
"bin/kc.[sh|bat] start --https-trust-store-file=/path/to/file",
"bin/kc.[sh|bat] start --https-trust-store-password=<value>",
"bin/kc.[sh|bat] start --https-client-auth=<none|request|required>",
"bin/kc.[sh|bat] start --hostname=<host>",
"bin/kc.[sh|bat] start --hostname-url=<scheme>://<host>:<port>/<path>",
"bin/kc.[sh|bat] start --hostname=<value> --hostname-strict-backchannel=true",
"bin/kc.[sh|bat] start --hostname-admin=<host>",
"bin/kc.[sh|bat] start --hostname-admin-url=<scheme>://<host>:<port>/<path>",
"bin/kc.[sh|bat] start --hostname=mykeycloak --http-enabled=true --proxy-headers=forwarded|xforwarded",
"bin/kc.[sh|bat] start --hostname-url=https://mykeycloak",
"bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-strict-backchannel=true",
"bin/kc.[sh|bat] start --hostname-url=https://mykeycloak:8989",
"bin/kc.[sh|bat] start --proxy-headers=forwarded --https-port=8543 --hostname-url=https://my-keycloak.org:8443 --hostname-admin-url=https://admin.my-keycloak.org:8443",
"bin/kc.[sh|bat] start --hostname=mykeycloak --hostname-debug=true",
"bin/kc.[sh|bat] start --proxy-headers forwarded",
"bin/kc.[sh|bat] start --proxy <mode>",
"bin/kc.[sh|bat] start --proxy-headers=forwarded|xforwarded --hostname-strict=false",
"bin/kc.[sh|bat] start --spi-sticky-session-encoder-infinispan-should-attach-route=false",
"bin/kc.[sh|bat] build --spi-x509cert-lookup-provider=<provider>",
"bin/kc.[sh|bat] start --spi-x509cert-lookup-<provider>-ssl-client-cert=SSL_CLIENT_CERT --spi-x509cert-lookup-<provider>-ssl-cert-chain-prefix=CERT_CHAIN --spi-x509cert-lookup-<provider>-certificate-chain-length=10",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:24 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc11/23.3.0.23.09/ojdbc11-23.3.0.23.09.jar /opt/keycloak/providers/ojdbc11.jar ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/oracle/database/nls/orai18n/23.3.0.23.09/orai18n-23.3.0.23.09.jar /opt/keycloak/providers/orai18n.jar Setting the build parameter for the database: ENV KC_DB=oracle Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:24 ADD --chown=keycloak:keycloak --chmod=644 https://repo1.maven.org/maven2/com/microsoft/sqlserver/mssql-jdbc/12.4.2.jre11/mssql-jdbc-12.4.2.jre11.jar /opt/keycloak/providers/mssql-jdbc.jar Setting the build parameter for the database: ENV KC_DB=mssql Add all other build parameters needed, for example enable health and metrics: ENV KC_HEALTH_ENABLED=true ENV KC_METRICS_ENABLED=true To be able to use the image with the Red Hat build of Keycloak Operator, it needs to be optimized, which requires Red Hat build of Keycloak's build step: RUN /opt/keycloak/bin/kc.sh build",
"bin/kc.[sh|bat] start --db postgres --db-url-host mypostgres --db-username myuser --db-password change_me",
"bin/kc.[sh|bat] start --db postgres --db-url jdbc:postgresql://mypostgres/mydatabase",
"bin/kc.[sh|bat] start --db postgres --db-driver=my.Driver",
"show server_encoding;",
"create database keycloak with encoding 'UTF8';",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:24 ADD --chmod=0666 https://github.com/awslabs/aws-advanced-jdbc-wrapper/releases/download/2.3.1/aws-advanced-jdbc-wrapper-2.3.1.jar /opt/keycloak/providers/aws-advanced-jdbc-wrapper.jar",
"bin/kc.[sh|bat] start --spi-dblock-jpa-lock-wait-timeout 900",
"bin/kc.[sh|bat] build --db=<vendor> --transaction-xa-enabled=false",
"bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-strategy=manual",
"bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-initialize-empty=false",
"bin/kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-export=<path>/<file.sql>",
"bin/kc.[sh|bat] build --cache=ispn",
"bin/kc.[sh|bat] build --cache-config-file=my-cache-file.xml",
"bin/kc.[sh|bat] build --cache-stack=<stack>",
"bin/kc.[sh|bat] build --cache-stack=<ec2|google|azure>",
"<jgroups> <stack name=\"my-encrypt-udp\" extends=\"udp\"> <SSL_KEY_EXCHANGE keystore_name=\"server.jks\" keystore_password=\"password\" stack.combine=\"INSERT_AFTER\" stack.position=\"VERIFY_SUSPECT2\"/> <ASYM_ENCRYPT asym_keylength=\"2048\" asym_algorithm=\"RSA\" change_key_on_coord_leave = \"false\" change_key_on_leave = \"false\" use_external_key_exchange = \"true\" stack.combine=\"INSERT_BEFORE\" stack.position=\"pbcast.NAKACK2\"/> </stack> </jgroups> <cache-container name=\"keycloak\"> <transport lock-timeout=\"60000\" stack=\"my-encrypt-udp\"/> </cache-container>",
"<cache-container name=\"keycloak\" statistics=\"true\"> </cache-container>",
"<local-cache name=\"realms\" statistics=\"true\"> </local-cache>",
"bin/kc.[sh|bat] start --spi-connections-http-client-default-<configurationoption>=<value>",
"HTTPS_PROXY=https://www-proxy.acme.com:8080 NO_PROXY=google.com,login.facebook.com",
".*\\.(google|googleapis)\\.com",
"bin/kc.[sh|bat] start --spi-connections-http-client-default-proxy-mappings=\"'*\\\\\\.(google|googleapis)\\\\\\.com;http://www-proxy.acme.com:8080'\"",
".*\\.(google|googleapis)\\.com;http://proxyuser:[email protected]:8080",
"All requests to Google APIs use http://www-proxy.acme.com:8080 as proxy .*\\.(google|googleapis)\\.com;http://www-proxy.acme.com:8080 All requests to internal systems use no proxy .*\\.acme\\.com;NO_PROXY All other requests use http://fallback:8080 as proxy .*;http://fallback:8080",
"bin/kc.[sh|bat] start --truststore-paths=/opt/truststore/myTrustStore.pfx,/opt/other-truststore/myOtherTrustStore.pem",
"bin/kc.[sh|bat] build --features=\"<name>[,<name>]\"",
"bin/kc.[sh|bat] build --features=\"docker,token-exchange\"",
"bin/kc.[sh|bat] build --features=\"preview\"",
"bin/kc.[sh|bat] build --features-disabled=\"<name>[,<name>]\"",
"bin/kc.[sh|bat] build --features-disabled=\"impersonation\"",
"spi-<spi-id>-<provider-id>-<property>=<value>",
"spi-connections-http-client-default-connection-pool-size=10",
"bin/kc.[sh|bat] start --spi-connections-http-client-default-connection-pool-size=10",
"bin/kc.[sh|bat] build --spi-email-template-provider=mycustomprovider",
"bin/kc.[sh|bat] build --spi-email-template-mycustomprovider-enabled=true",
"bin/kc.[sh|bat] start --log-level=<root-level>",
"bin/kc.[sh|bat] start --log-level=\"<root-level>,<org.category1>:<org.category1-level>\"",
"bin/kc.[sh|bat] start --log-level=\"INFO,org.hibernate:debug,org.hibernate.hql.internal.ast:info\"",
"bin/kc.[sh|bat] start --log=\"<handler1>,<handler2>\"",
"bin/kc.[sh|bat] start --log-console-format=\"'<format>'\"",
"bin/kc.[sh|bat] start --log-console-format=\"'%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p [%c{3.}] (%t) %s%e%n'\"",
"bin/kc.[sh|bat] start --log-console-output=json",
"{\"timestamp\":\"2022-02-25T10:31:32.452+01:00\",\"sequence\":8442,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"io.quarkus\",\"level\":\"INFO\",\"message\":\"Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.253s. Listening on: http://0.0.0.0:8080\",\"threadName\":\"main\",\"threadId\":1,\"mdc\":{},\"ndc\":\"\",\"hostName\":\"host-name\",\"processName\":\"QuarkusEntryPoint\",\"processId\":36946}",
"bin/kc.[sh|bat] start --log-console-output=default",
"2022-03-02 10:36:50,603 INFO [io.quarkus] (main) Keycloak 18.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 3.615s. Listening on: http://0.0.0.0:8080",
"bin/kc.[sh|bat] start --log-console-color=<false|true>",
"bin/kc.[sh|bat] start --log=\"console,file\"",
"bin/kc.[sh|bat] start --log=\"console,file\" --log-file=<path-to>/<your-file.log>",
"bin/kc.[sh|bat] start --log-file-format=\"<pattern>\"",
"fips-mode-setup --check",
"fips-mode-setup --enable",
"keytool -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword -keystore USDKEYCLOAK_HOME/conf/server.keystore -alias localhost -dname CN=localhost -keypass passwordpassword",
"securerandom.strongAlgorithms=PKCS11:SunPKCS11-NSS-FIPS",
"keytool -keystore USDKEYCLOAK_HOME/conf/server.keystore -storetype bcfks -providername BCFIPS -providerclass org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -provider org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider -providerpath USDKEYCLOAK_HOME/providers/bc-fips-*.jar -alias localhost -genkeypair -sigalg SHA512withRSA -keyalg RSA -storepass passwordpassword -dname CN=localhost -keypass passwordpassword -J-Djava.security.properties=/tmp/kc.keystore-create.java.security",
"bin/kc.[sh|bat] start --features=fips --hostname=localhost --https-key-store-password=passwordpassword --log-level=INFO,org.keycloak.common.crypto:TRACE,org.keycloak.crypto:TRACE",
"KC(BCFIPS version 1.000203 Approved Mode, FIPS-JVM: enabled) version 1.0 - class org.keycloak.crypto.fips.KeycloakFipsSecurityProvider,",
"--spi-password-hashing-pbkdf2-sha256-max-padding-length=14",
"fips.provider.7=XMLDSig",
"-Djava.security.properties=/location/to/your/file/kc.java.security",
"cp USDKEYCLOAK_HOME/providers/bc-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/ cp USDKEYCLOAK_HOME/providers/bctls-fips-*.jar USDKEYCLOAK_HOME/bin/client/lib/",
"echo \"keystore.type=bcfks fips.keystore.type=bcfks\" > /tmp/kcadm.java.security export KC_OPTS=\"-Djava.security.properties=/tmp/kcadm.java.security\"",
"FROM registry.redhat.io/rhbk/keycloak-rhel9:24 as builder ADD files /tmp/files/ WORKDIR /opt/keycloak RUN cp /tmp/files/*.jar /opt/keycloak/providers/ RUN cp /tmp/files/keycloak-fips.keystore.* /opt/keycloak/conf/server.keystore RUN cp /tmp/files/kc.java.security /opt/keycloak/conf/ RUN /opt/keycloak/bin/kc.sh build --features=fips --fips-mode=strict FROM registry.redhat.io/rhbk/keycloak-rhel9:24 COPY --from=builder /opt/keycloak/ /opt/keycloak/ ENTRYPOINT [\"/opt/keycloak/bin/kc.sh\"]",
"{ \"status\": \"UP\", \"checks\": [] }",
"{ \"status\": \"UP\", \"checks\": [ { \"name\": \"Keycloak database connections health check\", \"status\": \"UP\" } ] }",
"bin/kc.[sh|bat] build --health-enabled=true",
"curl --head -fsS http://localhost:8080/health/ready",
"bin/kc.[sh|bat] build --health-enabled=true --metrics-enabled=true",
"bin/kc.[sh|bat] start --metrics-enabled=true",
"HELP base_gc_total Displays the total number of collections that have occurred. This attribute lists -1 if the collection count is undefined for this collector. TYPE base_gc_total counter base_gc_total{name=\"G1 Young Generation\",} 14.0 HELP jvm_memory_usage_after_gc_percent The percentage of long-lived heap pool used after the last GC event, in the range [0..1] TYPE jvm_memory_usage_after_gc_percent gauge jvm_memory_usage_after_gc_percent{area=\"heap\",pool=\"long-lived\",} 0.0 HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset TYPE jvm_threads_peak_threads gauge jvm_threads_peak_threads 113.0 HELP agroal_active_count Number of active connections. These connections are in use and not available to be acquired. TYPE agroal_active_count gauge agroal_active_count{datasource=\"default\",} 0.0 HELP base_memory_maxHeap_bytes Displays the maximum amount of memory, in bytes, that can be used for memory management. TYPE base_memory_maxHeap_bytes gauge base_memory_maxHeap_bytes 1.6781410304E10 HELP process_start_time_seconds Start time of the process since unix epoch. TYPE process_start_time_seconds gauge process_start_time_seconds 1.675188449054E9 HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time TYPE system_load_average_1m gauge system_load_average_1m 4.005859375",
"bin/kc.[sh|bat] export --help",
"bin/kc.[sh|bat] export --dir <dir>",
"bin/kc.[sh|bat] export --dir <dir> --users different_files --users-per-file 100",
"bin/kc.[sh|bat] export --file <file>",
"bin/kc.[sh|bat] export [--dir|--file] <path> --realm my-realm",
"bin/kc.[sh|bat] import --help",
"bin/kc.[sh|bat] import --dir <dir>",
"bin/kc.[sh|bat] import --dir <dir> --override false",
"bin/kc.[sh|bat] import --file <file>",
"bin/kc.[sh|bat] start --import-realm",
"{ \"realm\": \"USD{MY_REALM_NAME}\", \"enabled\": true, }",
"bin/kc.[sh|bat] build --vault=file",
"bin/kc.[sh|bat] build --vault=keystore",
"bin/kc.[sh|bat] start --vault-dir=/my/path",
"USD{vault.<realmname>_<secretname>}",
"keytool -importpass -alias <realm-name>_<alias> -keystore keystore.p12 -storepass keystorepassword",
"bin/kc.[sh|bat] start --vault-file=/path/to/keystore.p12 --vault-pass=<value> --vault-type=<value>",
"sso__realm_ldap__credential"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html-single/server_guide//configuration~ |
Chapter 5. Using Operator Lifecycle Manager in disconnected environments | Chapter 5. Using Operator Lifecycle Manager in disconnected environments For OpenShift Container Platform clusters in disconnected environments, Operator Lifecycle Manager (OLM) by default cannot access the Red Hat-provided OperatorHub sources hosted on remote registries because those remote sources require full internet connectivity. However, as a cluster administrator you can still enable your cluster to use OLM in a disconnected environment if you have a workstation that has full internet access. The workstation, which requires full internet access to pull the remote OperatorHub content, is used to prepare local mirrors of the remote sources, and push the content to a mirror registry. The mirror registry can be located on a bastion host, which requires connectivity to both your workstation and the disconnected cluster, or a completely disconnected, or airgapped , host, which requires removable media to physically move the mirrored content to the disconnected environment. This guide describes the following process that is required to enable OLM in disconnected environments: Disable the default remote OperatorHub sources for OLM. Use a workstation with full internet access to create and push local mirrors of the OperatorHub content to a mirror registry. Configure OLM to install and manage Operators from local sources on the mirror registry instead of the default remote sources. After enabling OLM in a disconnected environment, you can continue to use your unrestricted workstation to keep your local OperatorHub sources updated as newer versions of Operators are released. Important While OLM can manage Operators from local sources, the ability for a given Operator to run successfully in a disconnected environment still depends on the Operator itself meeting the following criteria: List any related images, or other container images that the Operator might require to perform their functions, in the relatedImages parameter of its ClusterServiceVersion (CSV) object. Reference all specified images by a digest (SHA) and not by a tag. You can search software on the Red Hat Ecosystem Catalog for a list of Red Hat Operators that support running in disconnected mode by filtering with the following selections: Type Containerized application Deployment method Operator Infrastructure features Disconnected Additional resources Red Hat-provided Operator catalogs Enabling your Operator for restricted network environments 5.1. Prerequisites You are logged in to your OpenShift Container Platform cluster as a user with cluster-admin privileges. If you are using OLM in a disconnected environment on IBM Z(R), you must have at least 12 GB allocated to the directory where you place your registry. 5.2. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. You can then configure OperatorHub to use local catalog sources. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. 
From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 5.3. Mirroring an Operator catalog For instructions about mirroring Operator catalogs for use with disconnected clusters, see Mirroring Operator catalogs for use with disconnected clusters . Important As of OpenShift Container Platform 4.11, the default Red Hat-provided Operator catalog releases in the file-based catalog format. The default Red Hat-provided Operator catalogs for OpenShift Container Platform 4.6 through 4.10 released in the deprecated SQLite database format. The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. The features are still supported and must be used for catalogs that use the deprecated SQLite database format. Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune , do not work with the file-based catalog format. For more information about working with file-based catalogs, see Operator Framework packaging format , Managing custom catalogs , and Mirroring images for a disconnected installation using the oc-mirror plugin . 5.4. Adding a catalog source to a cluster Adding a catalog source to an OpenShift Container Platform cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface. Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. Prerequisites You built and pushed an index image to a registry. You have access to the cluster as a user with the cluster-admin role. Procedure Create a CatalogSource object that references your index image. If you used the oc adm catalog mirror command to mirror your catalog to a target registry, you can use the generated catalogSource.yaml file in your manifests directory as a starting point. Modify the following to your specifications and save it as a catalogSource.yaml file: apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.17 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m 1 If you mirrored content to local files before uploading to a registry, remove any backslash ( / ) characters from the metadata.name field to avoid an "invalid resource name" error when you create the object. 2 If you want the catalog source to be available globally to users in all namespaces, specify the openshift-marketplace namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace. 3 Specify the value of legacy or restricted . If the field is not set, the default value is legacy . In a future OpenShift Container Platform release, it is planned that the default value will be restricted . If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy . 4 Specify your index image. 
If you specify a tag after the image name, for example :v4.17 , the catalog source pod uses an image pull policy of Always , meaning the pod always pulls the image prior to starting the container. If you specify a digest, for example @sha256:<id> , the image pull policy is IfNotPresent , meaning the pod pulls the image only if it does not already exist on the node. 5 Specify your name or an organization name publishing the catalog. 6 Catalog sources can automatically check for new versions to keep up to date. Use the file to create the CatalogSource object: USD oc apply -f catalogSource.yaml Verify that the following resources are created successfully. Check the pods: USD oc get pods -n openshift-marketplace Example output NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h Check the catalog source: USD oc get catalogsource -n openshift-marketplace Example output NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s Check the package manifest: USD oc get packagemanifest -n openshift-marketplace Example output NAME CATALOG AGE jaeger-product My Operator Catalog 93s You can now install the Operators from the OperatorHub page on your OpenShift Container Platform web console. Additional resources Accessing images for Operators from private registries Image template for custom catalog sources Image pull policy 5.5. Next steps Updating installed Operators | [
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: name: my-operator-catalog 1 namespace: openshift-marketplace 2 spec: sourceType: grpc grpcPodConfig: securityContextConfig: <security_mode> 3 image: <registry>/<namespace>/redhat-operator-index:v4.17 4 displayName: My Operator Catalog publisher: <publisher_name> 5 updateStrategy: registryPoll: 6 interval: 30m",
"oc apply -f catalogSource.yaml",
"oc get pods -n openshift-marketplace",
"NAME READY STATUS RESTARTS AGE my-operator-catalog-6njx6 1/1 Running 0 28s marketplace-operator-d9f549946-96sgr 1/1 Running 0 26h",
"oc get catalogsource -n openshift-marketplace",
"NAME DISPLAY TYPE PUBLISHER AGE my-operator-catalog My Operator Catalog grpc 5s",
"oc get packagemanifest -n openshift-marketplace",
"NAME CATALOG AGE jaeger-product My Operator Catalog 93s"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/disconnected_environments/olm-restricted-networks |
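After the catalog source is added, Operators from it are installed in the usual way, for example by creating a Subscription object. The following sketch is illustrative rather than part of the official procedure: it reuses the my-operator-catalog source from the example above, while the package name, channel, and target namespace are assumptions that you must replace with values from your own catalog, for example as listed by oc get packagemanifest.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: jaeger-product            # assumed package name, taken from the example output above
  namespace: openshift-operators  # assumed namespace with a matching OperatorGroup
spec:
  channel: stable                 # assumed channel; check the package manifest for the real channels
  name: jaeger-product
  source: my-operator-catalog     # the CatalogSource created earlier
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic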
Part VII. Patching and upgrading Red Hat Decision Manager | Part VII. Patching and upgrading Red Hat Decision Manager Starting with release 7.13, the distribution files for Red Hat Decision Manager are replaced with Red Hat Process Automation Manager files. You can apply updates to Red Hat Decision Manager release 7.12 and earlier or Red Hat Process Automation Manager 7.13 as they become available in the Red Hat Customer Portal to keep your distribution current with the latest enhancements and fixes. Red Hat provides update tools and product notifications for new product releases so you can more readily apply helpful updates to your installation environment. Prerequisites You have a Red Hat Customer Portal account. Red Hat Decision Manager or Red Hat Process Automation Manager is installed. For installation options, see Planning a Red Hat Decision Manager installation . Note If you are using Red Hat Decision Manager 7.10 and you want to upgrade to Red Hat Process Automation Manager 7.13, see Patching and upgrading Red Hat Decision Manager 7.10 . | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/assembly-patching-and-upgrading |
A.4. Explanation of settings in the New Virtual Disk and Edit Virtual Disk windows | A.4. Explanation of settings in the New Virtual Disk and Edit Virtual Disk windows Note The following tables do not include information on whether a power cycle is required because that information is not applicable to these scenarios. Table A.21. New Virtual Disk and Edit Virtual Disk settings: Image Field Name Description Size(GB) The size of the new virtual disk in GB. Alias The name of the virtual disk, limited to 40 characters. Description A description of the virtual disk. This field is recommended but not mandatory. Interface The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but you can install them from the virtio-win ISO image. IDE and SATA devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. Data Center The data center in which the virtual disk will be available. Storage Domain The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given data center, and also shows the total space and currently available space in the storage domain. Allocation Policy The provisioning policy for the new virtual disk. Preallocated allocates the entire size of the disk on the storage domain at the time the virtual disk is created. The virtual size and the actual size of a preallocated disk are the same. Preallocated virtual disks take more time to create than thin provisioned virtual disks, but have better read and write performance. Preallocated virtual disks are recommended for servers and other I/O intensive virtual machines. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible. Thin Provision allocates 1 GB at the time the virtual disk is created and sets a maximum limit on the size to which the disk can grow. The virtual size of the disk is the maximum limit; the actual size of the disk is the space that has been allocated so far. Thin provisioned disks are faster to create than preallocated disks and allow for storage over-commitment. Thin provisioned virtual disks are recommended for desktops. Disk Profile The disk profile assigned to the virtual disk. Disk profiles define the maximum amount of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are defined on the storage domain level based on storage quality of service entries created for data centers. Activate Disk(s) Activate the virtual disk immediately after creation. This option is not available when creating a floating disk. Wipe After Delete Allows you to enable enhanced security for deletion of sensitive material when the virtual disk is deleted. Bootable Enables the bootable flag on the virtual disk. Shareable Attaches the virtual disk to more than one virtual machine at a time. Read Only Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. This option is not available when creating a floating disk. Enable Discard Allows you to shrink a thin provisioned disk while the virtual machine is up. 
For block storage, the underlying storage device must support discard calls, and the option cannot be used with Wipe After Delete unless the underlying storage supports the discard_zeroes_data property. For file storage, the underlying file system and the block device must support discard calls. If all requirements are met, SCSI UNMAP commands issued from guest virtual machines are passed on by QEMU to the underlying storage to free up the unused space. The Direct LUN settings can be displayed in either Targets > LUNs or LUNs > Targets . Targets > LUNs sorts available LUNs according to the host on which they are discovered, whereas LUNs > Targets displays a single list of LUNs. Table A.22. New Virtual Disk and Edit Virtual Disk settings: Direct LUN Field Name Description Alias The name of the virtual disk, limited to 40 characters. Description A description of the virtual disk. This field is recommended but not mandatory. By default the last 4 characters of the LUN ID are inserted into the field. The default behavior can be configured by setting the PopulateDirectLUNDiskDescriptionWithLUNId configuration key to the appropriate value using the engine-config command. The configuration key can be set to -1 for the full LUN ID to be used, or 0 for this feature to be ignored. A positive integer populates the description with the corresponding number of characters of the LUN ID. Interface The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and later include these drivers. Windows does not include these drivers, but you can install them from the virtio-win ISO image. IDE and SATA devices do not require special drivers. The interface type can be updated after stopping all virtual machines that the disk is attached to. Data Center The data center in which the virtual disk will be available. Host The host on which the LUN will be mounted. You can select any host in the data center. Storage Type The type of external LUN to add. You can select from either iSCSI or Fibre Channel . Discover Targets This section can be expanded when you are using iSCSI external LUNs and Targets > LUNs is selected. Address - The host name or IP address of the target server. Port - The port by which to attempt a connection to the target server. The default port is 3260. User Authentication - The iSCSI server requires User Authentication. The User Authentication field is visible when you are using iSCSI external LUNs. CHAP user name - The user name of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected. CHAP password - The password of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected. Activate Disk(s) Activate the virtual disk immediately after creation. This option is not available when creating a floating disk. Bootable Allows you to enable the bootable flag on the virtual disk. Shareable Allows you to attach the virtual disk to more than one virtual machine at a time. Read Only Allows you to set the disk as read-only. The same disk can be attached as read-only to one virtual machine, and as rewritable to another. This option is not available when creating a floating disk. Enable Discard Allows you to shrink a thin provisioned disk while the virtual machine is up. With this option enabled, SCSI UNMAP commands issued from guest virtual machines are passed on by QEMU to the underlying storage to free up the unused space. 
Enable SCSI Pass-Through Available when the Interface is set to VirtIO-SCSI . Selecting this check box enables passthrough of a physical SCSI device to the virtual disk. A VirtIO-SCSI interface with SCSI passthrough enabled automatically includes SCSI discard support. Read Only is not supported when this check box is selected. When this check box is not selected, the virtual disk uses an emulated SCSI device. Read Only is supported on emulated VirtIO-SCSI disks. Allow Privileged SCSI I/O Available when the Enable SCSI Pass-Through check box is selected. Selecting this check box enables unfiltered SCSI Generic I/O (SG_IO) access, allowing privileged SG_IO commands on the disk. This is required for persistent reservations. Using SCSI Reservation Available when the Enable SCSI Pass-Through and Allow Privileged SCSI I/O check boxes are selected. Selecting this check box disables migration for any virtual machine using this disk, to prevent virtual machines that are using SCSI reservation from losing access to the disk. Fill in the fields in the Discover Targets section and click Discover to discover the target server. You can then click the Login All button to list the available LUNs on the target server and, using the radio buttons next to each LUN, select the LUN to add. Using LUNs directly as virtual machine hard disk images removes a layer of abstraction between your virtual machines and their data. The following considerations must be made when using a direct LUN as a virtual machine hard disk image: Live storage migration of direct LUN hard disk images is not supported. Direct LUN disks are not included in virtual machine exports. Direct LUN disks are not included in virtual machine snapshots. Important Mounting a journaled file system requires read-write access. Using the Read Only option is not appropriate for virtual disks that contain such file systems (e.g. EXT3 , EXT4 , or XFS ). | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/add_virtual_disk_dialogue_entries
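The PopulateDirectLUNDiskDescriptionWithLUNId configuration key mentioned above is changed with the engine-config tool on the Manager machine. The following is a minimal sketch of what that looks like; the configuration key name comes from the text above, but the exact invocation style and the service restart step are assumptions about a typical Manager setup rather than a verified procedure for your environment.
# Show the full LUN ID in the Description field of new direct LUN disks
engine-config -s PopulateDirectLUNDiskDescriptionWithLUNId=-1
# Check the value currently in effect
engine-config -g PopulateDirectLUNDiskDescriptionWithLUNId
# A restart of the engine service is usually needed before the change applies
systemctl restart ovirt-engine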
Installing on AWS | Installing on AWS OpenShift Container Platform 4.15 Installing OpenShift Container Platform on Amazon Web Services Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_aws/index |
Chapter 1. What is GitOps? GitOps is a declarative way to implement continuous deployment for cloud native applications. You can use GitOps to create repeatable processes for managing OpenShift Container Platform clusters and applications across multi-cluster Kubernetes environments. GitOps handles and automates complex deployments at a fast pace, saving time during deployment and release cycles. The GitOps workflow pushes an application through development, testing, staging, and production. GitOps either deploys a new application or updates an existing one, so you only need to update the repository; GitOps automates everything else. GitOps is a set of practices that use Git pull requests to manage infrastructure and application configurations. In GitOps, the Git repository is the only source of truth for system and application configuration. This Git repository contains a declarative description of the infrastructure you need in your specified environment and contains an automated process to make your environment match the described state. Also, it contains the entire state of the system so that the trail of changes to the system state is visible and auditable. By using GitOps, you resolve the issues of infrastructure and application configuration sprawl. GitOps defines infrastructure and application definitions as code. Then, it uses this code to manage multiple workspaces and clusters to simplify the creation of infrastructure and application configurations. By following the principles of the code, you can store the configuration of clusters and applications in Git repositories, and then follow the Git workflow to apply these repositories to your chosen clusters. You can apply the core principles of developing and maintaining software in a Git repository to the creation and management of your cluster and application configuration files. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.12/html/understanding_openshift_gitops/what-is-gitops
Chapter 2. Subscription models for JBoss EAP on Microsoft Azure You can choose between two subscription models for deploying JBoss EAP on Azure: bring your own subscription (BYOS) and pay-as-you-go (PAYG). The following are the differences between the two offerings: BYOS If you already have an existing Red Hat subscription, you can use it for deploying JBoss EAP on Azure with the BYOS model. For more information, see About Red Hat Cloud Access . PAYG If you do not have an existing Red Hat subscription, you can use the PAYG model and pay based on the computing resources that you use. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_red_hat_jboss_enterprise_application_platform_in_microsoft_azure/subscription-models-for-server-on-microsoft-azure_default
Chapter 2. Creating YAML rules | Chapter 2. Creating YAML rules Each analyzer rule is a set of instructions that are used to analyze source code and detect issues that are problematic for migration. The analyzer parses user-provided rules, applies them to applications' source code, and generates issues for matched rules. A collection of one or more rules forms a ruleset. Creating rulesets provides a way of organizing multiple rules that achieve a common goal. The analyzer CLI takes rulesets as input arguments. 2.1. YAML rule structure and syntax MTA rules are written in YAML. Each rule consists of metadata, conditions and actions, which instruct the analyzer to take specified actions when given conditions match. A YAML rule file in MTA contains one or more YAML rules. 2.1.1. Rule metadata Rule metadata contains general information about the rule. The structure of metadata is as follows: ruleId: "unique_id" 1 labels: 2 # key=value pair - "label1=val1" # valid label with value omitted - "label2" # valid label with empty value - "label3=" # subdomain prefixed key - "konveyor.io/label1=val1" effort: 1 3 category: mandatory 4 1 The ID must be unique within the ruleset to which the rule belongs. 2 See below for a description of the label format. 3 effort is an integer value that indicates the level of effort needed to fix this issue. 4 category describes the severity of the issue for migration. The value can be either mandatory , optional or potential . For a description of these categories, see Rule categories . 2.1.1.1. Rule labels Labels are key=val pairs specified for rules or rulesets as well as dependencies. For dependencies, a provider adds the labels to the dependencies when retrieving them. Labels on a ruleset are automatically inherited by all the rules that belong to it. Label format Labels are specified under the labels field as a list of strings in key=val format as follows: labels: - "key1=val1" - "key2=val2" The key of a label can be subdomain-prefixed: labels: - "konveyor.io/key1=val1" The value of a label can be empty: labels: - "konveyor.io/key=" The value of a label can be omitted. In that case, it is treated as an empty value: labels: - "konveyor.io/key" Reserved labels The analyzer defines some labels that have special meaning as follows: konveyor.io/source : Identifies the source technology to which a rule or a ruleset applies konveyor.io/target : Identifies the target technology to which a rule or a ruleset applies Label selector The analyzer CLI takes the --label-selector field as an option. It is a string expression that supports logical AND, OR and NOT operations. You can use it to filter-in or filter-out rules by their labels. Examples: To filter-in all rules that have a label with the key konveyor.io/source and value eap6 : --label-selector="konveyor.io/source=eap6" To filter-in all rules that have a label with the key konveyor.io/source and any value: --label-selector="konveyor.io/source" To perform logical AND operations on matches of multiple rules using the && operator: --label-selector="key1=val1 && key2" To perform logical OR operations on matches of multiple rules using the || operator: --label-selector="key1=val1 || key2" To perform a NOT operation to filter-out rules that have key1=val1 label set using the ! operator: --label-selector="!key1=val1" To group sub-expressions and control precedence using AND: --label-selector="(key1=val1 || key2=val2) && !val3" Dependency labels The analyzer engine adds labels to dependencies. 
These labels provide additional information about a dependency, such as its programming language and whether the dependency is open source or internal. Currently, the analyzer adds the following labels to dependencies: labels: - konveyor.io/dep-source=internal - konveyor.io/language=java Dependency label selector The analyzer CLI accepts the --dep-label-selector option, which allows filtering-in or filtering-out incidents generated from a dependency by their labels. For example, the analyzer adds a konveyor.io/dep-source label to dependencies with a value that indicates whether the dependency is a known open source dependency. To exclude incidents for all such open source dependencies, you can use --dep-label-selector as follows: konveyor-analyzer ... --dep-label-selector !konveyor.io/dep-source=open-source The Java provider in the analyzer can also add an exclude label to a list of packages. To exclude all such packages, you can use --dep-label-selector and the ! operator as follows: konveyor-analyzer ... --dep-label-selector !konveyor.io/exclude 2.1.1.2. Rule categories mandatory You must resolve the issue for a successful migration, otherwise, the resulting application is not expected to build or run successfully. An example of such an issue is proprietary APIs that are not supported on the target platform. optional If you do not resolve the issue, the application is expected to work, but the results might not be optimal. If you do not make the change at the time of migration, you need to put it on the schedule soon after your migration is completed. An example of such an issue is EJB 2.x code not upgraded to EJB 3. potential You need to examine the issue during the migration process, but there is not enough information to determine whether resolving the issue is mandatory for the migration to succeed. An example of such an issue is migrating a third-party proprietary type when there is no directly compatible type on the target platform. 2.1.1.3. Rule Actions Rules can include 2 types of actions: message and tag . Each rule includes one of them or both. Message actions A message action creates an issue with a message when the rule matches. The custom data exported by providers can also be used in the message. message: "helpful message about the issue" Example: - ruleID: test-rule when: <CONDITION> message: Test rule matched. Please resolve this migration issue. Optionally, a message can include hyperlinks to external URLs that provide relevant information about the issue or a quick fix. links: - url: "konveyor.io" title: "Short title for the link" A message can also be a template to include information about the match interpolated through custom variables on the rule. Tag actions A tag action instructs the analyzer to generate one or more tags for the application when a match is found. Each string in the tag field can be a comma-separated list of tags. Optionally, you can assign categories to tags. tag: - "tag1,tag2,tag3" - "Category=tag4,tag5" Example - ruleID: test-rule when: <CONDITION> tag: - Language=Golang - Env=production - Source Code A tag can be a string or a key=val pair, where the key is treated as a tag category in MTA. Any rule that has a tag action is referred to as a "tagging rule" in this document. Note that issues are not created for rules that contain only tag actions. 2.1.1.4. Rule conditions Each rule has a when block, which specifies a condition that needs to be met for MTA to perform a certain action. 
The when block contains one condition, but that condition can have multiple conditions nested under it. when: <condition> <nested-condition> MTA supports three types of conditions: provider , and , and or . 2.1.1.4.1. Provider conditions The Application Analyzer detects the programming languages, frameworks, and tools used to build an application, and it generates default rulesets for each supported provider using the Language Server Protocol (LSP) accordingly. Each supported provider has a ruleset defined by default and is run independently in a separate container. MTA supports multi-language source code analysis. Searching for a specific language in the source code is enabled using the provider condition. This condition defines a search query for a specific language provider. The provider condition also specifies which of the provider's "capabilities" to use for analyzing the code. The provider condition has the form <provider_name>.<capability> : when: <provider_name>.<capability> <input_fields> The analyzer currently supports the following provider conditions: builtin java go dotnet Important Support for providing a single report when analyzing multiple applications on the CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Provider rule conditions Provider name Providers that are fully supported and included in the product Java Providers that have rules already defined in the product .NET Providers that require custom rulesets for analysis Go Python Node.js 2.1.1.4.1.1. builtin provider builtin is an internal provider that can analyze various files and internal metadata generated by the engine. This provider has the following capabilities: file filecontent xml json hasTags file The file capability enables the provider to search for files in the source code that match a given pattern. when: builtin.file: pattern: "<regex_to_match_filenames>" filecontent The filecontent capability enables the provider to search for content that matches a given pattern. when: builtin.filecontent: filePattern: "<regex_to_match_filenames_to_scope_search>" pattern: "<regex_to_match_content_in_the_matching_files>" xml The xml capability enables the provider to query XPath expressions on a list of provided XML files. This capability takes 2 input parameters, xpath and filepaths . when: builtin.xml: xpath: "<xpath_expressions>" 1 filepaths: 2 - "/src/file1.xml" - "/src/file2.xml" 1 xpath must be a valid XPath expression. 2 filepaths is a list of files to apply the XPath query to. json The json capability enables the provider to query XPath expressions on a list of provided JSON files. Currently, json only takes XPath as input and performs the search on all JSON files in the codebase. when: builtin.json: xpath: "<xpath_expressions>" 1 1 xpath must be a valid XPath expression. hasTags The hasTags capability enables the provider to query application tags. It queries the internal data structure to check whether the application has the given tags. 
when: # when more than one tag is given, a logical AND is implied hasTags: 1 - "tag1" - "tag2" 1 When more than one tag is given, a logical AND is implied. 2.1.1.4.1.2. java provider The java provider analyzes Java source code. This provider has the following capabilities: referenced dependency referenced The referenced capability enables the provider to find references in the source code. This capability takes three input parameters: pattern , location , and annotated . when: java.referenced: pattern: "<pattern>" 1 location: "<location>" 2 annotated: "<annotated>" 3 1 A regular expression pattern to match, for example, org.kubernetes.* . 2 Specifies the exact location where the pattern needs to be matched, for example, IMPORT . 3 Checks for specific annotations and their elements, such as name and value in the Java code using a query. For example, the following query matches the Bean(url = "http://www.example.com") annotation in the method. annotated: pattern: org.framework.Bean elements: - name: url value: "http://www.example.com" The supported locations are the following: CONSTRUCTOR_CALL TYPE INHERITANCE METHOD_CALL ANNOTATION IMPLEMENTS_TYPE ENUM_CONSTANT RETURN_TYPE IMPORT VARIABLE_DECLARATION FIELD METHOD dependency The dependency capability enables the provider to find dependencies for a given application. MTA generates a list of the application's dependencies, and you can use this capability to query the list and check whether a certain dependency exists for the application within a given range of the dependency's versions. when: java.dependency: name: "<dependency_name>" 1 upperbound: "<version_string>" 2 lowerbound: "<version_string>" 3 1 Name of the dependency to search for. 2 Upper bound on the version of the dependency. 3 Lower bound on the version of the dependency. 2.1.1.4.1.3. go provider The go provider analyzes Go source code. This provider's capabilities are referenced and dependency . referenced The referenced capability enables the provider to find references in the source code. when: go.referenced: "<regex_to_find_reference>" dependency The dependency capability enables the provider to find dependencies for an application. when: go.dependency: name: "<dependency_name>" 1 upperbound: "<version_string>" 2 lowerbound: "<version_string>" 3 1 Name of the dependency to search for. 2 Upper bound on the version of the dependency. 3 Lower bound on the version of the dependency. 2.1.1.4.1.4. dotnet provider The dotnet is an external provider used to analyze .NET and C# source code. Currently, the provider supports the referenced capability. referenced The referenced capability enables the provider to find references in the source code. when: dotnet.referenced: pattern: "<pattern>" 1 namespace: "<namespace>" 2 1 pattern : A regex pattern to match the desired reference. For example, HttpNotFound. 2 namespace : Specifies the namespace to search within. For example, System.Web.Mvc. 2.1.1.4.2. Custom variables Provider conditions can have associated custom variables. You can use custom variables to capture relevant information from the matched line in the source code. The values of these variables are interpolated with data matched in the source code. These values can be used to generate detailed templated messages in a rule's action (see Message actions ). 
They can be added to a rule in the customVariables field: - ruleID: lang-ref-004 customVariables: - pattern: '([A-z]+)\.get\(\)' 1 name: VariableName 2 message: "Found generic call - {{ VariableName }}" 3 when: java.referenced: location: METHOD_CALL pattern: com.example.apps.GenericClass.get 1 pattern : A regular expression pattern that is matched on the source code line when a match is found. 2 name : The name of the variable that can be used in templates. 3 message : A template for a message using a custom variable. 2.1.1.5. Logical conditions The analyzer provides two basic logical conditions, and and or , which enable you to aggregate results of other conditions and create more complex queries. 2.1.1.5.1. and condition The and condition performs a logical AND operation on the results of an array of conditions. An and condition matches when all of its child conditions match. when: and: - <condition1> - <condition2> Example when: and: - java.dependency: name: junit.junit upperbound: 4.12.2 lowerbound: 4.4.0 - java.referenced: location: IMPORT pattern: junit.junit Conditions can also be nested within other conditions. Example when: and: - and: - go.referenced: "*CustomResourceDefinition*" - java.referenced: pattern: "*CustomResourceDefinition*" - go.referenced: "*CustomResourceDefinition*" 2.1.1.5.2. or condition The or condition performs a logical OR operation on the results of an array of conditions. An or condition matches when any of its child conditions matches. when: or: - <condition1> - <condition2> Example when: or: - java.dependency: name: junit.junit upperbound: 4.12.2 lowerbound: 4.4.0 - java.referenced: location: IMPORT pattern: junit.junit 2.1.2. Rulesets A set of rules forms a ruleset. MTA does not require every rule file to belong to a ruleset, but you can use rulesets to group multiple rules that achieve a common goal and to pass the rules to the rules engine. You can create a ruleset by placing one or more YAML rules in a directory and creating a ruleset.yaml file at the directory root. When you pass this directory as input to the MTA CLI using the --rules option, all rules in this directory are treated as a part of the ruleset defined by ruleset.yaml file. The ruleset.yaml file stores the metadata of the ruleset. name: "Name of the ruleset" 1 description: "Description of the ruleset" labels: 2 - key=val 1 The name must be unique within the provided rulesets. 2 Ruleset labels are inherited by all rules that belong to the ruleset. To execute any application analysis, run the following command. Replace <application_to_analyze> with your application, <output_dir> with the directory of your choice, and <custom_rule_dir> with the custom rulesets file. On initiation, the mta-cli tool determines the type of application and the corresponding provider needed for analysis. It then starts the provider in a container that has the required dependencies and tools. Finally, the provider uses the analyzer to execute a series of rulesets to analyze the source code. 2.2. Creating a basic YAML rule This section describes how to create a basic MTA YAML rule. This assumes that you already have MTA installed. See the MTA CLI Guide for installation instructions. 2.2.1. 
Creating a basic YAML rule template MTA YAML-based rules have the following basic structure: when(condition) message(message) tag(tags) Procedure In the /home/<USER>/ directory, create a file containing the basic syntax for YAML rules as follows: - category: mandatory description: | <DESCRIPTION TITLE> <DESCRIPTION TEXT> effort: <EFFORT> labels: - konveyor.io/source=<SOURCE_TECH> - konveyor.io/target=<TARGET_TECH> links: - url: <HYPERLINK> title: <HYPERLINK_TITLE> message: <MESSAGE> tag: - <TAG1> - <TAG2> ruleID: <RULE_ID> when: <CONDITIONS> 2.2.2. Creating a basic YAML ruleset template If you want to group multiple similar rules, you can create a ruleset for them by placing their files in a directory and creating a ruleset.yaml file at the directory's root. When you pass this directory as input to the MTA CLI using the --rules option, MTA treats all the files in the directory as belonging to the ruleset defined in the ruleset.yaml file. Procedure Create a template for ruleset.yaml files if you want to pass the entire directory using the --rules option: name: <RULESET_NAME> 1 description: <RULESET_DESCRIPTION> labels: 2 - key=val 1 The name must be unique within the provided rulesets. 2 Ruleset labels are inherited by all rules that belong to the ruleset. 2.2.3. Creating a YAML rule Each rule file contains one or more YAML rules. Every rule comprises metadata, conditions and actions. Procedure Create a when condition. The when condition of a YAML rule can be provider , and or or . Create a provider condition The provider condition is used to define a search query for a specific language provider and to invoke a certain capability of the provider. The condition's general format is <provider_name>.<capability> . The condition also has inner fields to specify details of the search. The way you create a provider condition and its inner fields depends on which provider you use and which capability you invoke. The table below lists the available providers and their capabilities. Select a provider and its capability that suit the purpose of the rule you want to create. This part of the condition does not contain any of the condition's fields yet. Provider Capability Description java referenced Finds references of a pattern, including annotations, with an optional code location for detailed searches dependency Checks whether the application has a given dependency builtin xml Searches XML files using XPath queries json Searches JSON files using JSONPath queries filecontent Searches content in regular files using RegEx patterns file Finds files with names matching a given pattern hasTags Checks whether a tag is created for the application through a tagging rule go referenced Finds references of a pattern dependency Checks whether the application has a given dependency The example below shows a java provider condition that uses the referenced capability. Example when: java.referenced: Add suitable fields to the provider condition. The table below lists all available providers, their capabilities, and their fields. Select the fields that belong to the provider and capability that you have chosen. Note that some fields are mandatory. Provider Capability Field Required? 
Description java referenced pattern Yes RegEx pattern location No Source code location; see below for a list of all supported search locations annotated No Annotations and their elements (name and value) dependency name Yes Name of the dependency nameregex No RegEx pattern to match the name upperbound No Matches version numbers lower than or equal to lowerbound No Matches version numbers greater than or equal to builtin xml xpath Yes XPath query namespaces No A map to scope down query to namespaces filepaths No Optional list of files to scope down search json xpath Yes XPath query filepaths No Optional list of files to scope down search filecontent pattern Yes RegEx pattern to match in content filePattern No Only searches in files with names matching this pattern file pattern Yes Finds files with names matching this pattern hasTags This is an inline list of string tags. See Tag Actions in Rule Actions for details on tag format. go referenced pattern Yes RegEx pattern dependency name Yes Name of the dependency nameregex No RegEx pattern to match the name upperbound No Matches version numbers lower than or equal to lowerbound No Matches version numbers greater than or equal to The following search locations can be used to scope down java searches: CONSTRUCTOR_CALL TYPE INHERITANCE METHOD_CALL ANNOTATION IMPLEMENTS_TYPE ENUM_CONSTANT RETURN_TYPE IMPORT VARIABLE_DECLARATION The example below shows the when condition of a rule that searches for references of a package. Example when: java.referenced: location: PACKAGE pattern: org.jboss.* Create an AND or OR condition An and condition matches when all of its child conditions match. Create an and condition as follows: when: and: - java.dependency: name: junit.junit upperbound: 4.12.2 lowerbound: 4.4.0 - java.referenced: location: IMPORT pattern: junit.junit An or condition matches when any of its child conditions match. Create an or condition as follows: when: or: - java.dependency: name: junit.junit upperbound: 4.12.2 lowerbound: 4.4.0 - java.referenced: location: IMPORT pattern: junit.junit 2.2.4. Running an analysis using a custom YAML rule To run an analysis, use the --rules option in the CLI. Procedure To use the rules in a single rule file, /home/<USER>/rule.yaml , run the following command: mta-cli analyze --input /home/<USER>/data/ --output /home/<USER>/output/ --rules /home/<USER>/rule.yaml where: /home/<USER>/data/ - the directory of the source code or binary /home/<USER>/output/ - the directory for reports (HTML and YAML) To use multiple rule files, you need to place them in a directory and to add a ruleset.yaml file. Then the directory is treated as a ruleset , and you can pass it as input to the --rules option. Note that if you wish to use the --target or --source option in the CLI, the engine will only select rules that match the label for that target. Therefore, make sure that you have added target or source labels on your rules. See Reserved labels for more details. 2.3. Creating your first YAML rule This section guides you through the process of creating and testing your first MTA YAML-based rule. This assumes that you have already installed MTA. See Installing and running the CLI in the CLI Guide for installation instructions. In this example, you will create a rule to discover instances where an application defines a jboss-web.xml file containing a <class-loading> element and to provide a link to the documentation that describes how to migrate the code. 2.3.1. 
Creating a YAML file for the rule Create a YAML file for your first rule. 2.3.2. Creating data to test the rule Create jboss-web.xml and pom.xml files in a directory: In the jboss-web.xml file you created, paste the following content: In the pom.xml file you created, paste the following content: 2.3.3. Creating the rule MTA YAML-based rules use the following rule pattern: Procedure In the rule.yaml file you created, paste the following contents: 1 Unique ID for your rule. For example, jboss5-web-class-loading . 2 Text description of the rule. 3 Complete the when block specifying one or more conditions: Use the builtin provider's XML capability because this rule checks for a match in an XML file. To match on the class-loading element that is a child of jboss-web , use the XPath expression jboss-web/class-loading as an XML query. In this case, you need just one condition: 4 Helpful message explaining the migration issue. The message is generated in the report when the rule matches. For example: 5 List of string labels for the rule. 6 Number of expected story points to fix this issue. 7 One or more hyperlinks pointing to documentation around the migration issues that you find. The rule is now complete and looks similar to the following: 2.3.4. Installing the rule Procedure Point the CLI to the rule file you created : 2.3.5. Testing the rule Procedure To test the rule, point the input to the test data you created and pass the rule using the --rules option in MTA CLI: 2.3.6. Reviewing the report Review the report to be sure that it provides the expected results. Procedure Once the analysis is complete, the command outputs the path to the HTML report: Open /home/<USER_NAME>/output/static-report/index.html in a web browser. Navigate to the Issues tab in the left menu. Verify that the rule is executed: In the Issues table, type JBoss XML in the search bar. Verify that the issue with the title Find class loading element in JBoss XML file is present in the table. Click the jboss-web.xml link to open the affected file. | [
"ruleId: \"unique_id\" 1 labels: 2 # key=value pair - \"label1=val1\" # valid label with value omitted - \"label2\" # valid label with empty value - \"label3=\" # subdomain prefixed key - \"konveyor.io/label1=val1\" effort: 1 3 category: mandatory 4",
"labels: - \"key1=val1\" - \"key2=val2\"",
"labels: - \"konveyor.io/key1=val1\"",
"labels: - \"konveyor.io/key=\"",
"labels: - \"konveyor.io/key\"",
"labels: - konveyor.io/dep-source=internal - konveyor.io/language=java",
"- ruleID: test-rule when: <CONDITION> message: Test rule matched. Please resolve this migration issue.",
"links: - url: \"konveyor.io\" title: \"Short title for the link\"",
"tag: - \"tag1,tag2,tag3\" - \"Category=tag4,tag5\"",
"- ruleID: test-rule when: <CONDITION> tag: - Language=Golang - Env=production - Source Code",
"when: <condition> <nested-condition>",
"when: <provider_name>.<capability> <input_fields>",
"when: builtin.file: pattern: \"<regex_to_match_filenames>\"",
"when: builtin.filecontent: filePattern: \"<regex_to_match_filenames_to_scope_search>\" pattern: \"<regex_to_match_content_in_the_matching_files>\"",
"when: builtin.xml: xpath: \"<xpath_expressions>\" 1 filepaths: 2 - \"/src/file1.xml\" - \"/src/file2.xml\"",
"when: builtin.json: xpath: \"<xpath_expressions>\" 1",
"when: # when more than one tag is given, a logical AND is implied hasTags: 1 - \"tag1\" - \"tag2\"",
"when: java.referenced: pattern: \"<pattern>\" 1 location: \"<location>\" 2 annotated: \"<annotated>\" 3",
"annotated: pattern: org.framework.Bean elements: - name: url value: \"http://www.example.com\"",
"when: java.dependency: name: \"<dependency_name>\" 1 upperbound: \"<version_string>\" 2 lowerbound: \"<version_string>\" 3",
"when: go.referenced: \"<regex_to_find_reference>\"",
"when: go.dependency: name: \"<dependency_name>\" 1 upperbound: \"<version_string>\" 2 lowerbound: \"<version_string>\" 3",
"when: dotnet.referenced: pattern: \"<pattern>\" 1 namespace: \"<namespace>\" 2",
"- ruleID: lang-ref-004 customVariables: - pattern: '([A-z]+)\\.get\\(\\)' 1 name: VariableName 2 message: \"Found generic call - {{ VariableName }}\" 3 when: java.referenced: location: METHOD_CALL pattern: com.example.apps.GenericClass.get",
"when: and: - <condition1> - <condition2>",
"when: and: - java.dependency: name: junit.junit upperbound: 4.12.2 lowerbound: 4.4.0 - java.referenced: location: IMPORT pattern: junit.junit",
"when: and: - and: - go.referenced: \"*CustomResourceDefinition*\" - java.referenced: pattern: \"*CustomResourceDefinition*\" - go.referenced: \"*CustomResourceDefinition*\"",
"when: or: - <condition1> - <condition2>",
"when: or: - java.dependency: name: junit.junit upperbound: 4.12.2 lowerbound: 4.4.0 - java.referenced: location: IMPORT pattern: junit.junit",
"name: \"Name of the ruleset\" 1 description: \"Description of the ruleset\" labels: 2 - key=val",
"mta-cli analyze --input=<application_to_analyze> --output=<output_dir> --rules=<custom_rule_dir> --enable-default-rulesets=false",
"when(condition) message(message) tag(tags)",
"- category: mandatory description: | <DESCRIPTION TITLE> <DESCRIPTION TEXT> effort: <EFFORT> labels: - konveyor.io/source=<SOURCE_TECH> - konveyor.io/target=<TARGET_TECH> links: - url: <HYPERLINK> title: <HYPERLINK_TITLE> message: <MESSAGE> tag: - <TAG1> - <TAG2> ruleID: <RULE_ID> when: <CONDITIONS>",
"name: <RULESET_NAME> 1 description: <RULESET_DESCRIPTION> labels: 2 - key=val",
"when: java.referenced:",
"when: java.referenced: location: PACKAGE pattern: org.jboss.*",
"when: and: - java.dependency: name: junit.junit upperbound: 4.12.2 lowerbound: 4.4.0 - java.referenced: location: IMPORT pattern: junit.junit",
"when: or: - java.dependency: name: junit.junit upperbound: 4.12.2 lowerbound: 4.4.0 - java.referenced: location: IMPORT pattern: junit.junit",
"mta-cli analyze --input /home/<USER>/data/ --output /home/<USER>/output/ --rules /home/<USER>/rule.yaml",
"mkdir /home/<USER>/rule.yaml",
"mkdir /home/<USER>/data/ touch /home/<USER>/data/jboss-web.xml touch /home/<USER>/data/pom.xml",
"<!DOCTYPE jboss-web PUBLIC \"-//JBoss//DTD Web Application 4.2//EN\" \"http://www.jboss.org/j2ee/dtd/jboss-web_4_2.dtd\"> <jboss-web> <class-loading java2ClassLoadingCompliance=\"false\"> <loader-repository> seam.jboss.org:loader=@projectName@ <loader-repository-config>java2ParentDelegation=false</loader-repository-config> </loader-repository> </class-loading> </jboss-web>",
"<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>test</groupId> <artifactId>test</artifactId> <version>1.1.0-SNAPSHOT</version> <properties> <maven.compiler.source>1.7</maven.compiler.source> <maven.compiler.target>1.7</maven.compiler.target> </properties> <dependencies> </dependencies> </project>",
"when(condition) perform(action)",
"- ruleID: <UNIQUE_RULE_ID> 1 description: <DESCRIPTION> 2 when: <CONDITION(S)> 3 message: <MESSAGE> 4 labels: <LABELS> 5 effort: <EFFORT> 6 links: - <LINKS> 7",
"when: builtin.xml: xpath: jboss-web/class-loading",
"message: The class-loading element is no longer valid in the jboss-web.xml file.",
"links: - url: https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.4/html-single/Migration_Guide/index.html#Create_or_Modify_Files_That_Control_Class_Loading_in_JBoss_Enterprise_Application_Platform_6 title: Create or Modify Files That Control Class Loading in JBoss EAP 6",
"- ruleID: jboss5-web-class-loading description: Find class loading element in JBoss XML file. when: builtin.xml: xpath: jboss-web/class-loading message: The class-loading element is no longer valid in the jboss-web.xml file. effort: 3 links: - url: https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Application_Platform/6.4/html-single/Migration_Guide/index.html#Create_or_Modify_Files_That_Control_Class_Loading_in_JBoss_Enterprise_Application_Platform_6 title: Create or Modify Files That Control Class Loading in JBoss EAP 6",
"-rules /home/<USER>/rules.yaml",
"mta-cli analyze --input /home/<USER>/data/ --output /home/<USER>/output/ --rules /home/<USER>/rules.yaml",
"INFO[0066] Static report created. Access it at this URL: URL=\"file:/home/<USER>/output/static-report/index.html\""
] | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/rules_development_guide/creating-yaml-rules_rules-development-guide |
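To illustrate how the metadata, conditions, and actions described in this chapter fit together, the following is a minimal sketch of a custom ruleset directory that can be passed to the --rules option. The file names, the rule ID, the RMI-related patterns, and the cloud-readiness target label are illustrative assumptions, not values taken from the product rulesets. The ruleset.yaml file marks the directory as a ruleset, and the single rule combines an or condition with both a message and a tag action:
# custom-ruleset/ruleset.yaml (hypothetical)
name: example-custom-ruleset
description: Illustrative ruleset that flags RMI usage
labels:
- konveyor.io/target=cloud-readiness
# custom-ruleset/rmi-rules.yaml (hypothetical)
- ruleID: example-rmi-00001
  category: optional
  effort: 1
  labels:
  - konveyor.io/target=cloud-readiness
  when:
    or:
    - java.referenced:
        location: IMPORT
        pattern: java.rmi.*
    - builtin.filecontent:
        filePattern: .*\.properties
        pattern: rmi\.registry
  message: RMI usage was found; review how remote calls are handled on the target platform.
  tag:
  - Remoting=RMI
Passing the custom-ruleset directory to the --rules option runs every rule in it as part of that ruleset, and the konveyor.io/target label lets the engine select it when --target is used.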
Chapter 5. Examples | Chapter 5. Examples This chapter demonstrates the use of AMQ JMS Pool through example programs. For more examples, see the Pooled JMS examples . 5.1. Prerequisites To build the examples, Maven must be configured to use the Red Hat repository or a local repository . To run the examples, your system must have a running and configured broker . 5.2. Establishing a connection This example creates a new connection pool, binds it to a connection factory, and uses the pool to create a new connection. Example: Establishing a connection - Connect.java package net.example; import javax.jms.Connection; import javax.jms.ConnectionFactory; import org.apache.qpid.jms.JmsConnectionFactory; import org.messaginghub.pooled.jms.JmsPoolConnectionFactory; public class Connect { public static void main(String[] args) throws Exception { if (args.length != 1) { System.err.println("Usage: Connect <connection-uri>"); System.exit(1); } String connUri = args[0]; ConnectionFactory factory = new JmsConnectionFactory(connUri); JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory(); try { pool.setConnectionFactory(factory); Connection conn = pool.createConnection(); conn.start(); try { System.out.println("CONNECT: Connected to '" + connUri + "'"); } finally { conn.close(); } } finally { pool.stop(); } } } 5.3. Configuring the pool This example demonstrates setting connection and session configuration options. Example: Configuring the pool - ConnectWithConfiguration.java package net.example; import javax.jms.Connection; import javax.jms.ConnectionFactory; import org.apache.qpid.jms.JmsConnectionFactory; import org.messaginghub.pooled.jms.JmsPoolConnectionFactory; public class ConnectWithConfiguration { public static void main(String[] args) throws Exception { if (args.length != 1) { System.err.println("Usage: ConnectWithConfiguration <connection-uri>"); System.exit(1); } String connUri = args[0]; ConnectionFactory factory = new JmsConnectionFactory(connUri); JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory(); try { pool.setConnectionFactory(factory); // Set the max connections per user to a higher value pool.setMaxConnections(5); // Create a MessageProducer for each createProducer() call pool.setUseAnonymousProducers(false); Connection conn = pool.createConnection(); conn.start(); try { System.out.println("CONNECT: Connected to '" + connUri + "'"); } finally { conn.close(); } } finally { pool.stop(); } } } 5.4. Running the examples To compile and run the example programs, use the following procedure. Procedure Create a new project directory. This is referred to as <project-dir> in the steps that follow. Copy the example Java listings to the following locations: <project-dir>/src/main/java/net/example/Connect.java <project-dir>/src/main/java/net/example/ConnectWithConfiguration.java Use a text editor to create a new <project-dir>/pom.xml file. Add the following XML to it: <project> <modelVersion>4.0.0</modelVersion> <groupId>net.example</groupId> <artifactId>example</artifactId> <version>1.0.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.messaginghub</groupId> <artifactId>pooled-jms</artifactId> <version>1.1.1.redhat-00003</version> </dependency> <dependency> <groupId>org.apache.qpid</groupId> <artifactId>qpid-jms-client</artifactId> <version> USD{qpid-jms-version} </version> </dependency> </dependencies> </project> Replace USD{qpid-jms-version} with your preferred Qpid JMS version. Change to the project directory and use the mvn command to compile the program. 
mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests The addition of dependency:copy-dependencies results in the dependencies being copied into the target/dependency directory. Use the java command to run the program. On Linux or UNIX: java -cp "target/classes:target/dependency/*" net.example.Connect amqp://localhost On Windows: java -cp "target\classes;target\dependency\*" net.example.Connect amqp://localhost These sample commands run the Connect example. To run another example, replace Connect with the class name of your desired example. Running the Connect example on Linux results in the following output: $ java -cp "target/classes:target/dependency/*" net.example.Connect amqp://localhost CONNECT: Connected to 'amqp://localhost' | [
"package net.example; import javax.jms.Connection; import javax.jms.ConnectionFactory; import org.apache.qpid.jms.JmsConnectionFactory; import org.messaginghub.pooled.jms.JmsPoolConnectionFactory; public class Connect { public static void main(String[] args) throws Exception { if (args.length != 1) { System.err.println(\"Usage: Connect <connection-uri>\"); System.exit(1); } String connUri = args[0]; ConnectionFactory factory = new JmsConnectionFactory(connUri); JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory(); try { pool.setConnectionFactory(factory); Connection conn = pool.createConnection(); conn.start(); try { System.out.println(\"CONNECT: Connected to '\" + connUri + \"'\"); } finally { conn.close(); } } finally { pool.stop(); } } }",
"package net.example; import javax.jms.Connection; import javax.jms.ConnectionFactory; import org.apache.qpid.jms.JmsConnectionFactory; import org.messaginghub.pooled.jms.JmsPoolConnectionFactory; public class ConnectWithConfiguration { public static void main(String[] args) throws Exception { if (args.length != 1) { System.err.println(\"Usage: ConnectWithConfiguration <connection-uri>\"); System.exit(1); } String connUri = args[0]; ConnectionFactory factory = new JmsConnectionFactory(connUri); JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory(); try { pool.setConnectionFactory(factory); // Set the max connections per user to a higher value pool.setMaxConnections(5); // Create a MessageProducer for each createProducer() call pool.setUseAnonymousProducers(false); Connection conn = pool.createConnection(); conn.start(); try { System.out.println(\"CONNECT: Connected to '\" + connUri + \"'\"); } finally { conn.close(); } } finally { pool.stop(); } } }",
"<project-dir>/src/main/java/net/example/Connect.java <project-dir>/src/main/java/net/example/ConnectWithConfiguration.java",
"<project> <modelVersion>4.0.0</modelVersion> <groupId>net.example</groupId> <artifactId>example</artifactId> <version>1.0.0-SNAPSHOT</version> <dependencies> <dependency> <groupId>org.messaginghub</groupId> <artifactId>pooled-jms</artifactId> <version>1.1.1.redhat-00003</version> </dependency> <dependency> <groupId>org.apache.qpid</groupId> <artifactId>qpid-jms-client</artifactId> <version> USD{qpid-jms-version} </version> </dependency> </dependencies> </project>",
"mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests",
"java -cp \"target/classes:target/dependency/*\" net.example.Connect amqp://localhost",
"java -cp \"target\\classes;target\\dependency\\*\" net.example.Connect amqp://localhost",
"java -cp \"target/classes:target/dependency/*\" net.example.Connect amqp://localhost CONNECT: Connected to 'amqp://localhost'"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_jms_pool_library/examples |
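The shipped examples above stop at establishing a connection. The following sketch extends them by sending a single text message through the pool; the Send class name and the queue argument are assumptions added for illustration and are not part of the product examples, but the pooled-jms and Qpid JMS calls mirror the Connect listings.
package net.example;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.qpid.jms.JmsConnectionFactory;
import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;
public class Send {
    public static void main(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.println("Usage: Send <connection-uri> <queue-name>");
            System.exit(1);
        }
        ConnectionFactory factory = new JmsConnectionFactory(args[0]);
        JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
        try {
            pool.setConnectionFactory(factory);
            Connection conn = pool.createConnection();
            conn.start();
            try {
                // Sessions and producers are created from the pooled connection
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue(args[1]);
                MessageProducer producer = session.createProducer(queue);
                TextMessage message = session.createTextMessage("Hello from the pool");
                producer.send(message);
                System.out.println("SEND: Message sent to '" + args[1] + "'");
            } finally {
                conn.close(); // closing a pooled connection returns it to the pool
            }
        } finally {
            pool.stop();
        }
    }
}
It compiles and runs the same way as the Connect example, with the class name Send and an extra queue-name argument on the command line.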
Chapter 2. Installing Connectivity Link in the OpenShift web console | Chapter 2. Installing Connectivity Link in the OpenShift web console You can use the OpenShift Container Platform web console to install the Red Hat Connectivity Link Operator. Note You must perform these steps on each OpenShift cluster that you want to use Connectivity Link on. Prerequisites See Chapter 1, Connectivity Link prerequisites and permissions . You have access to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, log in with cluster-admin privileges. In the left navigation menu, click Operators > OperatorHub . In the Filter by keyword text box, enter Connectivity to find the Red Hat Connectivity Link Operator. Read the information about the Operator, and click Install to display the Operator subscription page. Select your subscription settings as follows: Update Channel : stable Version : 1.0.1 Installation mode : All namespaces on the cluster (default) . Installed namespace : Select the namespace where you want to install the Operator, for example, kuadrant-system . If the namespace does not already exist, click this field and select Create Project to create the namespace. Approval Strategy : Select Automatic or Manual . Click Install , and wait a few moments until the Operator is installed and ready for use. Click Operators > Installed Operators > Red Hat Connectivity Link . Click the Kuadrant tab, and click Create Kuadrant to create a deployment. In the Configure via field, click YAML view to edit the definition, for example, the deployment name. Click Create and wait for the deployment to be displayed in the list. Verification After you have installed the Operator, click Operators > Installed Operators to verify that the Red Hat Connectivity Link Operator and the following component Operators are installed in your namespace: Red Hat - Authorino Operator : Enables authentication and authorization for Gateways and applications in a Gateway API network. Red Hat - DNS Operator : Configures how north-south traffic from outside the network is balanced and reaches Gateways. Red Hat - Limitador Operator : Enables rate limiting for Gateways and applications in a Gateway API network. Additional resources OpenShift Operators documentation . | null | https://docs.redhat.com/en/documentation/red_hat_connectivity_link/1.0/html/installing_connectivity_link_on_openshift/rhcl-install-ocp-web-console_connectivity-link |
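For reference, the resource created by the Create Kuadrant step is a small custom resource. The following sketch shows roughly what the YAML view contains; the apiVersion, resource name, and empty spec are assumptions based on the upstream Kuadrant project and can differ between Operator versions, so treat the definition shown in your console as authoritative.
apiVersion: kuadrant.io/v1beta1
kind: Kuadrant
metadata:
  name: kuadrant
  namespace: kuadrant-system
spec: {}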
Chapter 1. Introduction to OSGi | Chapter 1. Introduction to OSGi Abstract The OSGi specification supports modular application development by defining a runtime framework that simplifies building, deploying, and managing complex applications. 1.1. Overview Apache Karaf is an OSGi-based runtime container for deploying and managing bundles. Apache Karaf also provides native operating system integration, and can be integrated into the operating system as a service so that the lifecycle is bound to the operating system. Apache Karaf has the following structure: Apache Karaf - a wrapper layer around the OSGi container implementation, which provides support for deploying the OSGi container as a runtime server. Runtime features provided by the Fuse include hot deployment, management, and administration features. OSGi Framework - implements OSGi functionality, including managing dependencies and bundle lifecycles 1.2. Architecture of Apache Karaf Apache Karaf extends the OSGi layers with the following functionality: Console - the console manages services, installs and manages applications and libraries, and interacts with the Fuse runtime. It provides console commands to administer instances of Fuse. See the Apache Karaf Console Reference . Logging - the logging subsystem provides console commands to display, view and change log levels. Deployment - supports both manual deployment of OSGi bundles using the bundle:install and bundle:start commands and hot deployment of applications. See Section 6.1, "Hot Deployment" . Provisioning - provides multiple mechanisms for installing applications and libraries. See Chapter 9, Deploying Features . Configuration - the properties files stored in the InstallDir /etc folder are continuously monitored, and changes to them are automatically propagated to the relevant services at configurable intervals. Blueprint - is a dependency injection framework that simplifies interaction with the OSGi container. For example, providing standard XML elements to import and export OSGi services. When a Blueprint configuration file is copied to the hot deployment folder, Red Hat Fuse generates an OSGi bundle on-the-fly and instantiates the Blueprint context. 1.3. OSGi Framework 1.3.1. Overview The OSGi Alliance is an independent organization responsible for defining the features and capabilities of the OSGi Service Platform Release 4 . The OSGi Service Platform is a set of open specifications that simplify building, deploying, and managing complex software applications. OSGi technology is often referred to as the dynamic module system for Java. OSGi is a framework for Java that uses bundles to modularly deploy Java components and handle dependencies, versioning, classpath control, and class loading. OSGi's lifecycle management allows you to load, start, and stop bundles without shutting down the JVM. OSGi provides the best runtime platform for Java, a superior class loading architecture, and a registry for services. Bundles can export services, run processes, and have their dependencies managed. Each bundle can have its requirements managed by the OSGi container. Fuse uses Apache Felix as its default OSGi implementation. The framework layers form the container where you install bundles. The framework manages the installation and updating of bundles in a dynamic, scalable manner, and manages the dependencies between bundles and services. 1.3.2. OSGi architecture The OSGi framework contains the following: Bundles - Logical modules that make up an application. 
See Section 1.5, "OSGi Bundles" . Service layer - Provides communication among modules and their contained components. This layer is tightly integrated with the lifecycle layer. See Section 1.4, "OSGi Services" . Lifecycle layer - Provides access to the underlying OSGi framework. This layer handles the lifecycle of individual bundles so you can manage your application dynamically, including starting and stopping bundles. Module layer - Provides an API to manage bundle packaging, dependency resolution, and class loading. Execution environment - A configuration of a JVM. This environment uses profiles that define the environment in which bundles can work. Security layer - Optional layer based on Java 2 security, with additional constraints and enhancements. Each layer in the framework depends on the layer beneath it. For example, the lifecycle layer requires the module layer. The module layer can be used without the lifecycle and service layers. 1.4. OSGi Services 1.4.1. Overview An OSGi service is a Java class or service interface with service properties defined as name/value pairs. The service properties differentiate among service providers that provide services with the same service interface. An OSGi service is defined semantically by its service interface, and it is implemented as a service object. A service's functionality is defined by the interfaces it implements. Thus, different applications can implement the same service. Service interfaces allow bundles to interact by binding interfaces, not implementations. A service interface should be specified with as few implementation details as possible. 1.4.2. OSGi service registry In the OSGi framework, the service layer provides communication between Section 1.5, "OSGi Bundles" and their contained components using the publish, find, and bind service model. The service layer contains a service registry where: Service providers register services with the framework to be used by other bundles Service requesters find services and bind to service providers Services are owned by, and run within, a bundle. The bundle registers an implementation of a service with the framework service registry under one or more Java interfaces. Thus, the service's functionality is available to other bundles under the control of the framework, and other bundles can look up and use the service. Lookup is performed using the Java interface and service properties. Each bundle can register multiple services in the service registry using the fully qualified name of its interface and its properties. Bundles use names and properties with LDAP syntax to query the service registry for services. A bundle is responsible for runtime service dependency management activities including publication, discovery, and binding. Bundles can also adapt to changes resulting from the dynamic availability (arrival or departure) of the services that are bound to the bundle. Event notification Service interfaces are implemented by objects created by a bundle. Bundles can: Register services Search for services Receive notifications when their registration state changes The OSGi framework provides an event notification mechanism so service requesters can receive notification events when changes in the service registry occur. These changes include the publication or retrieval of a particular service and when services are registered, modified, or unregistered. 
Service invocation model When a bundle wants to use a service, it looks up the service and invokes the Java object as a normal Java call. Therefore, invocations on services are synchronous and occur in the same thread. You can use callbacks for more asynchronous processing. Parameters are passed as Java object references. No marshalling or intermediary canonical formats are required as with XML. OSGi provides solutions for the problem of services being unavailable. OSGi framework services In addition to your own services, the OSGi framework provides the following optional services to manage the operation of the framework: Package Admin service -allows a management agent to define the policy for managing Java package sharing by examining the status of the shared packages. It also allows the management agent to refresh packages and to stop and restart bundles as required. This service enables the management agent to make decisions regarding any shared packages when an exporting bundle is uninstalled or updated. The service also provides methods to refresh exported packages that were removed or updated since the last refresh, and to explicitly resolve specific bundles. This service can also trace dependencies between bundles at runtime, allowing you to see what bundles might be affected by upgrading. Start Level service -enables a management agent to control the starting and stopping order of bundles. The service assigns each bundle a start level. The management agent can modify the start level of bundles and set the active start level of the framework, which starts and stops the appropriate bundles. Only bundles that have a start level less than, or equal to, this active start level can be active. URL Handlers service -dynamically extends the Java runtime with URL schemes and content handlers enabling any component to provide additional URL handlers. Permission Admin service -enables the OSGi framework management agent to administer the permissions of a specific bundle and to provide defaults for all bundles. A bundle can have a single set of permissions that are used to verify that it is authorized to execute privileged code. You can dynamically manipulate permissions by changing policies on the fly and by adding new policies for newly installed components. Policy files are used to control what bundles can do. Conditional Permission Admin service -extends the Permission Admin service with permissions that can apply when certain conditions are either true or false at the time the permission is checked. These conditions determine the selection of the bundles to which the permissions apply. Permissions are activated immediately after they are set. The OSGi framework services are described in detail in separate chapters in the OSGi Service Platform Release 4 specification available from the release 4 download page on the OSGi Alliance web site. OSGi Compendium services In addition to the OSGi framework services, the OSGi Alliance defines a set of optional, standardized compendium services. The OSGi compendium services provide APIs for tasks such as logging and preferences. These services are described in the OSGi Service Platform, Service Compendium available from the release 4 download page on the OSGi Alliance Web site. The Configuration Admin compendium service is like a central hub that persists configuration information and distributes it to interested parties. 
The Configuration Admin service specifies the configuration information for deployed bundles and ensures that the bundles receive that data when they are active. The configuration data for a bundle is a list of name-value pairs. See Section 1.2, "Architecture of Apache Karaf" . 1.5. OSGi Bundles Overview With OSGi, you modularize applications into bundles. Each bundle is a tightly coupled, dynamically loadable collection of classes, JARs, and configuration files that explicitly declare any external dependencies. In OSGi, a bundle is the primary deployment format. Bundles are applications that are packaged in JARs, and can be installed, started, stopped, updated, and removed. OSGi provides a dynamic, concise, and consistent programming model for developing bundles. Development and deployment are simplified by decoupling the service's specification (Java interface) from its implementation. The OSGi bundle abstraction allows modules to share Java classes. This is a static form of reuse. The shared classes must be available when the dependent bundle is started. A bundle is a JAR file with metadata in its OSGi manifest file. A bundle contains class files and, optionally, other resources and native libraries. You can explicitly declare which packages in the bundle are visible externally (exported packages) and which external packages a bundle requires (imported packages). The module layer handles the packaging and sharing of Java packages between bundles and the hiding of packages from other bundles. The OSGi framework dynamically resolves dependencies among bundles. The framework performs bundle resolution to match imported and exported packages. It can also manage multiple versions of a deployed bundle. Class Loading in OSGi OSGi uses a graph model for class loading rather than a tree model (as used by the JVM). Bundles can share and re-use classes in a standardized way, with no runtime class-loading conflicts. Each bundle has its own internal classpath so that it can serve as an independent unit if required. The benefits of class loading in OSGi include: Sharing classes directly between bundles. There is no requirement to promote JARs to a parent class-loader. You can deploy different versions of the same class at the same time, with no conflict. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/ESBOSGiIntro |
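A minimal sketch of the manual bundle deployment flow mentioned above, entered at the Apache Karaf console (the Maven coordinates and bundle ID are hypothetical placeholders, not values from this guide):
karaf@root()> bundle:install mvn:org.example/my-bundle/1.0.0
karaf@root()> bundle:list
karaf@root()> bundle:start <bundle-id>
The bundle:list output shows the numeric ID assigned at install time; passing that ID to bundle:start moves the bundle to the Active state without restarting the container.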
Chapter 1. Prerequisites | Chapter 1. Prerequisites You can use installer-provisioned installation to install OpenShift Container Platform on IBM Cloud(R) Bare Metal (Classic) nodes. This document describes the prerequisites and procedures when installing OpenShift Container Platform on IBM Cloud(R) nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Red Fish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud(R) deployments. A provisioning network is required. Installer-provisioned installation of OpenShift Container Platform requires: One node with Red Hat Enterprise Linux CoreOS (RHCOS) 8.x installed, for running the provisioner Three control plane nodes One routable network One provisioning network Before starting an installer-provisioned installation of OpenShift Container Platform on IBM Cloud(R) Bare Metal (Classic), address the following prerequisites and requirements. 1.1. Setting up IBM Cloud Bare Metal (Classic) infrastructure To deploy an OpenShift Container Platform cluster on IBM Cloud(R) Bare Metal (Classic) infrastructure, you must first provision the IBM Cloud(R) nodes. Important Red Hat supports IPMI and PXE on the provisioning network only. Red Hat has not tested Red Fish, virtual media, or other complementary technologies such as Secure Boot on IBM Cloud(R) deployments. The provisioning network is required. You can customize IBM Cloud(R) nodes using the IBM Cloud(R) API. When creating IBM Cloud(R) nodes, you must consider the following requirements. Use one data center per cluster All nodes in the OpenShift Container Platform cluster must run in the same IBM Cloud(R) data center. Create public and private VLANs Create all nodes with a single public VLAN and a single private VLAN. Ensure subnets have sufficient IP addresses IBM Cloud(R) public VLAN subnets use a /28 prefix by default, which provides 16 IP addresses. That is sufficient for a cluster consisting of three control plane nodes, four worker nodes, and two IP addresses for the API VIP and Ingress VIP on the baremetal network. For larger clusters, you might need a smaller prefix. IBM Cloud(R) private VLAN subnets use a /26 prefix by default, which provides 64 IP addresses. IBM Cloud(R) Bare Metal (Classic) uses private network IP addresses to access the Baseboard Management Controller (BMC) of each node. OpenShift Container Platform creates an additional subnet for the provisioning network. Network traffic for the provisioning network subnet routes through the private VLAN. For larger clusters, you might need a smaller prefix. Table 1.1. IP addresses per prefix IP addresses Prefix 32 /27 64 /26 128 /25 256 /24 Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is a non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. baremetal : The baremetal network is a routable network. You can use any NIC order to interface with the baremetal network, provided it is not the NIC specified in the provisioningNetworkInterface configuration setting or the NIC associated to a node's bootMACAddress configuration setting for the provisioning network. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. 
For example: NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> In the example, NIC1 on all control plane and worker nodes connects to the non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC2 on all control plane and worker nodes connects to the routable baremetal network. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. 2 Note Ensure PXE is enabled on the NIC used for the provisioning network and is disabled on all other NICs. Configuring canonical names Clients access the OpenShift Container Platform cluster nodes over the baremetal network. Configure IBM Cloud(R) subdomains or subzones where the canonical name extension is the cluster name. For example: Creating DNS entries You must create DNS A record entries resolving to unused IP addresses on the public subnet for the following: Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Control plane and worker nodes already have DNS entries after provisioning. The following table provides an example of fully qualified domain names. The API and Nameserver addresses begin with canonical name extensions. The host names of the control plane and worker nodes are examples, so you can use any host naming convention you prefer. Usage Host Name IP API api.<cluster_name>.<domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<domain> <ip> Provisioner node provisioner.<cluster_name>.<domain> <ip> Master-0 openshift-master-0.<cluster_name>.<domain> <ip> Master-1 openshift-master-1.<cluster_name>.<domain> <ip> Master-2 openshift-master-2.<cluster_name>.<domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<domain> <ip> OpenShift Container Platform includes functionality that uses cluster membership information to generate A records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. Important After provisioning the IBM Cloud(R) nodes, you must create a DNS entry for the api.<cluster_name>.<domain> domain name on the external DNS because removing CoreDNS causes the local entry to disappear. Failure to create a DNS record for the api.<cluster_name>.<domain> domain name in the external DNS server prevents worker nodes from joining the cluster. Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL certificates that require validation, which might fail if the date and time between the nodes are not in sync. Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. Configure a DHCP server IBM Cloud(R) Bare Metal (Classic) does not run DHCP on the public or private VLANs. After provisioning IBM Cloud(R) nodes, you must set up a DHCP server for the public VLAN, which corresponds to OpenShift Container Platform's baremetal network. Note The IP addresses allocated to each node do not need to match the IP addresses allocated by the IBM Cloud(R) Bare Metal (Classic) provisioning system. See the "Configuring the public subnet" section for details. 
Ensure BMC access privileges The "Remote management" page for each node on the dashboard contains the node's intelligent platform management interface (IPMI) credentials. The default IPMI privileges prevent the user from making certain boot target changes. You must change the privilege level to OPERATOR so that Ironic can make those changes. In the install-config.yaml file, add the privilegelevel parameter to the URLs used to configure each BMC. See the "Configuring the install-config.yaml file" section for additional details. For example: ipmi://<IP>:<port>?privilegelevel=OPERATOR Alternatively, contact IBM Cloud(R) support and request that they increase the IPMI privileges to ADMINISTRATOR for each node. Create bare metal servers Create bare metal servers in the IBM Cloud(R) dashboard by navigating to Create resource Bare Metal Servers for Classic . Alternatively, you can create bare metal servers with the ibmcloud CLI utility. For example: USD ibmcloud sl hardware create --hostname <SERVERNAME> \ --domain <DOMAIN> \ --size <SIZE> \ --os <OS-TYPE> \ --datacenter <DC-NAME> \ --port-speed <SPEED> \ --billing <BILLING> See Installing the stand-alone IBM Cloud(R) CLI for details on installing the IBM Cloud(R) CLI. Note IBM Cloud(R) servers might take 3-5 hours to become available. | [
"<cluster_name>.<domain>",
"test-cluster.example.com",
"ipmi://<IP>:<port>?privilegelevel=OPERATOR",
"ibmcloud sl hardware create --hostname <SERVERNAME> --domain <DOMAIN> --size <SIZE> --os <OS-TYPE> --datacenter <DC-NAME> --port-speed <SPEED> --billing <BILLING>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_ibm_cloud_bare_metal_classic/install-ibm-cloud-prerequisites |
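A quick sanity check that the required DNS A records resolve before starting the installation, using the example cluster name shown above (the returned IP addresses are site-specific):
dig +short api.test-cluster.example.com
dig +short test.apps.test-cluster.example.com
Any name under *.apps.<cluster_name>.<domain> should return the Ingress VIP, and api.<cluster_name>.<domain> should return the API VIP.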
Appendix A. Using your subscription | Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions | null | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_pool_library/using_your_subscription
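The Registration Assistant generates the exact command for your operating system version; it is usually a subscription-manager call roughly like the following (shown only as an illustration - run the command the assistant lists for you):
sudo subscription-manager register --username <portal-username>
sudo subscription-manager attach --auto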
Chapter 4. Installing Directory Server with Kerberos authentication behind a load balancer | Chapter 4. Installing Directory Server with Kerberos authentication behind a load balancer Installing Directory Server instances that work behind a load balancer and support Kerberos authentication require additional steps compared during the installation. If a user accesses a service using Generic Security Services API (GSSAPI), the Kerberos principal includes the DNS name of the service's host. In case the user connects to a load balancer, the principal contains the DNS name of the load balancer, for example: ldap/[email protected] , and not the DNS name of the Directory Server instance. To facilitate successful connection, the Directory Server instance that receives the request must use the same name as the load balancer, even if the load balancer DNS name is different. This section describes how to set up an Directory Server instance with Kerberos authentication support behind a load balancer. 4.1. Prerequisites The server meets the requirements of the latest Red Hat Directory Server version as described in the Red Hat Directory Server 12 Release Notes . 4.2. Installing the Directory Server packages Use the following procedure to install the Directory Server packages. Prerequisites You enabled RHEL and Directory Server repositories as described in Enabling Directory Server repositories . Procedure Enable the redhat-ds:12 module and install Directory Server packages: 4.3. Creating a .inf file for a Directory Server instance installation Create a .inf file for the dscreate utility, and adjust the file to your environment. In a later step, you will use this file to create the new Directory Server instance. Prerequisites You installed the redhat-ds:12 module. Procedure Use the dscreate create-template command to create a template .inf file. For example, to store the template in the /root/instance_name.inf file, enter: # dscreate create-template /root/instance_name.inf The created file contains all available parameters including descriptions. Edit the file that you created in the step: Uncomment the parameters that you want to set to customize the installation. All parameters have defaults. However, Red Hat recommends that you customize certain parameters for a production environment. For example, set at least the following parameters in the [slapd] section: To install an instance with the LMDB backend, set the following parameters: Note that mdb_max_size must be an integer value that depends on your directory size. For more details, see nsslapd-mdb-max-size attribute description. To use the instance behind a load balancer with GSSAPI authentication, set the full_machine_name parameter in the [general] section to the fully-qualified domain name (FQDN) of the load balancer instead of the FQDN of the Directory Server host: Uncomment the strict_host_checking parameter in the [general] section and set it to False : To automatically create a suffix during instance creation, set the following parameters in the [backend-userroot] section: Important If you do not create a suffix during instance creation, you must create it later manually before you can store data in this instance. Optional: Uncomment other parameters and set them to appropriate values for your environment. For example, use these parameters to specify replication options, such as authentication credentials and changelog trimming, or set different ports for the LDAP and LDAPS protocols. 
Note By default, new instances that you create include a self-signed certificate and TLS enabled. For increased security, Red Hat recommends that you do not disable this feature. Note that you can replace the self-signed certificate with a certificate issued by a Certificate Authority (CA) at a later date. Additional resources Enabling TLS-encrypted connections to Directory Server 4.4. Using a .inf file to set up a new Directory Server instance This section describes how to use a .inf file to set up a new Directory Server instance using the command line. Prerequisites You created a .inf file for the Directory Server instance. Procedure Pass the .inf file to the dscreate from-file command to create the new instance: # dscreate from-file /root/instance_name.inf Starting installation ... Validate installation settings ... Create file system structures ... Create self-signed certificate database ... Perform SELinux labeling ... Perform post-installation tasks ... Completed installation for instance: slapd-instance_name The dscreate utility automatically starts the instance and configures RHEL to start the service when the system boots. Open the required ports in the firewall: # firewall-cmd --permanent --add-port={389/tcp,636/tcp} Reload the firewall configuration: # firewall-cmd --reload 4.5. Creating a keytab for the load balancer and configuring Directory Server to use the keytab Before user can authenticate to Directory Server behind a load balancer using GSSAPI, you must create a Kerberos principal for the load balancer and configure Directory Server to use the Kerberos principal. This section describes this procedure. Prerequisites An instance that contains the following .inf file configuration: The full_machine_name parameter set to the DNS name of the load balancer. The strict_host_checking parameter set to False . Procedure Create the Kerberos principal for the load balancer, for example ldap/ loadbalancer.example.com _@ _EXAMPLE.COM . The procedure to create the service principal depends on your Kerberos installation. For details, see your Kerberos server's documentation. Optional: You can add further principals to the keytab file. For example, to enable users to connect to the Directory Server instance behind the load balancer directly using Kerberos authentication, add additional principals for the Directory Server host. For example, ldap/ server1.example.com @ EXAMPLE.COM . Copy the service keytab file to the Directory Server host, and store it, for example, in the /etc/dirsrv/slapd- instance_name /ldap.keytab file. Add the path to the service keytab to the /etc/sysconfig/slapd- instance_name file: KRB5_KTNAME= /etc/dirsrv/slapd-instance_name/ldap.keytab Restart the Directory Server instance: # dsctl instance_name restart Verification Verify that you can connect to the load balancer using the GSSAPI protocol: # ldapsearch -H ldap:// loadbalancer.example.com -Y GSSAPI If you added additional Kerberos principals to the keytab file, such as for the Directory Server host itself, also verify these connections: # ldapsearch -H ldap:// server1.example.com -Y GSSAPI | [
"dnf module enable redhat-ds:12 dnf install 389-ds-base cockpit-389-ds",
"dscreate create-template /root/instance_name.inf",
"instance_name = instance_name root_password = password",
"db_lib = mdb mdb_max_size = 21474836480",
"full_machine_name = loadbalancer.example.com",
"strict_host_checking = False",
"create_suffix_entry = True suffix = dc=example,dc=com",
"dscreate from-file /root/instance_name.inf Starting installation Validate installation settings Create file system structures Create self-signed certificate database Perform SELinux labeling Perform post-installation tasks Completed installation for instance: slapd-instance_name",
"firewall-cmd --permanent --add-port={389/tcp,636/tcp}",
"firewall-cmd --reload",
"KRB5_KTNAME= /etc/dirsrv/slapd-instance_name/ldap.keytab",
"dsctl instance_name restart",
"ldapsearch -H ldap:// loadbalancer.example.com -Y GSSAPI",
"ldapsearch -H ldap:// server1.example.com -Y GSSAPI"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/installing_red_hat_directory_server/assembly_installing-directory-server-with-kerberos-authentication-behind-a-load-balancer_installing-rhds |
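If your KDC is MIT Kerberos, creating the load-balancer principal and keytab described above can look roughly like this (the realm, admin principal, and temporary keytab path are assumptions for illustration):
kadmin -p admin/[email protected] -q "addprinc -randkey ldap/[email protected]"
kadmin -p admin/[email protected] -q "ktadd -k /root/ldap.keytab ldap/[email protected]"
scp /root/ldap.keytab server1.example.com:/etc/dirsrv/slapd-instance_name/ldap.keytab
Add further principals (for example ldap/[email protected]) with additional ktadd calls before copying the keytab to the Directory Server host.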
Chapter 1. Introduction | Chapter 1. Introduction Red Hat Gluster Storage is a software only, scale-out storage solution that provides flexible and agile unstructured data storage for the enterprise. Red Hat Gluster Storage provides new opportunities to unify data storage and infrastructure, increase performance, and improve availability and manageability to meet a broader set of the storage challenges and needs of an organization. GlusterFS, a key building block of Red Hat Gluster Storage, is based on a stackable user space design and can deliver exceptional performance for diverse workloads. GlusterFS aggregates various storage servers over different network interfaces and connects them to form a single large parallel network file system. The POSIX compatible GlusterFS servers use XFS file system format to store data on disks. These servers can be accessed using industry standard access protocols including Network File System (NFS) and Server Message Block SMB (also known as CIFS). Red Hat Gluster Storage Servers for On-premises can be used in the deployment of private clouds or data centers. Red Hat Gluster Storage can be installed on commodity servers and storage hardware resulting in a powerful, massively scalable, and highly available NAS environment. Additionally, Red Hat Gluster Storage can be deployed in the public cloud using Red Hat Gluster Storage Server for Public Cloud with Amazon Web Services (AWS), Microsoft Azure, or Google Cloud. It delivers all the features and functionality possible in a private cloud or data center to the public cloud by providing massively scalable and high available NAS in the cloud. Red Hat Gluster Storage Server for On-premises Red Hat Gluster Storage Server for On-premises enables enterprises to treat physical storage as a virtualized, scalable, and centrally managed pool of storage by using commodity servers and storage hardware. Red Hat Gluster Storage Server for Public Cloud Red Hat Gluster Storage Server for Public Cloud packages GlusterFS for deploying scalable NAS in AWS, Microsoft Azure, and Google Cloud. This powerful storage server provides a highly available, scalable, virtualized, and centrally managed pool of storage for users of these public cloud providers. | null | https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/3.5_release_notes/chap-documentation-3.5_release_notes-introduction_chapter |
7.2.2. Saving and Restoring iptables Rules | 7.2.2. Saving and Restoring iptables Rules Firewall rules are only valid for the time the computer is on; so, if the system is rebooted, the rules are automatically flushed and reset. To save the rules so that they are loaded later, use the following command: The rules are stored in the file /etc/sysconfig/iptables and are applied whenever the service is started or restarted, including when the machine is rebooted. | [
"/sbin/service iptables save"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-firewall-ipt-act-sav |
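The same rules file can also be written and re-read directly with the iptables utilities, which is useful for inspecting what the service script saves (paths are the Red Hat defaults):
/sbin/iptables-save > /etc/sysconfig/iptables
/sbin/iptables-restore < /etc/sysconfig/iptables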
7.153. netcf | 7.153. netcf 7.153.1. RHBA-2013:0494 - netcf bug fix update Updated netcf packages that fix one bug are now available for Red Hat Enterprise Linux 6. The netcf packages contain a library for modifying the network configuration of a system. Network configuration is expressed in a platform-independent XML format, which netcf translates into changes to the system's "native" network configuration files. Bug Fix BZ# 886862 Previously, the netcf utility had been calling the nl_cache_mngt_provide() function in the libnl library, which was not thread-safe. Consequently, the libvirtd daemon could terminate unexpectedly. As nl_cache_mngt_provide() was not necessary for proper operation, it is no longer called by netcf, thus preventing this bug. Users of netcf are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/netcf |
15.2.2.2. Conflicting Files | 15.2.2.2. Conflicting Files If you attempt to install a package that contains a file which has already been installed by another package or an earlier version of the same package, the following is displayed: To make RPM ignore this error, use the --replacefiles option: | [
"Preparing... ########################################### [100%] file /usr/bin/foo from install of foo-1.0-1 conflicts with file from package bar-2.0.20",
"-ivh --replacefiles foo-1.0-1.i386.rpm"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Installing-Conflicting_Files |
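Before forcing the installation with --replacefiles, it can help to confirm which installed package currently owns the conflicting file; the file name below is the one from the example output:
rpm -qf /usr/bin/foo
rpm -q bar
The first command reports the owning package (bar-2.0.20 in the example), and the second confirms the exact installed version before you overwrite its file.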
5.163. libvirt-cim | 5.163. libvirt-cim 5.163.1. RHBA-2012:0757 - libvirt-cim bug fix and enhancement update An updated libvirt-cim package that fixes various bugs and adds multiple enhancements is now available for Red Hat Enterprise Linux 6. The libvirt-cim package contains a Common Information Model (CIM) provider based on Common Manageability Programming Interface (CMPI). It supports most libvirt virtualization features and allows management of multiple libvirt-based platforms. The libvirt-cim package has been upgraded to upstream version 0.6.1, which provides a number of bug fixes and enhancements over the version. (BZ# 739154 ) Bug Fix BZ# 799037 Previously, the libvirt-cim package required as its dependency the tog-pegasus package, which contains the OpenPegasus Web-Based Enterprise Management (WBEM) services. This is, however, incorrect as libvirt-cim should not require specifically tog-pegasus but any CIM server. With this update, libvirt-cim has been changed to require cim-server instead. The spec files of libvirt-cim and sblim-sfcb have been modified appropriately and libvirt-cim now uses either of the packages as its dependency. Enhancements BZ# 633338 Extension for Quality-of-Service (QoS) networking has been added. BZ# 739153 Support for domain events has been added. BZ# 739156 Extensions for networking of Central Processing Unit (CPU) shares have been added. All libvirt-cim users are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/libvirt-cim |
Chapter 2. Get started using the Insights for RHEL malware detection service | Chapter 2. Get started using the Insights for RHEL malware detection service To begin using the malware detection service, you must perform the following actions. Procedures for each action follow in this chapter. Note Some procedures require sudo access on the system and others require that the administrator performing the actions be a member of a User Access group with the Malware detection administrator role. Table 2.1. Procedure and access requirements to set up malware detection service. Action Description Required privileges Install YARA and configure the Insights client Install the YARA application and configure the Insights client to use the malware detection service Sudo access Configure User Access on the Red Hat Hybrid Cloud Console In Red Hat Hybrid Cloud Console > the Settings icon (⚙) > Identity & Access Management > User Access > Groups , create malware detection groups, and then add the appropriate roles and members to the groups Organization Administrator on the Red Hat account View results See the results of system scans in the Hybrid Cloud Console Membership in a User Access group with the Malware detection viewer role 2.1. Installing YARA and configuring the Insights client Perform the following procedure to install YARA and the malware detection controller on the RHEL system, then run test and full malware detection scans and report data to the Insights for Red Hat Enterprise Linux application. Prerequisites The system operating system version must be RHEL8 or RHEL9. The administrator must have sudo access on the system. The system must have the Insights client package installed, and be registered to Insights for Red Hat Enterprise Linux. Procedure Install YARA. Yara RPMs for RHEL8 and RHEL9 are available on the Red Hat Customer Portal: Note Insights for Red Hat Enterprise Linux malware detection is not supported on RHEL7. If not yet completed, register the system with Insights for Red Hat Enterprise Linux. Important The Insights client package must be installed on the system and the system registered with Insights for Red Hat Enterprise Linux before the malware detection service can be used. Install the Insights client RPM. Test the connection to Insights for Red Hat Enterprise Linux. Register the system with Insights for Red Hat Enterprise Linux. Run the Insights client malware detection collector. The collector takes the following actions for this initial run: Creates a malware detection configuration file in /etc/insights-client/malware-detection-config.yml Performs a test scan and uploads the results Note This is a very minimal scan of your system with a simple test rule. The test scan is mainly to help verify that the installation, operation, and uploads are working correctly for the malware detection service. There will be a couple of matches found but this is intentional and nothing to worry about. Results from the initial test scan will not appear in the malware detection service UI. Perform a full filesystem scan. Edit /etc/insights-client/malware-detection-config.yml and set the test_scan option to false. test_scan: false Consider setting the following options to minimize scan time: filesystem_scan_only - to only scan certain directories on the system filesystem_scan_exclude - to exclude certain directories from being scanned filesystem_scan_since - to scan only recently modified files Re-run the client collector: Optionally, scan processes. 
This will scan the filesystem first, followed by a scan of all processes. After the filesystem and process scans are complete, view the results at Security > Malware . Important By default, scanning processes is disabled. There is an issue with YARA and scanning processes on Linux systems that may cause poor system performance. This problem will be fixed in an upcoming release of YARA, but until then it is recommended to NOT scan processes . To enable process scanning, set scan_processes: true in /etc/insights-client/malware-detection-config.yml . scan_processes: true Note Consider setting these processes related options while you are there: processes_scan_only - to only scan certain processes on the system processess_scan_exclude - to exclude certain processes from being scanned processes_scan_since - to scan only recently started processes Save the changes and run the collector again. 2.2. User Access settings in the Red Hat Hybrid Cloud Console All users on your account have access to most of the data in Insights for Red Hat Enterprise Linux. 2.2.1. Predefined User Access groups and roles To make groups and roles easier to manage, Red Hat provides two predefined groups and a set of predefined roles. 2.2.1.1. Predefined groups The Default access group contains all users in your organization. Many predefined roles are assigned to this group. It is automatically updated by Red Hat. Note If the Organization Administrator makes changes to the Default access group its name changes to Custom default access group and it is no longer updated by Red Hat. The Default admin access group contains only users who have Organization Administrator permissions. This group is automatically maintained and users and roles in this group cannot be changed. 2.2.2. User Access roles for the Malware detection service The following predefined roles on the Red Hat Hybrid Cloud Console enable access to malware detection features in Insights for Red Hat Enterprise Linux. Important There is no "default-group" role for malware detection service users. For users to be able to view data or control settings in the malware detection service, they must be members of the User Access group with one of the following roles: Table 2.2. Permissions provided by the User Access roles User Access Role Permissions Malware detection viewer Read All Malware detection administrator Read All Set user acknowledgment Delete hits Disable signatures permissions 2.3. Viewing malware detection scan results in the Red Hat Hybrid Cloud Console View results of system scans on the Hybrid Cloud Console. Prerequisites YARA and the Insights client are installed and configured on the RHEL system. You must be logged into the Hybrid Cloud Console. You are a member of a Hybrid Cloud Console User Access group with the Malware detection administrator or Malware detection viewer role . Procedures Navigate to Security > Malware > Systems . View the dashboard to get a quick synopsis of all of your RHEL systems with malware detection enabled and reporting results. To see results for a specific system, use the Filter by name search box to search for the system by name. | [
"sudo dnf install yara",
"sudo yum install insights-client",
"sudo insights-client --test-connection",
"sudo insights-client --register",
"sudo insights-client --collector malware-detection",
"test_scan: false",
"sudo insights-client --collector malware-detection",
"scan_processes: true",
"sudo insights-client --collector malware-detection"
] | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_reporting_malware_signatures_on_rhel_systems_with_fedramp/malware-svc-getting-started |
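An illustrative excerpt of /etc/insights-client/malware-detection-config.yml that combines the scan-narrowing options named above (the directory values and the number of days are made-up examples; check the comments in the generated file for the exact value formats):
test_scan: false
filesystem_scan_only:
  - /etc
  - /usr/bin
filesystem_scan_exclude:
  - /var/cache
filesystem_scan_since: 3
After saving the file, run sudo insights-client --collector malware-detection again to apply the narrower scan.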
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_cryostat_to_manage_a_jfr_recording/making-open-source-more-inclusive |
A.3. Fsync | A.3. Fsync Fsync is known as an I/O expensive operation, but this is not completely true. Firefox used to call the sqlite library each time the user clicked on a link to go to a new page. Sqlite called fsync and because of the file system settings (mainly ext3 with data-ordered mode), there was a long latency when nothing happened. This could take a long time (up to 30 seconds) if another process was copying a large file at the same time. However, in other cases, where fsync wasn't used at all, problems emerged with the switch to the ext4 file system. Ext3 was set to data-ordered mode, which flushed memory every few seconds and saved it to a disk. But with ext4 and laptop_mode, the interval between saves was longer and data might get lost when the system was unexpectedly switched off. Now ext4 is patched, but we must still consider the design of our applications carefully, and use fsync as appropriate. The following simple example of reading and writing into a configuration file shows how a backup of a file can be made or how data can be lost: /* open and read configuration file e.g. ./myconfig */ fd = open("./myconfig", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); ... fd = open("./myconfig", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR); write(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); A better approach would be: /* open and read configuration file e.g. ./myconfig */ fd = open("./myconfig", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); ... fd = open("./myconfig.suffix", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR); write(fd, myconfig_buf, sizeof(myconfig_buf)); fsync(fd); /* paranoia - optional */ ... close(fd); rename("./myconfig", "./myconfig~"); /* paranoia - optional */ rename("./myconfig.suffix", "./myconfig"); | [
"/* open and read configuration file e.g. ./myconfig */ fd = open(\"./myconfig\", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); fd = open(\"./myconfig\", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR); write(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd);",
"/* open and read configuration file e.g. ./myconfig */ fd = open(\"./myconfig\", O_RDONLY); read(fd, myconfig_buf, sizeof(myconfig_buf)); close(fd); fd = open(\"./myconfig.suffix\", O_WRONLY | O_TRUNC | O_CREAT, S_IRUSR | S_IWUSR write(fd, myconfig_buf, sizeof(myconfig_buf)); fsync(fd); /* paranoia - optional */ close(fd); rename(\"./myconfig\", \"./myconfig~\"); /* paranoia - optional */ rename(\"./myconfig.suffix\", \"./myconfig\");"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/power_management_guide/developer_tips-fsync |
Chapter 3. Commonly required logs for troubleshooting | Chapter 3. Commonly required logs for troubleshooting Some of the commonly used logs for troubleshooting OpenShift Data Foundation are listed, along with the commands to generate them. Generating logs for a specific pod: Generating logs for Ceph or OpenShift Data Foundation cluster: Important Currently, the rook-ceph-operator logs do not provide any information about the failure and this acts as a limitation in troubleshooting issues, see Enabling and disabling debug logs for rook-ceph-operator . Generating logs for plugin pods like cephfs or rbd to detect any problem in the PVC mount of the app-pod: To generate logs for all the containers in the CSI pod: Generating logs for cephfs or rbd provisioner pods to detect problems if PVC is not in BOUND state: To generate logs for all the containers in the CSI pod: Generating OpenShift Data Foundation logs using cluster-info command: When using Local Storage Operator, generating logs can be done using cluster-info command: Check the OpenShift Data Foundation operator logs and events. To check the operator logs : <ocs-operator> To check the operator events : Get the OpenShift Data Foundation operator version and channel. Example output : Example output : Confirm that the installplan is created. Verify the image of the components post updating OpenShift Data Foundation. Check the node on which the pod of the component you want to verify the image is running. For Example : Example output: dell-r440-12.gsslab.pnq2.redhat.com is the node-name . Check the image ID. <node-name> Is the name of the node on which the pod of the component you want to verify the image is running. For Example : Take a note of the IMAGEID and map it to the Digest ID on the Rook Ceph Operator page. Additional resources Using must-gather 3.1. Adjusting verbosity level of logs The amount of space consumed by debugging logs can become a significant issue. Red Hat OpenShift Data Foundation offers a method to adjust, and therefore control, the amount of storage to be consumed by debugging logs. In order to adjust the verbosity levels of debugging logs, you can tune the log levels of the containers responsible for container storage interface (CSI) operations. In the container's yaml file, adjust the following parameters to set the logging levels: CSI_LOG_LEVEL - defaults to 5 CSI_SIDECAR_LOG_LEVEL - defaults to 1 The supported values are 0 through 5 . Use 0 for general useful logs, and 5 for trace level verbosity. | [
"oc logs <pod-name> -n <namespace>",
"oc logs rook-ceph-operator-<ID> -n openshift-storage",
"oc logs csi-cephfsplugin-<ID> -n openshift-storage -c csi-cephfsplugin",
"oc logs csi-rbdplugin-<ID> -n openshift-storage -c csi-rbdplugin",
"oc logs csi-cephfsplugin-<ID> -n openshift-storage --all-containers",
"oc logs csi-rbdplugin-<ID> -n openshift-storage --all-containers",
"oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage -c csi-cephfsplugin",
"oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage -c csi-rbdplugin",
"oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage --all-containers",
"oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage --all-containers",
"oc cluster-info dump -n openshift-storage --output-directory=<directory-name>",
"oc cluster-info dump -n openshift-local-storage --output-directory=<directory-name>",
"oc logs <ocs-operator> -n openshift-storage",
"oc get pods -n openshift-storage | grep -i \"ocs-operator\" | awk '{print USD1}'",
"oc get events --sort-by=metadata.creationTimestamp -n openshift-storage",
"oc get csv -n openshift-storage",
"NAME DISPLAY VERSION REPLACES PHASE mcg-operator.v4.15.0 NooBaa Operator 4.15.0 Succeeded ocs-operator.v4.15.0 OpenShift Container Storage 4.15.0 Succeeded odf-csi-addons-operator.v4.15.0 CSI Addons 4.15.0 Succeeded odf-operator.v4.15.0 OpenShift Data Foundation 4.15.0 Succeeded",
"oc get subs -n openshift-storage",
"NAME PACKAGE SOURCE CHANNEL mcg-operator-stable-4.15-redhat-operators-openshift-marketplace mcg-operator redhat-operators stable-4.15 ocs-operator-stable-4.15-redhat-operators-openshift-marketplace ocs-operator redhat-operators stable-4.15 odf-csi-addons-operator odf-csi-addons-operator redhat-operators stable-4.15 odf-operator odf-operator redhat-operators stable-4.15",
"oc get installplan -n openshift-storage",
"oc get pods -o wide | grep <component-name>",
"oc get pods -o wide | grep rook-ceph-operator",
"rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 dell-r440-12.gsslab.pnq2.redhat.com <none> <none> <none> <none>",
"oc debug node/<node name>",
"chroot /host",
"crictl images | grep <component>",
"crictl images | grep rook-ceph"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/commonly-required-logs_rhodf |
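One common way to change these values on a running cluster is to set them in the ConfigMap that the rook-ceph operator reads (this sketch assumes the default openshift-storage namespace and the rook-ceph-operator-config ConfigMap; confirm both in your deployment before applying):
oc patch cm rook-ceph-operator-config -n openshift-storage --type merge \
  -p '{"data":{"CSI_LOG_LEVEL":"0","CSI_SIDECAR_LOG_LEVEL":"0"}}'
Lower values such as 0 keep only generally useful logs and limit disk consumption; raise CSI_LOG_LEVEL back toward 5 when trace-level debugging is needed.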
7.2. Overcommitting Memory | 7.2. Overcommitting Memory Guest virtual machines running on a KVM hypervisor do not have dedicated blocks of physical RAM assigned to them. Instead, each guest virtual machine functions as a Linux process where the host physical machine's Linux kernel allocates memory only when requested. In addition the host's memory manager can move the guest virtual machine's memory between its own physical memory and swap space. Overcommitting requires allotting sufficient swap space on the host physical machine to accommodate all guest virtual machines as well as enough memory for the host physical machine's processes. As a basic rule, the host physical machine's operating system requires a maximum of 4 GB of memory along with a minimum of 4 GB of swap space. For advanced instructions on determining an appropriate size for the swap partition, see the Red Hat Knowledgebase . Important Overcommitting is not an ideal solution for general memory issues. The recommended methods to deal with memory shortage are to allocate less memory per guest, add more physical memory to the host, or utilize swap space. A virtual machine will run slower if it is swapped frequently. In addition, overcommitting can cause the system to run out of memory (OOM), which may lead to the Linux kernel shutting down important system processes. If you decide to overcommit memory, ensure sufficient testing is performed. Contact Red Hat support for assistance with overcommitting. Overcommitting does not work with all virtual machines, but has been found to work in a desktop virtualization setup with minimal intensive usage or running several identical guests with KSM. For more information on KSM and overcommitting, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide . Important Memory overcommit is not supported with device assignment. This is because when device assignment is in use, all virtual machine memory must be statically pre-allocated to enable direct memory access (DMA) with the assigned device. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-overcommitting_with_kvm-overcommitting_memory |
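A quick way to check the host's current swap headroom before overcommitting, plus a temporary swap file sized to the 4 GB minimum mentioned above (treat the size as an example and remove the file when finished testing):
swapon --show && free -h
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile && swapon /swapfile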
B.2. The URI Failed to Connect to the Hypervisor | B.2. The URI Failed to Connect to the Hypervisor Several different errors can occur when connecting to the server (for example, when running virsh ). B.2.1. Cannot read CA certificate Symptom When running a command, the following error (or similar) appears: Investigation The error message is misleading about the actual cause. This error can be caused by a variety of factors, such as an incorrectly specified URI, or a connection that is not configured. Solution Incorrectly specified URI When specifying qemu://system or qemu://session as a connection URI, virsh attempts to connect to host names system or session respectively. This is because virsh recognizes the text after the second forward slash as the host. Use three forward slashes to connect to the local host. For example, specifying qemu:///system instructs virsh connect to the system instance of libvirtd on the local host. When a host name is specified, the QEMU transport defaults to TLS . This results in certificates. Connection is not configured The URI is correct (for example, qemu[+tls]://server/system ) but the certificates are not set up properly on your machine. For information on configuring TLS, see Setting up libvirt for TLS available from the libvirt website. | [
"virsh -c name_of_uri list error: Cannot read CA certificate '/etc/pki/CA/cacert.pem': No such file or directory error: failed to connect to the hypervisor"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_host_configuration_and_guest_installation_guide/app_hypervisor_connection_fail |
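A quick way to confirm the two URI forms from the affected host (the remote host name is a placeholder):
virsh -c qemu:///system list --all
virsh -c qemu+ssh://[email protected]/system list --all
The first form, with three forward slashes, connects to the local system instance of libvirtd; the second tunnels over SSH and therefore does not require the TLS certificates that a plain qemu://host/system URI expects.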
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Add a reporter name. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/deploying_and_managing_streams_for_apache_kafka_on_openshift/proc-providing-feedback-on-redhat-documentation |
Chapter 25. Delegating permissions to user groups to manage users using IdM CLI | Chapter 25. Delegating permissions to user groups to manage users using IdM CLI Delegation is one of the access control methods in IdM, along with self-service rules and role-based access control (RBAC). You can use delegation to assign permissions to one group of users to manage entries for another group of users. This section covers the following topics: Delegation rules Creating a delegation rule using IdM CLI Viewing existing delegation rules using IdM CLI Modifying a delegation rule using IdM CLI Deleting a delegation rule using IdM CLI 25.1. Delegation rules You can delegate permissions to user groups to manage users by creating delegation rules . Delegation rules allow a specific user group to perform write (edit) operations on specific attributes for users in another user group. This form of access control rule is limited to editing the values of a subset of attributes you specify in a delegation rule; it does not grant the ability to add or remove whole entries or control over unspecified attributes. Delegation rules grant permissions to existing user groups in IdM. You can use delegation to, for example, allow the managers user group to manage selected attributes of users in the employees user group. 25.2. Creating a delegation rule using IdM CLI Follow this procedure to create a delegation rule using the IdM CLI. Prerequisites You are logged in as a member of the admins group. Procedure Enter the ipa delegation-add command. Specify the following options: --group : the group who is being granted permissions to the entries of users in the user group. --membergroup : the group whose entries can be edited by members of the delegation group. --permissions : whether users will have the right to view the given attributes ( read ) and add or change the given attributes ( write ). If you do not specify permissions, only the write permission will be added. --attrs : the attributes which users in the member group are allowed to view or edit. For example: 25.3. Viewing existing delegation rules using IdM CLI Follow this procedure to view existing delegation rules using the IdM CLI. Prerequisites You are logged in as a member of the admins group. Procedure Enter the ipa delegation-find command: 25.4. Modifying a delegation rule using IdM CLI Follow this procedure to modify an existing delegation rule using the IdM CLI. Important The --attrs option overwrites whatever the list of supported attributes was, so always include the complete list of attributes along with any new attributes. This also applies to the --permissions option. Prerequisites You are logged in as a member of the admins group. Procedure Enter the ipa delegation-mod command with the desired changes. For example, to add the displayname attribute to the basic manager attributes example rule: 25.5. Deleting a delegation rule using IdM CLI Follow this procedure to delete an existing delegation rule using the IdM CLI. Prerequisites You are logged in as a member of the admins group. Procedure Enter the ipa delegation-del command. When prompted, enter the name of the delegation rule you want to delete: | [
"ipa delegation-add \"basic manager attributes\" --permissions=read --permissions=write --attrs=businesscategory --attrs=departmentnumber --attrs=employeetype --attrs=employeenumber --group=managers --membergroup=employees ------------------------------------------- Added delegation \"basic manager attributes\" ------------------------------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeetype, employeenumber Member user group: employees User group: managers",
"ipa delegation-find -------------------- 1 delegation matched -------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeenumber, employeetype Member user group: employees User group: managers ---------------------------- Number of entries returned 1 ----------------------------",
"ipa delegation-mod \"basic manager attributes\" --attrs=businesscategory --attrs=departmentnumber --attrs=employeetype --attrs=employeenumber --attrs=displayname ---------------------------------------------- Modified delegation \"basic manager attributes\" ---------------------------------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeetype, employeenumber, displayname Member user group: employees User group: managers",
"ipa delegation-del Delegation name: basic manager attributes --------------------------------------------- Deleted delegation \"basic manager attributes\" ---------------------------------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/delegating-permissions-to-user-groups-to-manage-users-using-idm-cli_configuring-and-managing-idm |
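After creating or modifying a rule, you can display a single rule by name to confirm that its attribute list and permissions took effect (the rule name is the one used in the examples above):
ipa delegation-show "basic manager attributes"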
Appendix B. Mod_jk connector module | Appendix B. Mod_jk connector module The Apache Tomcat Connector, mod_jk , is a web server plug-in that the Apache Tomcat project provides. The Apache HTTP Server can use the mod_jk module to load-balance HTTP client requests to back-end servlet containers, while maintaining sticky sessions and communicating over the Apache JServ Protocol (AJP). The mod_jk module is included in the Apache HTTP Server part of a JBoss Core Services installation. The mod_jk module requires that you create both a mod_jk.conf file and a workers.properties file on the Apache HTTP Server host. The mod_jk.conf file specifies settings to load and configure the mod_jk.so module. The workers.properties file specifies back-end worker node details. You must also configure some settings on the JBoss Web Server host to enable mod_jk support. Additional resources Load balancing with the Apache Tomcat Connector ( mod_jk ) Workers.properties file for mod_jk | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/apache_http_server_connectors_and_load_balancing_guide/ref_mod-jk-so_http-connectors-lb-guide
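A minimal sketch of the two files named above for a single back-end worker (the file locations, host name, port, and the /app mount point are illustrative assumptions, not values from this guide):
# mod_jk.conf
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf.d/workers.properties
JkLogFile logs/mod_jk.log
JkMount /app/* node1
# workers.properties
worker.list=node1
worker.node1.type=ajp13
worker.node1.host=backend1.example.com
worker.node1.port=8009
The JkMount directive forwards requests matching /app/* to the worker named node1, which mod_jk reaches over AJP on port 8009.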
Chapter 11. Configuring CPU feature flags for instances | Chapter 11. Configuring CPU feature flags for instances You can enable or disable CPU feature flags for an instance without changing the settings on the host Compute node and rebooting the Compute node. By configuring the standard set of CPU feature flags that are applied to instances, you are helping to achieve live migration compatibility across Compute nodes. You are also helping to manage the performance and security of the instances, by disabling flags that have a negative impact on the security or performance of the instances with a particular CPU model, or enabling flags that provide mitigation from a security problem or alleviate performance problems. 11.1. Prerequisites The CPU model and feature flags must be supported by the hardware and software of the host Compute node: To check the hardware your host supports, enter the following command on the Compute node: To check the CPU models supported on your host, enter the following command on the Compute node: Replace <arch> with the name of the architecture, for example, x86_64 . 11.2. Configuring CPU feature flags for instances Configure the Compute service to apply CPU feature flags to instances with specific vCPU models. Procedure Log in to the undercloud as the stack user. Source the stackrc file: Open your Compute environment file. Configure the instance CPU mode: Replace <cpu_mode> with the CPU mode of each instance on the Compute node. Set to one of the following valid values: host-model : (Default) Use the CPU model of the host Compute node. Use this CPU mode to automatically add critical CPU flags to the instance to provide mitigation from security flaws. custom : Use to configure the specific CPU models each instance should use. Note You can also set the CPU mode to host-passthrough to use the same CPU model and feature flags as the Compute node for the instances hosted on that Compute node. Optional: If you set NovaLibvirtCPUMode to custom , configure the instance CPU models that you want to customise: Replace <cpu_model> with a comma-separated list of the CPU models that the host supports. List the CPU models in order, placing the more common and less advanced CPU models first in the list, and the more feature-rich CPU models last, for example, SandyBridge,IvyBridge,Haswell,Broadwell . For a list of model names, see /usr/share/libvirt/cpu_map.xml , or enter the following command on the host Compute node: Replace <arch> with the name of the architecture of the Compute node, for example, x86_64 . Configure the CPU feature flags for instances with the specified CPU models: Replace <cpu_feature_flags> with a comma-separated list of feature flags to enable or disable. Prefix each flag with "+" to enable the flag, or "-" to disable it. If a prefix is not specified, the flag is enabled. For a list of the available feature flags for a given CPU model, see /usr/share/libvirt/cpu_map/*.xml . The following example enables the CPU feature flags pcid and ssbd for the IvyBridge and Cascadelake-Server models, and disables the feature flag mtrr . Add your Compute environment file to the stack with your other environment files and deploy the overcloud: | [
"cat /proc/cpuinfo",
"sudo podman exec -it nova_libvirt virsh cpu-models <arch>",
"[stack@director ~]USD source ~/stackrc",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: <cpu_mode>",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: <cpu_model>",
"sudo podman exec -it nova_libvirt virsh cpu-models <arch>",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUModelExtraFlags: <cpu_feature_flags>",
"parameter_defaults: ComputeParameters: NovaLibvirtCPUMode: 'custom' NovaLibvirtCPUModels: 'IvyBridge','Cascadelake-Server' NovaLibvirtCPUModelExtraFlags: 'pcid,+ssbd,-mtrr'",
"(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-cpu-feature-flags-for-instances_instance-cpu-feature-flags |
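One way to check that NovaLibvirtCPUModelExtraFlags took effect is to boot a test instance after the overcloud deployment completes and inspect the CPU flags the guest sees. The following is a rough, hedged sketch — the flavor, image, network, and server name are placeholders, and the exact flag list depends on the CPU model you configured.

# Boot a throwaway instance (placeholder names).
openstack server create --flavor m1.small --image cirros --network private test-cpu-flags

# From inside the guest, enabled flags such as pcid and ssbd should appear in
# /proc/cpuinfo, while flags disabled with a "-" prefix (for example mtrr) should not.
grep -o -e pcid -e ssbd -e mtrr /proc/cpuinfo | sort -u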
Chapter 4. Creating an AWS-STS-backed backingstore | Chapter 4. Creating an AWS-STS-backed backingstore Amazon Web Services Security Token Service (AWS STS) is an AWS feature and it is a way to authenticate using short-lived credentials. Creating an AWS-STS-backed backingstore involves the following: Creating an AWS role using a script, which helps to get the temporary security credentials for the role session Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Creating backingstore in AWS STS OpenShift cluster 4.1. Creating an AWS role using a script You need to create a role and pass the role Amazon resource name (ARN) while installing the OpenShift Data Foundation operator. Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Procedure Create an AWS role using a script that matches OpenID Connect (OIDC) configuration for Multicloud Object Gateway (MCG) on OpenShift Data Foundation. The following example shows the details that are required to create the role: where 123456789123 Is the AWS account ID mybucket Is the bucket name (using public bucket configuration) us-east-2 Is the AWS region openshift-storage Is the namespace name Sample script 4.1.1. Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Procedure Install OpenShift Data Foundation Operator from the Operator Hub. During the installation add the role ARN in the ARN Details field. Make sure that the Update approval field is set to Manual . 4.1.2. Creating a new AWS STS backingstore Prerequisites Configure Red Hat OpenShift Container Platform cluster with AWS STS. For more information, see Configuring an AWS cluster to use short-term credentials . Create an AWS role using a script that matches OpenID Connect (OIDC) configuration. For more information, see Creating an AWS role using a script . Install OpenShift Data Foundation Operator. For more information, see Installing OpenShift Data Foundation operator in AWS STS OpenShift cluster . Procedure Install Multicloud Object Gateway (MCG). It is installed with the default backingstore by using the short-lived credentials. After the MCG system is ready, you can create more backingstores of the type aws-sts-s3 using the following MCG command line interface command: where backingstore-name Name of the backingstore aws-sts-role-arn The AWS STS role ARN which will assume role region The AWS bucket region target-bucket The target bucket name on the cloud | [
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::123456789123:oidc-provider/mybucket-oidc.s3.us-east-2.amazonaws.com\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"mybucket-oidc.s3.us-east-2.amazonaws.com:sub\": [ \"system:serviceaccount:openshift-storage:noobaa\", \"system:serviceaccount:openshift-storage:noobaa-core\", \"system:serviceaccount:openshift-storage:noobaa-endpoint\" ] } } } ] }",
"#!/bin/bash set -x This is a sample script to help you deploy MCG on AWS STS cluster. This script shows how to create role-policy and then create the role in AWS. For more information see: https://docs.openshift.com/rosa/authentication/assuming-an-aws-iam-role-for-a-service-account.html WARNING: This is a sample script. You need to adjust the variables based on your requirement. Variables : user variables - REPLACE these variables with your values: ROLE_NAME=\"<role-name>\" # role name that you pick in your AWS account NAMESPACE=\"<namespace>\" # namespace name where MCG is running. For OpenShift Data Foundation, it is openshift-storage. MCG variables SERVICE_ACCOUNT_NAME_1=\"noobaa\" # The service account name of deployment operator SERVICE_ACCOUNT_NAME_2=\"noobaa-endpoint\" # The service account name of deployment endpoint SERVICE_ACCOUNT_NAME_3=\"noobaa-core\" # The service account name of statefulset core AWS variables Make sure these values are not empty (AWS_ACCOUNT_ID, OIDC_PROVIDER) AWS_ACCOUNT_ID is your AWS account number AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query \"Account\" --output text) If you want to create the role before using the cluster, replace this field too. The OIDC provider is in the structure: 1) <OIDC-bucket>.s3.<aws-region>.amazonaws.com. for OIDC bucket configurations are in an S3 public bucket 2) `<characters>.cloudfront.net` for OIDC bucket configurations in an S3 private bucket with a public CloudFront distribution URL OIDC_PROVIDER=USD(oc get authentication cluster -ojson | jq -r .spec.serviceAccountIssuer | sed -e \"s/^https:\\/\\///\") the permission (S3 full access) POLICY_ARN_STRINGS=\"arn:aws:iam::aws:policy/AmazonS3FullAccess\" Creating the role (with AWS command line interface) read -r -d '' TRUST_RELATIONSHIP <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_PROVIDER}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_1}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_2}\", \"system:serviceaccount:USD{NAMESPACE}:USD{SERVICE_ACCOUNT_NAME_3}\" ] } } } ] } EOF echo \"USD{TRUST_RELATIONSHIP}\" > trust.json aws iam create-role --role-name \"USDROLE_NAME\" --assume-role-policy-document file://trust.json --description \"role for demo\" while IFS= read -r POLICY_ARN; do echo -n \"Attaching USDPOLICY_ARN ... \" aws iam attach-role-policy --role-name \"USDROLE_NAME\" --policy-arn \"USD{POLICY_ARN}\" echo \"ok.\" done <<< \"USDPOLICY_ARN_STRINGS\"",
"noobaa backingstore create aws-sts-s3 <backingstore-name> --aws-sts-arn=<aws-sts-role-arn> --region=<region> --target-bucket=<target-bucket>"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_amazon_web_services/creating-an-aws-sts-backed-backingstore_mcg-verify |
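After the noobaa backingstore create command returns, the new resource should eventually report a Ready phase. A short, hypothetical verification using oc and the MCG CLI is sketched below; <backingstore-name> is the same placeholder used in the procedure above.

# Watch the backingstore custom resource until its PHASE column shows Ready.
oc get backingstore <backingstore-name> -n openshift-storage -w

# The MCG CLI prints more detail, including the connection that uses the assumed STS role.
noobaa backingstore status <backingstore-name> -n openshift-storage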
Chapter 10. Viewing audit logs | Chapter 10. Viewing audit logs OpenShift Container Platform auditing provides a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. 10.1. About the API audit log Audit works at the API server level, logging all requests coming to the server. Each audit log contains the following information: Table 10.1. Audit log fields Field Description level The audit level at which the event was generated. auditID A unique audit ID, generated for each request. stage The stage of the request handling when this event instance was generated. requestURI The request URI as sent by the client to a server. verb The Kubernetes verb associated with the request. For non-resource requests, this is the lowercase HTTP method. user The authenticated user information. impersonatedUser Optional. The impersonated user information, if the request is impersonating another user. sourceIPs Optional. The source IPs, from where the request originated and any intermediate proxies. userAgent Optional. The user agent string reported by the client. Note that the user agent is provided by the client, and must not be trusted. objectRef Optional. The object reference this request is targeted at. This does not apply for List -type requests, or non-resource requests. responseStatus Optional. The response status, populated even when the ResponseObject is not a Status type. For successful responses, this will only include the code. For non-status type error responses, this will be auto-populated with the error message. requestObject Optional. The API object from the request, in JSON format. The RequestObject is recorded as is in the request (possibly re-encoded as JSON), prior to version conversion, defaulting, admission or merging. It is an external versioned object type, and might not be a valid object on its own. This is omitted for non-resource requests and is only logged at request level and higher. responseObject Optional. The API object returned in the response, in JSON format. The ResponseObject is recorded after conversion to the external type, and serialized as JSON. This is omitted for non-resource requests and is only logged at response level. requestReceivedTimestamp The time that the request reached the API server. stageTimestamp The time that the request reached the current audit stage. annotations Optional. An unstructured key value map stored with an audit event that may be set by plugins invoked in the request serving chain, including authentication, authorization and admission plugins. Note that these annotations are for the audit event, and do not correspond to the metadata.annotations of the submitted object. Keys should uniquely identify the informing component to avoid name collisions, for example podsecuritypolicy.admission.k8s.io/policy . Values should be short. Annotations are included in the metadata level. 
Example output for the Kubernetes API server: {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"ad209ce1-fec7-4130-8192-c4cc63f1d8cd","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s","verb":"update","user":{"username":"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client","uid":"dd4997e3-d565-4e37-80f8-7fc122ccd785","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-controller-manager","system:authenticated"]},"sourceIPs":["::1"],"userAgent":"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat","objectRef":{"resource":"configmaps","namespace":"openshift-kube-controller-manager","name":"cert-recovery-controller-lock","uid":"5c57190b-6993-425d-8101-8337e48c7548","apiVersion":"v1","resourceVersion":"574307"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-04-02T08:27:20.200962Z","stageTimestamp":"2020-04-02T08:27:20.206710Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:kube-controller-manager-recovery\" of ClusterRole \"cluster-admin\" to ServiceAccount \"localhost-recovery-client/openshift-kube-controller-manager\""}} 10.2. Viewing the audit logs You can view the logs for the OpenShift API server, Kubernetes API server, OpenShift OAuth API server, and OpenShift OAuth server for each control plane node. Procedure To view the audit logs: View the OpenShift API server audit logs: List the OpenShift API server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=openshift-apiserver/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T00-12-19.834.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T00-11-49.835.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T00-13-00.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific OpenShift API server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=openshift-apiserver/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver/audit-2021-03-09T00-12-19.834.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"381acf6d-5f30-4c7d-8175-c9c317ae5893","stage":"ResponseComplete","requestURI":"/metrics","verb":"get","user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","uid":"825b60a0-3976-4861-a342-3b2b561e8f82","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]},"sourceIPs":["10.129.2.6"],"userAgent":"Prometheus/2.23.0","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-03-08T18:02:04.086545Z","stageTimestamp":"2021-03-08T18:02:04.107102Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"prometheus-k8s\" of ClusterRole \"prometheus-k8s\" to ServiceAccount \"prometheus-k8s/openshift-monitoring\""}} View the Kubernetes API server audit logs: List the Kubernetes API server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=kube-apiserver/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 
audit-2021-03-09T14-07-27.129.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T19-24-22.620.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T18-37-07.511.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific Kubernetes API server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=kube-apiserver/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=kube-apiserver/audit-2021-03-09T14-07-27.129.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"cfce8a0b-b5f5-4365-8c9f-79c1227d10f9","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa","verb":"get","user":{"username":"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator","uid":"2574b041-f3c8-44e6-a057-baef7aa81516","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-scheduler-operator","system:authenticated"]},"sourceIPs":["10.128.0.8"],"userAgent":"cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat","objectRef":{"resource":"serviceaccounts","namespace":"openshift-kube-scheduler","name":"openshift-kube-scheduler-sa","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-03-08T18:06:42.512619Z","stageTimestamp":"2021-03-08T18:06:42.516145Z","annotations":{"authentication.k8s.io/legacy-token":"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator","authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:cluster-kube-scheduler-operator\" of ClusterRole \"cluster-admin\" to ServiceAccount \"openshift-kube-scheduler-operator/openshift-kube-scheduler-operator\""}} View the OpenShift OAuth API server audit logs: List the OpenShift OAuth API server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=oauth-apiserver/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T13-06-26.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T18-23-21.619.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T17-36-06.510.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific OpenShift OAuth API server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=oauth-apiserver/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-apiserver/audit-2021-03-09T13-06-26.128.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"dd4c44e2-3ea1-4830-9ab7-c91a5f1388d6","stage":"ResponseComplete","requestURI":"/apis/user.openshift.io/v1/users/~","verb":"get","user":{"username":"system:serviceaccount:openshift-monitoring:prometheus-k8s","groups":["system:serviceaccounts","system:serviceaccounts:openshift-monitoring","system:authenticated"]},"sourceIPs":["10.0.32.4","10.128.0.1"],"userAgent":"dockerregistry/v0.0.0 (linux/amd64) 
kubernetes/USDFormat","objectRef":{"resource":"users","name":"~","apiGroup":"user.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2021-03-08T17:47:43.653187Z","stageTimestamp":"2021-03-08T17:47:43.660187Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"basic-users\" of ClusterRole \"basic-user\" to Group \"system:authenticated\""}} View the OpenShift OAuth server audit logs: List the OpenShift OAuth server audit logs that are available for each control plane node: USD oc adm node-logs --role=master --path=oauth-server/ Example output ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2022-05-11T18-57-32.395.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2022-05-11T19-07-07.021.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2022-05-11T19-06-51.844.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log View a specific OpenShift OAuth server audit log by providing the node name and the log name: USD oc adm node-logs <node_name> --path=oauth-server/<log_name> For example: USD oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-server/audit-2022-05-11T18-57-32.395.log Example output {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"13c20345-f33b-4b7d-b3b6-e7793f805621","stage":"ResponseComplete","requestURI":"/login","verb":"post","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["10.128.2.6"],"userAgent":"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0","responseStatus":{"metadata":{},"code":302},"requestReceivedTimestamp":"2022-05-11T17:31:16.280155Z","stageTimestamp":"2022-05-11T17:31:16.297083Z","annotations":{"authentication.openshift.io/decision":"error","authentication.openshift.io/username":"kubeadmin","authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} The possible values for the authentication.openshift.io/decision annotation are allow , deny , or error . 10.3. Filtering audit logs You can use jq or another JSON parsing tool to filter the API server audit logs. Note The amount of information logged to the API server audit logs is controlled by the audit log policy that is set. The following procedure provides examples of using jq to filter audit logs on control plane node node-1.example.com . See the jq Manual for detailed information on using jq . Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed jq . 
Procedure Filter OpenShift API server audit logs by user: USD oc adm node-logs node-1.example.com \ --path=openshift-apiserver/audit.log \ | jq 'select(.user.username == "myusername")' Filter OpenShift API server audit logs by user agent: USD oc adm node-logs node-1.example.com \ --path=openshift-apiserver/audit.log \ | jq 'select(.userAgent == "cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat")' Filter Kubernetes API server audit logs by a certain API version and only output the user agent: USD oc adm node-logs node-1.example.com \ --path=kube-apiserver/audit.log \ | jq 'select(.requestURI | startswith("/apis/apiextensions.k8s.io/v1beta1")) | .userAgent' Filter OpenShift OAuth API server audit logs by excluding a verb: USD oc adm node-logs node-1.example.com \ --path=oauth-apiserver/audit.log \ | jq 'select(.verb != "get")' Filter OpenShift OAuth server audit logs by events that identified a username and failed with an error: USD oc adm node-logs node-1.example.com \ --path=oauth-server/audit.log \ | jq 'select(.annotations["authentication.openshift.io/username"] != null and .annotations["authentication.openshift.io/decision"] == "error")' 10.4. Gathering audit logs You can use the must-gather tool to collect the audit logs for debugging your cluster, which you can review or send to Red Hat Support. Procedure Run the oc adm must-gather command with -- /usr/bin/gather_audit_logs : USD oc adm must-gather -- /usr/bin/gather_audit_logs Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1 1 Replace must-gather.local.472290403699006248 with the actual directory name. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal. 10.5. Additional resources Must-gather tool API audit log event structure Configuring the audit log policy About log forwarding | [
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"ad209ce1-fec7-4130-8192-c4cc63f1d8cd\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-controller-manager/configmaps/cert-recovery-controller-lock?timeout=35s\",\"verb\":\"update\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-controller-manager:localhost-recovery-client\",\"uid\":\"dd4997e3-d565-4e37-80f8-7fc122ccd785\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-controller-manager\",\"system:authenticated\"]},\"sourceIPs\":[\"::1\"],\"userAgent\":\"cluster-kube-controller-manager-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"configmaps\",\"namespace\":\"openshift-kube-controller-manager\",\"name\":\"cert-recovery-controller-lock\",\"uid\":\"5c57190b-6993-425d-8101-8337e48c7548\",\"apiVersion\":\"v1\",\"resourceVersion\":\"574307\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2020-04-02T08:27:20.200962Z\",\"stageTimestamp\":\"2020-04-02T08:27:20.206710Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:kube-controller-manager-recovery\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"localhost-recovery-client/openshift-kube-controller-manager\\\"\"}}",
"oc adm node-logs --role=master --path=openshift-apiserver/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T00-12-19.834.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T00-11-49.835.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T00-13-00.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=openshift-apiserver/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=openshift-apiserver/audit-2021-03-09T00-12-19.834.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"381acf6d-5f30-4c7d-8175-c9c317ae5893\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/metrics\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"uid\":\"825b60a0-3976-4861-a342-3b2b561e8f82\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.129.2.6\"],\"userAgent\":\"Prometheus/2.23.0\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:02:04.086545Z\",\"stageTimestamp\":\"2021-03-08T18:02:04.107102Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"prometheus-k8s\\\" of ClusterRole \\\"prometheus-k8s\\\" to ServiceAccount \\\"prometheus-k8s/openshift-monitoring\\\"\"}}",
"oc adm node-logs --role=master --path=kube-apiserver/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T14-07-27.129.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T19-24-22.620.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T18-37-07.511.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=kube-apiserver/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=kube-apiserver/audit-2021-03-09T14-07-27.129.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"cfce8a0b-b5f5-4365-8c9f-79c1227d10f9\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"uid\":\"2574b041-f3c8-44e6-a057-baef7aa81516\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-kube-scheduler-operator\",\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.8\"],\"userAgent\":\"cluster-kube-scheduler-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"serviceaccounts\",\"namespace\":\"openshift-kube-scheduler\",\"name\":\"openshift-kube-scheduler-sa\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T18:06:42.512619Z\",\"stageTimestamp\":\"2021-03-08T18:06:42.516145Z\",\"annotations\":{\"authentication.k8s.io/legacy-token\":\"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:openshift:operator:cluster-kube-scheduler-operator\\\" of ClusterRole \\\"cluster-admin\\\" to ServiceAccount \\\"openshift-kube-scheduler-operator/openshift-kube-scheduler-operator\\\"\"}}",
"oc adm node-logs --role=master --path=oauth-apiserver/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2021-03-09T13-06-26.128.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2021-03-09T18-23-21.619.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2021-03-09T17-36-06.510.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=oauth-apiserver/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-apiserver/audit-2021-03-09T13-06-26.128.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"dd4c44e2-3ea1-4830-9ab7-c91a5f1388d6\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/apis/user.openshift.io/v1/users/~\",\"verb\":\"get\",\"user\":{\"username\":\"system:serviceaccount:openshift-monitoring:prometheus-k8s\",\"groups\":[\"system:serviceaccounts\",\"system:serviceaccounts:openshift-monitoring\",\"system:authenticated\"]},\"sourceIPs\":[\"10.0.32.4\",\"10.128.0.1\"],\"userAgent\":\"dockerregistry/v0.0.0 (linux/amd64) kubernetes/USDFormat\",\"objectRef\":{\"resource\":\"users\",\"name\":\"~\",\"apiGroup\":\"user.openshift.io\",\"apiVersion\":\"v1\"},\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2021-03-08T17:47:43.653187Z\",\"stageTimestamp\":\"2021-03-08T17:47:43.660187Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"basic-users\\\" of ClusterRole \\\"basic-user\\\" to Group \\\"system:authenticated\\\"\"}}",
"oc adm node-logs --role=master --path=oauth-server/",
"ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit-2022-05-11T18-57-32.395.log ci-ln-m0wpfjb-f76d1-vnb5x-master-0 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit-2022-05-11T19-07-07.021.log ci-ln-m0wpfjb-f76d1-vnb5x-master-1 audit.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit-2022-05-11T19-06-51.844.log ci-ln-m0wpfjb-f76d1-vnb5x-master-2 audit.log",
"oc adm node-logs <node_name> --path=oauth-server/<log_name>",
"oc adm node-logs ci-ln-m0wpfjb-f76d1-vnb5x-master-0 --path=oauth-server/audit-2022-05-11T18-57-32.395.log",
"{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"13c20345-f33b-4b7d-b3b6-e7793f805621\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/login\",\"verb\":\"post\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.128.2.6\"],\"userAgent\":\"Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0\",\"responseStatus\":{\"metadata\":{},\"code\":302},\"requestReceivedTimestamp\":\"2022-05-11T17:31:16.280155Z\",\"stageTimestamp\":\"2022-05-11T17:31:16.297083Z\",\"annotations\":{\"authentication.openshift.io/decision\":\"error\",\"authentication.openshift.io/username\":\"kubeadmin\",\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}",
"oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.user.username == \"myusername\")'",
"oc adm node-logs node-1.example.com --path=openshift-apiserver/audit.log | jq 'select(.userAgent == \"cluster-version-operator/v0.0.0 (linux/amd64) kubernetes/USDFormat\")'",
"oc adm node-logs node-1.example.com --path=kube-apiserver/audit.log | jq 'select(.requestURI | startswith(\"/apis/apiextensions.k8s.io/v1beta1\")) | .userAgent'",
"oc adm node-logs node-1.example.com --path=oauth-apiserver/audit.log | jq 'select(.verb != \"get\")'",
"oc adm node-logs node-1.example.com --path=oauth-server/audit.log | jq 'select(.annotations[\"authentication.openshift.io/username\"] != null and .annotations[\"authentication.openshift.io/decision\"] == \"error\")'",
"oc adm must-gather -- /usr/bin/gather_audit_logs",
"tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/security_and_compliance/audit-log-view |
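The jq filters in the chapter above operate on a single node at a time. When a quick cluster-wide view is needed, the same commands can be combined in a small loop over the control plane nodes — a hedged sketch follows, assuming only that the nodes carry the standard master role label.

# Count audit events per user on every control plane node (sketch).
for node in $(oc get nodes -l node-role.kubernetes.io/master= -o name | cut -d/ -f2); do
  echo "== ${node} =="
  oc adm node-logs "${node}" --path=kube-apiserver/audit.log \
    | jq -r '.user.username' | sort | uniq -c | sort -rn | head
done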
Chapter 10. SSO protocols | Chapter 10. SSO protocols This section discusses authentication protocols, the Red Hat build of Keycloak authentication server and how applications, secured by the Red Hat build of Keycloak authentication server, interact with these protocols. 10.1. OpenID Connect OpenID Connect (OIDC) is an authentication protocol that is an extension of OAuth 2.0 . OAuth 2.0 is a framework for building authorization protocols and is incomplete. OIDC, however, is a full authentication and authorization protocol that uses the Json Web Token (JWT) standards. The JWT standards define an identity token JSON format and methods to digitally sign and encrypt data in a compact and web-friendly way. In general, OIDC implements two use cases. The first case is an application requesting that a Red Hat build of Keycloak server authenticates a user. Upon successful login, the application receives an identity token and an access token . The identity token contains user information including user name, email, and profile information. The realm digitally signs the access token which contains access information (such as user role mappings) that applications use to determine the resources users can access in the application. The second use case is a client accessing remote services. The client requests an access token from Red Hat build of Keycloak to invoke on remote services on behalf of the user. Red Hat build of Keycloak authenticates the user and asks the user for consent to grant access to the requesting client. The client receives the access token which is digitally signed by the realm. The client makes REST requests on remote services using the access token . The remote REST service extracts the access token . The remote REST service verifies the tokens signature. The remote REST service decides, based on access information within the token, to process or reject the request. 10.1.1. OIDC auth flows OIDC has several methods, or flows, that clients or applications can use to authenticate users and receive identity and access tokens. The method depends on the type of application or client requesting access. 10.1.1.1. Authorization Code Flow The Authorization Code Flow is a browser-based protocol and suits authenticating and authorizing browser-based applications. It uses browser redirects to obtain identity and access tokens. A user connects to an application using a browser. The application detects the user is not logged into the application. The application redirects the browser to Red Hat build of Keycloak for authentication. The application passes a callback URL as a query parameter in the browser redirect. Red Hat build of Keycloak uses the parameter upon successful authentication. Red Hat build of Keycloak authenticates the user and creates a one-time, short-lived, temporary code. Red Hat build of Keycloak redirects to the application using the callback URL and adds the temporary code as a query parameter in the callback URL. The application extracts the temporary code and makes a background REST invocation to Red Hat build of Keycloak to exchange the code for an identity and access and refresh token. To prevent replay attacks, the temporary code cannot be used more than once. Note A system is vulnerable to a stolen token for the lifetime of that token. For security and scalability reasons, access tokens are generally set to expire quickly so subsequent token requests fail. 
If a token expires, an application can obtain a new access token using the additional refresh token sent by the login protocol. Confidential clients provide client secrets when they exchange the temporary codes for tokens. Public clients are not required to provide client secrets. Public clients are secure when HTTPS is strictly enforced and redirect URIs registered for the client are strictly controlled. HTML5/JavaScript clients have to be public clients because there is no way to securely transmit the client secret to HTML5/JavaScript clients. For more details, see the Managing Clients chapter. Red Hat build of Keycloak also supports the Proof Key for Code Exchange specification. 10.1.1.2. Implicit Flow The Implicit Flow is a browser-based protocol. It is similar to the Authorization Code Flow but with fewer requests and no refresh tokens. Note The possibility exists of access tokens leaking in the browser history when tokens are transmitted via redirect URIs (see below). Also, this flow does not provide clients with refresh tokens. Therefore, access tokens have to be long-lived or users have to re-authenticate when they expire. We do not advise using this flow. This flow is supported because it is in the OIDC and OAuth 2.0 specification. The protocol works as follows: A user connects to an application using a browser. The application detects the user is not logged into the application. The application redirects the browser to Red Hat build of Keycloak for authentication. The application passes a callback URL as a query parameter in the browser redirect. Red Hat build of Keycloak uses the query parameter upon successful authentication. Red Hat build of Keycloak authenticates the user and creates an identity and access token. Red Hat build of Keycloak redirects to the application using the callback URL and additionally adds the identity and access tokens as a query parameter in the callback URL. The application extracts the identity and access tokens from the callback URL. 10.1.1.3. Resource owner password credentials grant (Direct Access Grants) Direct Access Grants are used by REST clients to obtain tokens on behalf of users. It is a HTTP POST request that contains: The credentials of the user. The credentials are sent within form parameters. The id of the client. The clients secret (if it is a confidential client). The HTTP response contains the identity , access , and refresh tokens. 10.1.1.4. Client credentials grant The Client Credentials Grant creates a token based on the metadata and permissions of a service account associated with the client instead of obtaining a token that works on behalf of an external user. Client Credentials Grants are used by REST clients. See the Service Accounts chapter for more information. 10.1.2. Refresh token grant By default, Red Hat build of Keycloak returns refresh tokens in the token responses from most of the flows. Some exceptions are implicit flow or client credentials grant described above. Refresh token is tied to the user session of the SSO browser session and can be valid for the lifetime of the user session. However, that client should send a refresh-token request at least once per specified interval. Otherwise, the session can be considered "idle" and can expire. See the timeouts section for more information. Red Hat build of Keycloak supports offline tokens , which can be used typically when client needs to use refresh token even if corresponding browser SSO session is already expired. 10.1.2.1. 
Refresh token rotation It is possible to specify that the refresh token is considered invalid once it is used. This means that the client must always save the refresh token from the last refresh response because older refresh tokens, which were already used, would not be considered valid anymore by Red Hat build of Keycloak. This is possible to set with the use of Revoke Refresh token option as specified in the timeouts section . Red Hat build of Keycloak also supports the situation that no refresh token rotation exists. In this case, a refresh token is returned during login, but subsequent responses from refresh-token requests will not return new refresh tokens. This practice is recommended for instance in the FAPI 2 draft specification . In Red Hat build of Keycloak, it is possible to skip refresh token rotation with the use of client policies . You can add the executor suppress-refresh-token-rotation to some client profile and configure a client policy to specify for which clients the profile is triggered, which means that refresh token rotation is skipped for those clients. 10.1.2.2. Device authorization grant This is used by clients running on internet-connected devices that have limited input capabilities or lack a suitable browser. Here's a brief summary of the protocol: The application requests a device code and a user code from Red Hat build of Keycloak. Red Hat build of Keycloak creates a device code and a user code. Red Hat build of Keycloak returns a response including the device code and the user code to the application. The application provides the user with the user code and the verification URI. The user accesses a verification URI to be authenticated by using another browser. You could define a short verification_uri that will be redirected to the Red Hat build of Keycloak verification URI (/realms/realm_name/device) outside Red Hat build of Keycloak, for example in a proxy. The application repeatedly polls Red Hat build of Keycloak to find out if the user completed the user authorization. If user authentication is complete, the application exchanges the device code for an identity , access and refresh token. 10.1.2.3. Client initiated backchannel authentication grant This feature is used by clients who want to initiate the authentication flow by communicating with the OpenID Provider directly, without redirecting through the user's browser as in OAuth 2.0's authorization code grant. Here's a brief summary of the protocol: The client requests an auth_req_id from Red Hat build of Keycloak that identifies the authentication request made by the client. Red Hat build of Keycloak creates the auth_req_id. After receiving this auth_req_id, this client repeatedly needs to poll Red Hat build of Keycloak to obtain an Access Token, Refresh Token and ID Token from Red Hat build of Keycloak in return for the auth_req_id until the user is authenticated. An administrator can configure Client Initiated Backchannel Authentication (CIBA) related operations as CIBA Policy per realm. Also refer to other parts of the Red Hat build of Keycloak documentation, such as the Backchannel Authentication Endpoint section of the Securing Applications and Services Guide and the Client Initiated Backchannel Authentication Grant section of the Securing Applications and Services Guide. 10.1.2.3.1. CIBA Policy An administrator carries out the following operations on the Admin Console : Open the Authentication CIBA Policy tab. Configure items and click Save . The configurable items and their description follow.
Configuration Description Backchannel Token Delivery Mode Specifying how the CD (Consumption Device) gets the authentication result and related tokens. There are three modes, "poll", "ping" and "push". Red Hat build of Keycloak only supports "poll". The default setting is "poll". This configuration is required. For more details, see CIBA Specification . Expires In The expiration time of the "auth_req_id" in seconds since the authentication request was received. The default setting is 120. This configuration is required. For more details, see CIBA Specification . Interval The interval in seconds the CD (Consumption Device) needs to wait for between polling requests to the token endpoint. The default setting is 5. This configuration is optional. For more details, see CIBA Specification . Authentication Requested User Hint The way of identifying the end-user for whom authentication is being requested. The default setting is "login_hint". There are three modes, "login_hint", "login_hint_token" and "id_token_hint". Red Hat build of Keycloak only supports "login_hint". This configuration is required. For more details, see CIBA Specification . 10.1.2.3.2. Provider Setting The CIBA grant uses the following two providers. Authentication Channel Provider : provides the communication between Red Hat build of Keycloak and the entity that actually authenticates the user via AD (Authentication Device). User Resolver Provider : get UserModel of Red Hat build of Keycloak from the information provided by the client to identify the user. Red Hat build of Keycloak has both default providers. However, the administrator needs to set up Authentication Channel Provider like this: kc.[sh|bat] start --spi-ciba-auth-channel-ciba-http-auth-channel-http-authentication-channel-uri=https://backend.internal.example.com The configurable items and their description follow. Configuration Description http-authentication-channel-uri Specifying URI of the entity that actually authenticates the user via AD (Authentication Device). 10.1.2.3.3. Authentication Channel Provider CIBA standard document does not specify how to authenticate the user by AD. Therefore, it might be implemented at the discretion of products. Red Hat build of Keycloak delegates this authentication to an external authentication entity. To communicate with the authentication entity, Red Hat build of Keycloak provides Authentication Channel Provider. Its implementation of Red Hat build of Keycloak assumes that the authentication entity is under the control of the administrator of Red Hat build of Keycloak so that Red Hat build of Keycloak trusts the authentication entity. It is not recommended to use the authentication entity that the administrator of Red Hat build of Keycloak cannot control. Authentication Channel Provider is provided as SPI provider so that users of Red Hat build of Keycloak can implement their own provider in order to meet their environment. Red Hat build of Keycloak provides its default provider called HTTP Authentication Channel Provider that uses HTTP to communicate with the authentication entity. If a user of Red Hat build of Keycloak user want to use the HTTP Authentication Channel Provider, they need to know its contract between Red Hat build of Keycloak and the authentication entity consisting of the following two parts. Authentication Delegation Request/Response Red Hat build of Keycloak sends an authentication request to the authentication entity. 
Authentication Result Notification/ACK The authentication entity notifies the result of the authentication to Red Hat build of Keycloak. Authentication Delegation Request/Response consists of the following messaging. Authentication Delegation Request The request is sent from Red Hat build of Keycloak to the authentication entity to ask it for user authentication by AD. Headers Name Value Description Content-Type application/json The message body is json formatted. Authorization Bearer [token] The [token] is used when the authentication entity notifies the result of the authentication to Red Hat build of Keycloak. Parameters Type Name Description Path delegation_reception The endpoint provided by the authentication entity to receive the delegation request Body Name Description login_hint It tells the authentication entity who is authenticated by AD. By default, it is the user's "username". This field is required and was defined by CIBA standard document. scope It tells which scopes the authentication entity gets consent from the authenticated user. This field is required and was defined by CIBA standard document. is_consent_required It shows whether the authentication entity needs to get consent from the authenticated user about the scope. This field is required. binding_message Its value is intended to be shown in both CD and AD's UI to make the user recognize that the authentication by AD is triggered by CD. This field is optional and was defined by CIBA standard document. acr_values It tells the requesting Authentication Context Class Reference from CD. This field is optional and was defined by CIBA standard document. Authentication Delegation Response The response is returned from the authentication entity to Red Hat build of Keycloak to notify that the authentication entity received the authentication request from Red Hat build of Keycloak. Responses HTTP Status Code Description 201 It notifies Red Hat build of Keycloak of receiving the authentication delegation request. Authentication Result Notification/ACK consists of the following messaging. Authentication Result Notification The authentication entity sends the result of the authentication request to Red Hat build of Keycloak. Headers Name Value Description Content-Type application/json The message body is json formatted. Authorization Bearer [token] The [token] must be the one the authentication entity has received from Red Hat build of Keycloak in Authentication Delegation Request. Parameters Type Name Description Path realm The realm name Body Name Description status It tells the result of user authentication by AD. It must be one of the following status. SUCCEED : The authentication by AD has been successfully completed. UNAUTHORIZED : The authentication by AD has not been completed. CANCELLED : The authentication by AD has been cancelled by the user. Authentication Result ACK The response is returned from Red Hat build of Keycloak to the authentication entity to notify Red Hat build of Keycloak received the result of user authentication by AD from the authentication entity. Responses HTTP Status Code Description 200 It notifies the authentication entity of receiving the notification of the authentication result. 10.1.2.3.4. User Resolver Provider Even if the same user, its representation may differ in each CD, Red Hat build of Keycloak and the authentication entity. 
For CD, Red Hat build of Keycloak and the authentication entity to recognize the same user, this User Resolver Provider converts their own user representations among them. User Resolver Provider is provided as SPI provider so that users of Red Hat build of Keycloak can implement their own provider in order to meet their environment. Red Hat build of Keycloak provides its default provider called Default User Resolver Provider that has the following characteristics. Only support login_hint parameter and is used as default. username of UserModel in Red Hat build of Keycloak is used to represent the user on CD, Red Hat build of Keycloak and the authentication entity. 10.1.3. OIDC Logout OIDC has four specifications relevant to logout mechanisms: Session Management RP-Initiated Logout Front-Channel Logout Back-Channel Logout Again since all of this is described in the OIDC specification we will only give a brief overview here. 10.1.3.1. Session Management This is a browser-based logout. The application obtains session status information from Red Hat build of Keycloak at a regular basis. When the session is terminated at Red Hat build of Keycloak the application will notice and trigger its own logout. 10.1.3.2. RP-Initiated Logout This is also a browser-based logout where the logout starts by redirecting the user to a specific endpoint at Red Hat build of Keycloak. This redirect usually happens when the user clicks the Log Out link on the page of some application, which previously used Red Hat build of Keycloak to authenticate the user. Once the user is redirected to the logout endpoint, Red Hat build of Keycloak is going to send logout requests to clients to let them invalidate their local user sessions, and potentially redirect the user to some URL once the logout process is finished. The user might be optionally requested to confirm the logout in case the id_token_hint parameter was not used. After logout, the user is automatically redirected to the specified post_logout_redirect_uri as long as it is provided as a parameter. Note that you need to include either the client_id or id_token_hint parameter in case the post_logout_redirect_uri is included. Also the post_logout_redirect_uri parameter needs to match one of the Valid Post Logout Redirect URIs specified in the client configuration. Depending on the client configuration, logout requests can be sent to clients through the front-channel or through the back-channel. For the frontend browser clients, which rely on the Session Management described in the section, Red Hat build of Keycloak does not need to send any logout requests to them; these clients automatically detect that SSO session in the browser is logged out. 10.1.3.3. Front-channel Logout To configure clients to receive logout requests through the front-channel, look at the Front-Channel Logout client setting. When using this method, consider the following: Logout requests sent by Red Hat build of Keycloak to clients rely on the browser and on embedded iframes that are rendered for the logout page. By being based on iframes , front-channel logout might be impacted by Content Security Policies (CSP) and logout requests might be blocked. If the user closes the browser prior to rendering the logout page or before logout requests are actually sent to clients, their sessions at the client might not be invalidated. Note Consider using Back-Channel Logout as it provides a more reliable and secure approach to log out users and terminate their sessions on the clients. 
If the client is not enabled with front-channel logout, then Red Hat build of Keycloak is going to try first to send logout requests through the back-channel using the Back-Channel Logout URL . If not defined, the server is going to fall back to using the Admin URL . 10.1.3.4. Backchannel Logout This is a non-browser-based logout that uses direct backchannel communication between Red Hat build of Keycloak and clients. Red Hat build of Keycloak sends a HTTP POST request containing a logout token to all clients logged into Red Hat build of Keycloak. These requests are sent to a registered backchannel logout URLs at Red Hat build of Keycloak and are supposed to trigger a logout at client side. 10.1.4. Red Hat build of Keycloak server OIDC URI endpoints The following is a list of OIDC endpoints that Red Hat build of Keycloak publishes. These endpoints can be used when a non-Red Hat build of Keycloak client adapter uses OIDC to communicate with the authentication server. They are all relative URLs. The root of the URL consists of the HTTP(S) protocol, hostname, and optionally the path: For example /realms/{realm-name}/protocol/openid-connect/auth Used for obtaining a temporary code in the Authorization Code Flow or obtaining tokens using the Implicit Flow, Direct Grants, or Client Grants. /realms/{realm-name}/protocol/openid-connect/token Used by the Authorization Code Flow to convert a temporary code into a token. /realms/{realm-name}/protocol/openid-connect/logout Used for performing logouts. /realms/{realm-name}/protocol/openid-connect/userinfo Used for the User Info service described in the OIDC specification. /realms/{realm-name}/protocol/openid-connect/revoke Used for OAuth 2.0 Token Revocation described in RFC7009 . /realms/{realm-name}/protocol/openid-connect/certs Used for the JSON Web Key Set (JWKS) containing the public keys used to verify any JSON Web Token (jwks_uri) /realms/{realm-name}/protocol/openid-connect/auth/device Used for Device Authorization Grant to obtain a device code and a user code. /realms/{realm-name}/protocol/openid-connect/ext/ciba/auth This is the URL endpoint for Client Initiated Backchannel Authentication Grant to obtain an auth_req_id that identifies the authentication request made by the client. /realms/{realm-name}/protocol/openid-connect/logout/backchannel-logout This is the URL endpoint for performing backchannel logouts described in the OIDC specification. In all of these, replace {realm-name} with the name of the realm. 10.2. SAML SAML 2.0 is a similar specification to OIDC but more mature. It is descended from SOAP and web service messaging specifications so is generally more verbose than OIDC. SAML 2.0 is an authentication protocol that exchanges XML documents between authentication servers and applications. XML signatures and encryption are used to verify requests and responses. In general, SAML implements two use cases. The first use case is an application that requests the Red Hat build of Keycloak server authenticates a user. Upon successful login, the application will receive an XML document. This document contains an SAML assertion that specifies user attributes. The realm digitally signs the document which contains access information (such as user role mappings) that applications use to determine the resources users are allowed to access in the application. The second use case is a client accessing remote services. The client requests a SAML assertion from Red Hat build of Keycloak to invoke on remote services on behalf of the user. 10.2.1. 
10.2.1. SAML bindings Red Hat build of Keycloak supports three binding types. 10.2.1.1. Redirect binding Redirect binding uses a series of browser redirect URIs to exchange information. A user connects to an application using a browser. The application detects the user is not authenticated. The application generates an XML authentication request document and encodes it as a query parameter in a URI. The URI is used to redirect to the Red Hat build of Keycloak server. Depending on your settings, the application can also digitally sign the XML document and include the signature as a query parameter in the redirect URI to Red Hat build of Keycloak. This signature is used to validate the client that sends the request. The browser redirects to Red Hat build of Keycloak. The server extracts the XML auth request document and verifies the digital signature, if required. The user enters their authentication credentials. After authentication, the server generates an XML authentication response document. The document contains a SAML assertion that holds metadata about the user, including name, address, email, and any role mappings the user has. The document is usually digitally signed using XML signatures, and may also be encrypted. The XML authentication response document is encoded as a query parameter in a redirect URI. The URI brings the browser back to the application. The digital signature is also included as a query parameter. The application receives the redirect URI and extracts the XML document. The application verifies the realm's signature to ensure it is receiving a valid authentication response. The information inside the SAML assertion is used to make access decisions or display user data. 10.2.1.2. POST binding POST binding is similar to Redirect binding, but POST binding exchanges XML documents using POST requests instead of GET requests. POST binding uses JavaScript to make the browser send a POST request to the Red Hat build of Keycloak server or application when exchanging documents. The HTTP response is an HTML document that contains an HTML form with embedded JavaScript. When the page loads, the JavaScript automatically submits the form. POST binding is recommended over Redirect binding for two reasons: Security - With Redirect binding, the SAML response is part of the URL. It is less secure because the response can be captured in logs. Size - Sending the document in the HTTP payload provides more scope for large amounts of data than a limited URL. 10.2.1.3. ECP Enhanced Client or Proxy (ECP) is a SAML v.2.0 profile which allows the exchange of SAML attributes outside the context of a web browser. It is often used by REST or SOAP-based clients. 10.2.2. Red Hat build of Keycloak Server SAML URI Endpoints Red Hat build of Keycloak has one endpoint for all SAML requests. http(s)://authserver.host/realms/{realm-name}/protocol/saml All bindings use this endpoint. 10.3. OpenID Connect compared to SAML The following lists a number of factors to consider when choosing a protocol. For most purposes, Red Hat build of Keycloak recommends using OIDC. OIDC OIDC is specifically designed to work with the web. OIDC is suited for HTML5/JavaScript applications because it is easier to implement on the client side than SAML. OIDC tokens are in JSON format, which makes them easier for JavaScript to consume. OIDC has features to make security implementation easier. For example, see the iframe trick that the specification uses to determine a user's login status.
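To make the Redirect binding from section 10.2.1.1 more tangible, the sketch below shows the shape of the URL an application might redirect the browser to. The host and realm are placeholders, and the encoding steps (DEFLATE compression, Base64, URL encoding) and parameter names come from the SAML 2.0 HTTP-Redirect binding specification rather than from this guide:

# Hypothetical server and realm; all bindings share the single SAML endpoint.
KC_BASE="https://keycloak.example.com"
REALM="myrealm"
SAML_ENDPOINT="${KC_BASE}/realms/${REALM}/protocol/saml"
# Placeholders: ENCODED_AUTHN_REQUEST is the AuthnRequest XML after DEFLATE
# compression, Base64 encoding and URL encoding; RelayState carries opaque
# application state; SigAlg and Signature are added when the request is signed.
ENCODED_AUTHN_REQUEST="..."
RELAY_STATE="..."
echo "${SAML_ENDPOINT}?SAMLRequest=${ENCODED_AUTHN_REQUEST}&RelayState=${RELAY_STATE}"

Under the POST binding, the same AuthnRequest travels as a Base64-encoded SAMLRequest form field rather than a query parameter, which is what allows larger payloads and keeps the message out of URLs and logs.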
SAML SAML is designed as a layer to work on top of the web. SAML can be more verbose than OIDC. Users pick SAML over OIDC because there is a perception that it is more mature. Users also pick SAML over OIDC because existing applications are already secured with it. 10.4. Docker registry v2 authentication Note Docker authentication is disabled by default. To enable Docker authentication, see the Enabling and disabling features chapter. Docker Registry V2 Authentication is a protocol, similar to OIDC, that authenticates users against Docker registries. Red Hat build of Keycloak's implementation of this protocol lets Docker clients use a Red Hat build of Keycloak authentication server to authenticate against a registry. This protocol uses standard token and signature mechanisms, but it does deviate from a true OIDC implementation. It deviates by using a very specific JSON format for requests and responses as well as by mapping repository names and permissions to the OAuth scope mechanism. 10.4.1. Docker authentication flow The authentication flow is described in the Docker API documentation. The following is a summary from the perspective of the Red Hat build of Keycloak authentication server: Perform a docker login. The Docker client requests a resource from the Docker registry. If the resource is protected and no authentication token is in the request, the Docker registry server responds with a 401 HTTP message with some information on the permissions that are required and the location of the authorization server. The Docker client constructs an authentication request based on the 401 HTTP message from the Docker registry. The client uses the locally cached credentials (from the docker login command) as part of the HTTP Basic Authentication request to the Red Hat build of Keycloak authentication server. The Red Hat build of Keycloak authentication server attempts to authenticate the user and return a JSON body containing an OAuth-style Bearer token. The Docker client receives a bearer token from the JSON response and uses it in the authorization header to request the protected resource. The Docker registry receives the new request for the protected resource with the token from the Red Hat build of Keycloak server. The registry validates the token and grants access to the requested resource (if appropriate). Note Red Hat build of Keycloak does not create a browser SSO session after successful authentication with the Docker protocol. The browser SSO session does not use the Docker protocol, as it cannot refresh tokens or obtain the status of a token or session from the Red Hat build of Keycloak server; therefore a browser SSO session is not necessary. For more details, see the transient session section. 10.4.2. Red Hat build of Keycloak Docker Registry v2 Authentication Server URI Endpoints Red Hat build of Keycloak has one endpoint for all Docker auth v2 requests. http(s)://authserver.host/realms/{realm-name}/protocol/docker-v2 | [
"kc.[sh|bat] start --spi-ciba-auth-channel-ciba-http-auth-channel-http-authentication-channel-uri=https://backend.internal.example.com",
"POST [delegation_reception]",
"POST /realms/[realm]/protocol/openid-connect/ext/ciba/auth/callback",
"https://localhost:8080"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/server_administration_guide/sso_protocols |
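To connect the Docker authentication flow described in section 10.4.1 to concrete commands, the following sketch shows what the client side of the exchange might look like. The registry, image, realm, and host names are hypothetical placeholders, and the challenge header in the comments follows the Docker Registry token specification rather than output captured from a real server.

# Hypothetical registry, image, and realm names.
docker login registry.example.com        # caches the credentials the client presents later
docker pull registry.example.com/myteam/myimage:latest
# Behind the scenes, the registry answers the first unauthenticated request with
#   HTTP/1.1 401 Unauthorized
#   Www-Authenticate: Bearer realm="https://keycloak.example.com/realms/myrealm/protocol/docker-v2", service="registry.example.com", scope="repository:myteam/myimage:pull"
# The Docker client then fetches a bearer token from that realm URL using HTTP
# Basic authentication with the cached credentials, and retries the pull with
# the token in the Authorization header.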
Chapter 4. Updating Feature packs to your JBoss EAP installation using the jboss-eap-installation-manager | Chapter 4. Updating Feature packs to your JBoss EAP installation using the jboss-eap-installation-manager 4.1. Updating Feature Packs from your JBoss EAP installation You can use the jboss-eap-installation-manager to update Feature Packs on your JBoss EAP server installation. Prerequisites The jboss-eap-installation-manager is present on your system. Your JBoss EAP installation has a feature pack installed on it. Procedure Stop the JBoss EAP server. Open the terminal emulator and navigate to the directory containing the downloaded jboss-eap-installation-manager. Update the feature packs on the server: $ ./jboss-eap-installation-manager.sh update perform --dir jboss-eap8 4.2. Updating feature packs on an offline JBoss EAP server You can use the jboss-eap-installation-manager to update Feature Packs on your JBoss EAP server installation offline. Prerequisites You have downloaded and extracted the latest JBoss EAP 8.0 repository. If required, you have downloaded the latest feature pack repository. You have added Feature Packs to your JBoss EAP installation. Procedure Stop the JBoss EAP server. Open the terminal emulator and navigate to the directory containing the downloaded jboss-eap-installation-manager. Update the feature packs on the server: $ ./jboss-eap-installation-manager.sh update perform --dir jboss-eap8 --repositories <EAP8_OFFLINE_REPO_PATH>,<FEATURE_PACK_OFFLINE_REPO> 4.3. Updating additional artifacts You can use the jboss-eap-installation-manager to update additional artifacts in your JBoss EAP installation. Note MyFaces artifacts are not provided or supported by Red Hat. Channels other than the JBoss EAP channels are not supported. Prerequisites You have an account on the Red Hat Customer Portal and are logged in. You have reviewed the supported configurations for JBoss EAP 8.0. You have installed a supported JDK. You have downloaded the jboss-eap-installation-manager. Procedure Open the terminal emulator and navigate to the directory containing jboss-eap-installation-manager. Update the manifest.yaml file of the subscribed custom channel with the new version of the myfaces artifacts: schemaVersion: 1.0.0 name: MyFaces manifest file streams: - groupId: org.apache.myfaces.core artifactId: myfaces-impl version: 4.0.1 - groupId: org.apache.myfaces.core artifactId: myfaces-api version: 4.0.1 Deploy the newly updated manifest: mvn deploy:deploy-file -Dfile=manifest.yaml \ -DgroupId=com.example.channels -DartifactId=myfaces \ -Dclassifier=manifest -Dpackaging=yaml -Dversion=1.0.1 \ -Durl=file:/path/to/local/repository Stop the JBoss EAP server. Update the artifacts: $ ./jboss-eap-installation-manager.sh update perform --dir jboss-eap8 Updating server: /tmp/jboss/jboss-eap-8.0 Updates found: org.apache.myfaces.core:myfaces-api 4.0.0 ==> 4.0.1 org.apache.myfaces.core:myfaces-impl 4.0.0 ==> 4.0.1 Continue with update [y/N]: y Building updates Feature-packs resolved. Packages installed. Downloaded artifacts. JBoss modules installed. Configurations generated. JBoss examples installed. Build update complete! Applying updates Update complete! Operation completed in 21.48 seconds. | [
"./jboss-eap-installation-manager.sh update perform --dir jboss-eap8",
"./jboss-eap-installation-manager.sh update perform --dir jboss-eap8 --repositories <EAP8_OFFLINE_REPO_PATH>,<FEATURE_PACK_OFFLINE_REPO>",
"schemaVersion: 1.0.0 name: MyFaces manifest file streams: - groupId: org.apache.myfaces.core artifactId: myfaces-impl version: 4.0.1 - groupId: org.apache.myfaces.core artifactId: myfaces-api version: 4.0.1",
"mvn deploy:deploy-file -Dfile=manifest.yaml -DgroupId=com.example.channels -DartifactId=myfaces -Dclassifier=manifest -Dpackaging=yaml -Dversion=1.0.1 -Durl=file:/path/to/local/repository",
"./jboss-eap-installation-manager.sh update perform --dir jboss-eap8 Updating server: /tmp/jboss/jboss-eap-8.0 Updates found: org.apache.myfaces.core:myfaces-api 4.0.0 ==> 4.0.1 org.apache.myfaces.core:myfaces-impl 4.0.0 ==> 4.0.1 Continue with update [y/N]: y Building updates Feature-packs resolved. Packages installed. Downloaded artifacts. JBoss modules installed. Configurations generated. JBoss examples installed. Build update complete! Applying updates Update complete! Operation completed in 21.48 seconds."
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/updating_red_hat_jboss_enterprise_application_platform/updating-feature-packs-to-your-jboss-eap-installation-using-the-jboss-eap-installation-manager_default |