9.8. Import Metadata From Text File
9.8. Import Metadata From Text File 9.8.1. Import Metadata From Text File The Teiid Designer provides various import options for parsing comma-delimited text file metadata into models. This is accomplished via the Import > Teiid Designer > Designer Text File >> Source or View Models option. In Teiid Designer , click the File > Import... action. Select the import option Teiid Designer > Designer Text File >> Source or View Models and click > . Select an import type from the drop-down menu shown below. Figure 9.29. Import Wizard The steps required for each type are defined below: Relational Model Text Import Relational Table Text Import Virtual Table Text Import 9.8.2. Import Relational Model (XML Format) To create relational tables from imported XML text file metadata, perform steps 1 to 3 from the Import Metadata from Text File section and then perform the following steps: Select the Relational Model (XML Format) import type, then click > . Figure 9.30. Select Import Type - Relational Model (XML Format) On the page, select the XML file on your local file system via the Browse... button. Select a target model to which the imported relational objects will be added via the second Browse... button. The dialog allows selecting an existing relational model or creating a new model. Note that the contents of your selected XML file will be displayed in the File Contents viewer. Click Finish . Figure 9.31. Select Source Text File and Target Relational Model Page If the target model contains named children (tables, views, procedures) that conflict with the objects being imported, a dialog will be displayed giving you options on how to proceed, including replacing specific existing objects, creating new objects with the same names, or canceling the import entirely. Figure 9.32. Duplicate Objects Dialog 9.8.3. Import Relational Tables (CSV Format) To create relational tables from imported text file metadata, perform steps 1 to 3 from the Import Metadata from Text File section and then perform the following steps: Select the Relational Tables (CSV Format) import type, then click > . Figure 9.33. Select Import Type - Relational Tables (CSV Format) On this page, you need to provide a source text file containing the metadata formatted to the specifications on the page. Figure 9.34. Select Source Text File and Target Relational Model Select an existing relational model as the target location for your new relational components using the Browse... button to open the Relational Model Selector dialog. Select a relational model from your workspace or specify a unique name to create a new model. Select any additional options and click Finish . 9.8.4. Import Relational View Tables (CSV Format) To create relational virtual tables from imported text file metadata, perform steps 1 to 3 from the Import Metadata from Text File section and then perform the following steps: Select the Relational Virtual Tables (CSV Format) import type, then click > . Figure 9.35. Select Import Type - Relational Virtual Tables (CSV Format) On this page, you need to provide a source text file containing the metadata formatted to the specifications on the page. Figure 9.36. Select Source Text File and Target Virtual Relational Model Select an existing relational virtual model as the target location for your new model components using the Browse... button to open the Virtual Model Selector dialog. Select a virtual relational model from your workspace or specify a unique name to create a new model. Click Finish .
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/sect-import_metadata_from_text_file
Chapter 4. Updating Red Hat OpenShift Data Foundation 4.15.x to 4.15.y
Chapter 4. Updating Red Hat OpenShift Data Foundation 4.15.x to 4.15.y This chapter helps you to upgrade between z-stream releases for all Red Hat OpenShift Data Foundation deployments (Internal, Internal-attached, and External). The upgrade process remains the same for all deployments. The only difference is what gets upgraded and what does not. For Internal and Internal-attached deployments, upgrading OpenShift Data Foundation upgrades all OpenShift Data Foundation services, including the backend Red Hat Ceph Storage (RHCS) cluster. For External mode deployments, upgrading OpenShift Data Foundation only upgrades the OpenShift Data Foundation service, while the backend Ceph storage cluster remains untouched and needs to be upgraded separately. Hence, it is recommended to upgrade RHCS along with OpenShift Data Foundation in order to get new feature support, security fixes, and other bug fixes. Since there is no strong dependency on the RHCS upgrade, you can upgrade the OpenShift Data Foundation operator first, followed by the RHCS upgrade, or vice versa. See the solution article to learn more about RHCS releases. When a new z-stream release becomes available, the upgrade process triggers automatically if the update strategy is set to Automatic . If the update strategy is set to Manual , use the following procedure. Prerequisites Ensure that the OpenShift Container Platform cluster has been updated to the latest stable release of version 4.15.X, see Updating Clusters . Ensure that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. Ensure that all OpenShift Data Foundation Pods, including the operator pods, are in the Running state in the openshift-storage namespace. To view the state of the pods, on the OpenShift Web Console, click Workloads Pods . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Ensure that you have sufficient time to complete the OpenShift Data Foundation update process, as the update time varies depending on the number of OSDs that run in the cluster. Procedure On the OpenShift Web Console, navigate to Operators Installed Operators . Select the openshift-storage project. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Click the Subscription tab. If the Upgrade Status shows requires approval , click the requires approval link. On the InstallPlan Details page, click Preview Install Plan . Review the install plan and click Approve . Wait for the Status to change from Unknown to Created . After the operator is successfully upgraded, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console in this pop-up for the console changes to take effect. Verification steps Check the Version below the OpenShift Data Foundation name and check the operator status. Navigate to Operators Installed Operators and select the openshift-storage project. When the upgrade completes, the version updates to a new version number for OpenShift Data Foundation and the status changes to Succeeded with a green tick.
Verify that the OpenShift Data Foundation cluster is healthy and data is resilient. Navigate to Storage Data Foundation Storage Systems tab and then click on the storage system name. Check for the green tick on the status card of the Overview - Block and File and Object tabs. A green tick indicates that the storage cluster, object service, and data resiliency are healthy. If verification steps fail, contact Red Hat Support .
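The console checks above can also be approximated from the command line with oc. This is a minimal sketch, assuming the default openshift-storage namespace and a user with sufficient privileges; it supplements rather than replaces the console verification steps.

# List the OpenShift Data Foundation pods and confirm they are Running or Completed
oc get pods -n openshift-storage

# Confirm the operator ClusterServiceVersion shows the new version with a Succeeded phase
oc get csv -n openshift-storage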
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/updating_openshift_data_foundation/updating-zstream-odf_rhodf
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/deploying_the_red_hat_ansible_automation_platform_operator_on_openshift_container_platform/providing-feedback
3.6. Suspending an XFS File System
3.6. Suspending an XFS File System To suspend or resume write activity to a file system, use the following command: Suspending write activity allows hardware-based device snapshots to be used to capture the file system in a consistent state. Note The xfs_freeze utility is provided by the xfsprogs package, which is only available on x86_64. To suspend (that is, freeze) an XFS file system, use: To unfreeze an XFS file system, use: When taking an LVM snapshot, it is not necessary to use xfs_freeze to suspend the file system first. Rather, the LVM management tools will automatically suspend the XFS file system before taking the snapshot. For more information about freezing and unfreezing an XFS file system, see man xfs_freeze .
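The following is a minimal sketch of the freeze, snapshot, and unfreeze sequence described above. The mount point /mount/point is a placeholder, and the snapshot step is only indicated by a comment because the actual command depends on your storage hardware or array management tool.

# Freeze the XFS file system so its on-disk state is consistent
xfs_freeze -f /mount/point

# Take the hardware-based device snapshot here using your vendor's tool (placeholder step)

# Resume write activity once the snapshot has completed
xfs_freeze -u /mount/point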
[ "xfs_freeze mount-point", "xfs_freeze -f /mount/point", "xfs_freeze -u /mount/point" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/xfsfreeze
Preface
Preface This feature is available in this release as a Technology Preview , and therefore is not fully supported by Red Hat. It should only be used for testing, and should not be deployed in a production environment. For more information about Technology Preview features, see Scope of Coverage Details .
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/quick_start_guide/pr01
Chapter 2. Important Changes to External Kernel Parameters
Chapter 2. Important Changes to External Kernel Parameters This chapter provides system administrators with a summary of significant changes in the kernel shipped with Red Hat Enterprise Linux 6.8. These changes include added or updated proc entries, sysctl , and sysfs default values, boot parameters, kernel configuration options, or any noticeable behavior changes. force_hrtimer_reprogram [KNL] Force the reprogramming of expired timers in the hrtimer_reprogram() function. softirq_2ms_loop [KNL] Set softirq handling to 2 ms maximum. The default time is the existing Red Hat Enterprise Linux 6 behavior. tpm_suspend_pcr=[HW,TPM] Specify that, at suspend time, the tpm driver should extend the specified Platform Configuration Register ( PCR ) with zeros as a workaround for some chips which fail to flush the last written PCR on a TPM_SaveState operation. This guarantees that all the other PCR s are saved. Format: integer pcr id /proc/fs/fscache/stats Table 2.1. class Ops: new: ini=N Number of async ops initialised changed: rel=N will be equal to ini=N when idle Table 2.2. new class CacheEv nsp=N Number of object lookups or creations rejected due to a lack of space stl=N Number of stale objects deleted rtr=N Number of objects retired when relinquished cul=N Number of objects culled /proc/sys/net/core/default_qdisc The default queuing discipline to use for network devices. This allows overriding the default queue discipline of pfifo_fast with an alternative. Since the default queuing discipline is created with no additional parameters, it is best suited to queuing disciplines that work well without configuration, for example, a stochastic fair queue ( sfq ). Do not use queuing disciplines like Hierarchical Token Bucket or Deficit Round Robin, which require setting up classes and bandwidths. Default: pfifo_fast /sys/kernel/mm/ksm/max_page_sharing Maximum sharing allowed for each KSM page. This enforces a deduplication limit to prevent the virtual memory rmap lists from growing too large. The minimum value is 2 as a newly created KSM page will have at least two sharers. The rmap walk has O(N) complexity where N is the number of rmap_items , that is virtual mappings that are sharing the page, which is in turn capped by max_page_sharing . So this effectively spreads the linear O(N) computational complexity from rmap walk context over different KSM pages. The ksmd walk over the stable_node chains is also O(N), but N is the number of stable_node dups , not the number of rmap_items , so it does not have a significant impact on ksmd performance. In practice the best stable_node dups candidate is kept and found at the head of the dups list. The higher this value, the faster KSM merges the memory (because there will be fewer stable_node dups queued into the stable_node chain->hlist to check for pruning) and the higher the deduplication factor is, but the slower the worst-case rmap walk can be for any given KSM page. Slowing down the rmap walk means there will be higher latency for certain virtual memory operations happening during swapping, compaction, NUMA balancing, and page migration, in turn decreasing responsiveness for the caller of those virtual memory operations. The scheduler latency of other tasks not involved with the VM operations doing the rmap walk is not affected by this parameter, as the rmap walks themselves are always scheduler friendly.
/sys/kernel/mm/ksm/stable_node_chains_prune_millisecs How frequently to walk the whole list of stable_node "dups" linked in the stable_node chains in order to prune stale stable_node entries. Smaller millisecond values will free up the KSM metadata with lower latency, but they will make ksmd use more CPU during the scan. This only applies to the stable_node chains, so it is a no-op until at least one KSM page hits max_page_sharing ; before that point there are no stable_node chains. /sys/kernel/mm/ksm/stable_node_chains Number of stable node chains allocated. This is effectively the number of KSM pages that hit the max_page_sharing limit. /sys/kernel/mm/ksm/stable_node_dups Number of stable node dups queued into the stable_node chains.
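As an illustration of how these tunables are inspected and changed at runtime, the following commands are a minimal sketch; they assume a kernel that exposes the entries described above and root privileges. Values set with sysctl -w do not persist across reboots unless also added to /etc/sysctl.conf.

# Show and change the default queuing discipline
sysctl net.core.default_qdisc
sysctl -w net.core.default_qdisc=sfq

# Inspect the KSM deduplication limit and the stable_node chain statistics
cat /sys/kernel/mm/ksm/max_page_sharing
cat /sys/kernel/mm/ksm/stable_node_chains
cat /sys/kernel/mm/ksm/stable_node_dups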
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.8_technical_notes/chap-red_hat_enterprise_linux-6.8_technical_notes-kernel_parameters_changes
Chapter 11. Messaging Endpoints
Chapter 11. Messaging Endpoints Abstract The messaging endpoint patterns describe various features and qualities of service that can be configured on an endpoint. 11.1. Messaging Mapper Overview The messaging mapper pattern describes how to map domain objects to and from a canonical message format, where the message format is chosen to be as platform neutral as possible. The chosen message format should be suitable for transmission through a Section 6.5, "Message Bus" , where the message bus is the backbone for integrating a variety of different systems, some of which might not be object-oriented. Many different approaches are possible, but not all of them fulfill the requirements of a messaging mapper. For example, an obvious way to transmit an object is to use object serialization , which enables you to write an object to a data stream using an unambiguous encoding (supported natively in Java). However, this is not a suitable approach to use for the messaging mapper pattern because the serialization format is understood only by Java applications. Java object serialization creates an impedance mismatch between the original application and the other applications in the messaging system. The requirements for a messaging mapper can be summarized as follows: The canonical message format used to transmit domain objects should be suitable for consumption by non-object oriented applications. The mapper code should be implemented separately from both the domain object code and the messaging infrastructure. Apache Camel helps fulfill this requirement by providing hooks that can be used to insert mapper code into a route. The mapper might need to find an effective way of dealing with certain object-oriented concepts such as inheritance, object references, and object trees. The complexity of these issues varies from application to application, but the aim of the mapper implementation must always be to create messages that can be processed effectively by non-object-oriented applications. Finding objects to map You can use one of the following mechanisms to find the objects to map: Find a registered bean. - For singleton objects and small numbers of objects, you could use the CamelContext registry to store references to beans. For example, if a bean instance is instantiated using Spring XML, it is automatically entered into the registry, where the bean is identified by the value of its id attribute. Select objects using the JoSQL language. - If all of the objects you want to access are already instantiated at runtime, you could use the JoSQL language to locate a specific object (or objects). For example, if you have a class, org.apache.camel.builder.sql.Person , with a name bean property and the incoming message has a UserName header, you could select the object whose name property equals the value of the UserName header using the following code: Where the syntax, : HeaderName , is used to substitute the value of a header in a JoSQL expression. Dynamic - For a more scalable solution, it might be necessary to read object data from a database. In some cases, the existing object-oriented application might already provide a finder object that can load objects from the database. In other cases, you might have to write some custom code to extract objects from a database, and in these cases the JDBC component and the SQL component might be useful. 11.2. 
Event Driven Consumer Overview The event-driven consumer pattern, shown in Figure 11.1, "Event Driven Consumer Pattern" , is a pattern for implementing the consumer endpoint in an Apache Camel component, and is only relevant to programmers who need to develop a custom component in Apache Camel. Existing components already have a consumer implementation pattern hard-wired into them. Figure 11.1. Event Driven Consumer Pattern Consumers that conform to this pattern provide an event method that is automatically called by the messaging channel or transport layer whenever an incoming message is received. One of the characteristics of the event-driven consumer pattern is that the consumer endpoint itself does not provide any threads to process the incoming messages. Instead, the underlying transport or messaging channel implicitly provides a processor thread when it invokes the exposed event method (which blocks for the duration of the message processing). For more details about this implementation pattern, see Section 38.1.3, "Consumer Patterns and Threading" and Chapter 41, Consumer Interface . 11.3. Polling Consumer Overview The polling consumer pattern, shown in Figure 11.2, "Polling Consumer Pattern" , is a pattern for implementing the consumer endpoint in an Apache Camel component, so it is only relevant to programmers who need to develop a custom component in Apache Camel. Existing components already have a consumer implementation pattern hard-wired into them. Consumers that conform to this pattern expose polling methods, receive() , receive(long timeout) , and receiveNoWait() , which return a new exchange object if one is available from the monitored resource. A polling consumer implementation must provide its own thread pool to perform the polling. For more details about this implementation pattern, see Section 38.1.3, "Consumer Patterns and Threading" , Chapter 41, Consumer Interface , and Section 37.3, "Using the Consumer Template" . Figure 11.2. Polling Consumer Pattern Scheduled poll consumer Many of the Apache Camel consumer endpoints employ a scheduled poll pattern to receive messages at the start of a route. That is, the endpoint appears to implement an event-driven consumer interface, but internally a scheduled poll is used to monitor a resource that provides the incoming messages for the endpoint. See Section 41.2, "Implementing the Consumer Interface" for details of how to implement this pattern. Quartz component You can use the quartz component to provide scheduled delivery of messages using the Quartz enterprise scheduler. See Quartz in the Apache Camel Component Reference Guide and Quartz Component for details. 11.4. Competing Consumers Overview The competing consumers pattern, shown in Figure 11.3, "Competing Consumers Pattern" , enables multiple consumers to pull messages from the same queue, with the guarantee that each message is consumed once only . This pattern can be used to replace serial message processing with concurrent message processing (bringing a corresponding reduction in response latency). Figure 11.3. Competing Consumers Pattern The following components demonstrate the competing consumers pattern: JMS based competing consumers SEDA based competing consumers JMS based competing consumers A regular JMS queue implicitly guarantees that each message can be consumed only once. Hence, a JMS queue automatically supports the competing consumers pattern.
For example, you could define three competing consumers that pull messages from the JMS queue, HighVolumeQ , as follows: Where the CXF (Web services) endpoints, replica01 , replica02 , and replica03 , process messages from the HighVolumeQ queue in parallel. Alternatively, you can set the JMS query option, concurrentConsumers , to create a thread pool of competing consumers. For example, the following route creates a pool of three competing threads that pick messages from the specified queue: And the concurrentConsumers option can also be specified in XML DSL, as follows: Note JMS topics cannot support the competing consumers pattern. By definition, a JMS topic is intended to send multiple copies of the same message to different consumers. Therefore, it is not compatible with the competing consumers pattern. SEDA based competing consumers The purpose of the SEDA component is to simplify concurrent processing by breaking the computation into stages. A SEDA endpoint essentially encapsulates an in-memory blocking queue (implemented by java.util.concurrent.BlockingQueue ). Therefore, you can use a SEDA endpoint to break a route into stages, where each stage might use multiple threads. For example, you can define a SEDA route consisting of two stages, as follows: Where the first stage contains a single thread that consumes messages from a file endpoint, file://var/messages , and routes them to a SEDA endpoint, seda:fanout . The second stage contains three threads: a thread that routes exchanges to cxf:bean:replica01 , a thread that routes exchanges to cxf:bean:replica02 , and a thread that routes exchanges to cxf:bean:replica03 . These three threads compete to take exchange instances from the SEDA endpoint, which is implemented using a blocking queue. Because the blocking queue uses locking to prevent more than one thread from accessing the queue at a time, you are guaranteed that each exchange instance can only be consumed once. For a discussion of the differences between a SEDA endpoint and a thread pool created by thread() , see SEDA component in the Apache Camel Component Reference Guide . 11.5. Message Dispatcher Overview The message dispatcher pattern, shown in Figure 11.4, "Message Dispatcher Pattern" , is used to consume messages from a channel and then distribute them locally to performers , which are responsible for processing the messages. In an Apache Camel application, performers are usually represented by in-process endpoints, which are used to transfer messages to another section of the route. Figure 11.4. Message Dispatcher Pattern You can implement the message dispatcher pattern in Apache Camel using one of the following approaches: JMS selectors JMS selectors in ActiveMQ Content-based router JMS selectors If your application consumes messages from a JMS queue, you can implement the message dispatcher pattern using JMS selectors . A JMS selector is a predicate expression involving JMS headers and JMS properties. If the selector evaluates to true , the JMS message is allowed to reach the consumer, and if the selector evaluates to false , the JMS message is blocked. In many respects, a JMS selector is like a Section 8.2, "Message Filter" , but it has the additional advantage that the filtering is implemented inside the JMS provider. This means that a JMS selector can block messages before they are transmitted to the Apache Camel application. This provides a significant efficiency advantage.
In Apache Camel, you can define a JMS selector on a consumer endpoint by setting the selector query option on a JMS endpoint URI. For example: Where the predicates that appear in a selector string are based on a subset of the SQL92 conditional expression syntax (for full details, see the JMS specification ). The identifiers appearing in a selector string can refer either to JMS headers or to JMS properties. For example, in the preceding routes, the sender sets a JMS property called CountryCode . If you want to add a JMS property to a message from within your Apache Camel application, you can do so by setting a message header (either on In message or on Out messages). When reading or writing to JMS endpoints, Apache Camel maps JMS headers and JMS properties to, and from, its native message headers. Technically, the selector strings must be URL encoded according to the application/x-www-form-urlencoded MIME format (see the HTML specification ). In practice, the & (ampersand) character might cause difficulties because it is used to delimit each query option in the URI. For more complex selector strings that might need to embed the & character, you can encode the strings using the java.net.URLEncoder utility class. For example: Where the UTF-8 encoding must be used. JMS selectors in ActiveMQ You can also define JMS selectors on ActiveMQ endpoints. For example: For more details, see ActiveMQ: JMS Selectors and ActiveMQ Message Properties . Content-based router The essential difference between the content-based router pattern and the message dispatcher pattern is that a content-based router dispatches messages to physically separate destinations (remote endpoints), and a message dispatcher dispatches messages locally, within the same process space. In Apache Camel, the distinction between these two patterns is determined by the target endpoint. The same router logic is used to implement both a content-based router and a message dispatcher. When the target endpoint is remote, the route defines a content-based router. When the target endpoint is in-process, the route defines a message dispatcher. For details and examples of how to use the content-based router pattern see Section 8.1, "Content-Based Router" . 11.6. Selective Consumer Overview The selective consumer pattern, shown in Figure 11.5, "Selective Consumer Pattern" , describes a consumer that applies a filter to incoming messages, so that only messages meeting specific selection criteria are processed. Figure 11.5. Selective Consumer Pattern You can implement the selective consumer pattern in Apache Camel using one of the following approaches: JMS selector JMS selector in ActiveMQ Message filter JMS selector A JMS selector is a predicate expression involving JMS headers and JMS properties. If the selector evaluates to true , the JMS message is allowed to reach the consumer, and if the selector evaluates to false , the JMS message is blocked. For example, to consume messages from the queue, selective , and select only those messages whose country code property is equal to US , you can use the following Java DSL route: Where the selector string, CountryCode='US' , must be URL encoded (using UTF-8 characters) to avoid trouble with parsing the query options. This example presumes that the JMS property, CountryCode , is set by the sender. For more details about JMS selectors, see the section called "JMS selectors" . 
Note If a selector is applied to a JMS queue, messages that are not selected remain on the queue and are potentially available to other consumers attached to the same queue. JMS selector in ActiveMQ You can also define JMS selectors on ActiveMQ endpoints. For example: For more details, see ActiveMQ: JMS Selectors and ActiveMQ Message Properties . Message filter If it is not possible to set a selector on the consumer endpoint, you can insert a filter processor into your route instead. For example, you can define a selective consumer that processes only messages with a US country code using Java DSL, as follows: The same route can be defined using XML configuration, as follows: For more information about the Apache Camel filter processor, see Section 8.2, "Message Filter" . Warning Be careful about using a message filter to select messages from a JMS queue . When using a filter processor, blocked messages are simply discarded. Hence, if the messages are consumed from a queue (which allows each message to be consumed only once - see Section 11.4, "Competing Consumers" ), then blocked messages are not processed at all. This might not be the behavior you want. 11.7. Durable Subscriber Overview A durable subscriber , as shown in Figure 11.6, "Durable Subscriber Pattern" , is a consumer that wants to receive all of the messages sent over a particular Section 6.2, "Publish-Subscribe Channel" channel, including messages sent while the consumer is disconnected from the messaging system. This requires the messaging system to store messages for later replay to the disconnected consumer. There also has to be a mechanism for a consumer to indicate that it wants to establish a durable subscription. Generally, a publish-subscribe channel (or topic) can have both durable and non-durable subscribers, which behave as follows: non-durable subscriber - Can have two states: connected and disconnected . While a non-durable subscriber is connected to a topic, it receives all of the topic's messages in real time. However, a non-durable subscriber never receives messages sent to the topic while the subscriber is disconnected. durable subscriber - Can have two states: connected and inactive . The inactive state means that the durable subscriber is disconnected from the topic, but wants to receive the messages that arrive in the interim. When the durable subscriber reconnects to the topic, it receives a replay of all the messages sent while it was inactive. Figure 11.6. Durable Subscriber Pattern JMS durable subscriber The JMS component implements the durable subscriber pattern. In order to set up a durable subscription on a JMS endpoint, you must specify a client ID , which identifies this particular connection, and a durable subscription name , which identifies the durable subscriber. For example, the following route sets up a durable subscription to the JMS topic, news , with a client ID of conn01 and a durable subscription name of John.Doe : You can also set up a durable subscription using the ActiveMQ endpoint: If you want to process the incoming messages concurrently, you can use a SEDA endpoint to fan out the route into multiple, parallel segments, as follows: Where each message is processed only once, because the SEDA component supports the competing consumers pattern. Alternative example Another alternative is to combine the Section 11.5, "Message Dispatcher" or Section 8.1, "Content-Based Router" pattern with File or JPA components for the durable subscribers, and then something like SEDA for the non-durable subscribers.
Here is a simple example of creating durable subscribers to a JMS topic, shown first using the Fluent Builders and then using the Spring XML Extensions. Here is another example of JMS durable subscribers, but this time using virtual topics (recommended by AMQ over durable subscriptions), again shown using the Fluent Builders and the Spring XML Extensions. 11.8. Idempotent Consumer Overview The idempotent consumer pattern is used to filter out duplicate messages. For example, consider a scenario where the connection between a messaging system and a consumer endpoint is abruptly lost due to some fault in the system. If the messaging system was in the middle of transmitting a message, it might be unclear whether or not the consumer received the last message. To improve delivery reliability, the messaging system might decide to redeliver such messages as soon as the connection is re-established. Unfortunately, this entails the risk that the consumer might receive duplicate messages and, in some cases, the effect of duplicating a message may have undesirable consequences (such as debiting a sum of money twice from your account). In this scenario, an idempotent consumer could be used to weed out undesired duplicates from the message stream. Camel provides the following Idempotent Consumer implementations: MemoryIdempotentRepository KafkaIdempotentRepository File Hazelcast SQL JPA Idempotent consumer with in-memory cache In Apache Camel, the idempotent consumer pattern is implemented by the idempotentConsumer() processor, which takes two arguments: messageIdExpression - An expression that returns a message ID string for the current message. messageIdRepository - A reference to a message ID repository, which stores the IDs of all the messages received. As each message comes in, the idempotent consumer processor looks up the current message ID in the repository to see if this message has been seen before. If yes, the message is discarded; if no, the message is allowed to pass and its ID is added to the repository. The code shown in Example 11.1, "Filtering Duplicate Messages with an In-memory Cache" uses the TransactionID header to filter out duplicates. Example 11.1. Filtering Duplicate Messages with an In-memory Cache Where the call to memoryMessageIdRepository(200) creates an in-memory cache that can hold up to 200 message IDs. You can also define an idempotent consumer using XML configuration. For example, you can define the preceding route in XML, as follows: Note From Camel 2.17, Idempotent Repository supports optional serialized headers. Idempotent consumer with JPA repository The in-memory cache suffers from the disadvantages of easily running out of memory and not working in a clustered environment. To overcome these disadvantages, you can use a Java Persistence API (JPA) based repository instead. The JPA message ID repository uses an object-oriented database to store the message IDs. For example, you can define a route that uses a JPA repository for the idempotent consumer, as follows: The JPA message ID repository is initialized with two arguments: JpaTemplate instance - Provides the handle for the JPA database. processor name - Identifies the current idempotent consumer processor. The SpringRouteBuilder.bean() method is a shortcut that references a bean defined in the Spring XML file. The JpaTemplate bean provides a handle to the underlying JPA database. See the JPA documentation for details of how to configure this bean.
For more details about setting up a JPA repository, see the JPA Component documentation, the Spring JPA documentation, and the sample code in the Camel JPA unit test . Spring XML example The following example uses the myMessageId header to filter out duplicates: Idempotent consumer with JDBC repository A JDBC repository is also supported for storing message IDs in the idempotent consumer pattern. The implementation of the JDBC repository is provided by the SQL component, so if you are using the Maven build system, add a dependency on the camel-sql artifact. You can use the SingleConnectionDataSource JDBC wrapper class from the Spring persistence API in order to instantiate the connection to a SQL database. For example, to instantiate a JDBC connection to a HyperSQL database instance, you could define the following JDBC data source: Note The preceding JDBC data source uses the HyperSQL mem protocol, which creates a memory-only database instance. This is a toy implementation of the HyperSQL database which is not actually persistent. Using the preceding data source, you can define an idempotent consumer pattern that uses the JDBC message ID repository, as follows: How to handle duplicate messages in the route Available as of Camel 2.8 You can now set the skipDuplicate option to false , which instructs the idempotent consumer to route duplicate messages as well. However, the duplicate message has been marked as a duplicate by having an exchange property (see the section called "Exchanges") set to true. We can leverage this fact by using a Section 8.1, "Content-Based Router" or Section 8.2, "Message Filter" to detect this and handle duplicate messages. In the following example, we use the Section 8.2, "Message Filter" to send the message to a duplicate endpoint, and then stop routing that message. The same example in XML DSL would be: How to handle duplicate messages in a clustered environment with a data grid If you are running Camel in a clustered environment, an in-memory idempotent repository does not work (see above). You can either set up a central database or use the idempotent consumer implementation based on the Hazelcast data grid. Hazelcast finds the nodes over multicast (which is the default - configure Hazelcast for tcp-ip if needed) and automatically creates a map-based repository: You have to define how long the repository should hold each message id (the default is to never delete it). To avoid running out of memory, you should create an eviction strategy based on the Hazelcast configuration . For additional information see Hazelcast . See the Idempotent Repository tutorial ( http://camel.apache.org/hazelcast-idempotent-repository-tutorial.html ) to learn more about how to set up such an idempotent repository on two cluster nodes using Apache Karaf. Options The Idempotent Consumer has the following options: Option Default Description eager true Camel 2.0: Eager controls whether Camel adds the message to the repository before or after the exchange has been processed. If enabled, Camel will be able to detect duplicate messages even when messages are currently in progress. If disabled, Camel will only detect duplicates after a message has been successfully processed. messageIdRepositoryRef null A reference to an IdempotentRepository to look up in the registry. This option is mandatory when using XML DSL. skipDuplicate true Camel 2.8: Sets whether to skip duplicate messages. If set to false , then the message will be continued.
However, the exchange (see the section called "Exchanges") has been marked as a duplicate by having the Exchange.DUPLICATE_MESSAGE exchange property set to a Boolean.TRUE value. completionEager false Camel 2.16: Sets whether to complete the Idempotent Consumer eagerly when the exchange is done. If you set the completeEager option to true, then the Idempotent Consumer triggers its completion when the exchange reaches the end of the idempotent consumer pattern block. However, if the exchange continues to route even after the end of the block, then it does not affect the state of the idempotent consumer. If you set the completeEager option to false, then the Idempotent Consumer triggers its completion after the exchange is done being routed. However, if the exchange continues to route even after the block ends, then it also affects the state of the idempotent consumer. For example, if the exchange fails due to an exception, then the state of the idempotent consumer will be rolled back. 11.9. Transactional Client Overview The transactional client pattern, shown in Figure 11.7, "Transactional Client Pattern" , refers to messaging endpoints that can participate in a transaction. Apache Camel supports transactions using Spring transaction management . Figure 11.7. Transactional Client Pattern Transaction oriented endpoints Not all Apache Camel endpoints support transactions. Those that do are called transaction oriented endpoints (or TOEs). For example, both the JMS component and the ActiveMQ component support transactions. To enable transactions on a component, you must perform the appropriate initialization before adding the component to the CamelContext . This entails writing code to initialize your transactional components explicitly. References The details of configuring transactions in Apache Camel are beyond the scope of this guide. For full details of how to use transactions, see the Apache Camel Transaction Guide . 11.10. Messaging Gateway Overview The messaging gateway pattern, shown in Figure 11.8, "Messaging Gateway Pattern" , describes an approach to integrating with a messaging system, where the messaging system's API remains hidden from the programmer at the application level. One of the more common examples is when you want to translate synchronous method calls into request/reply message exchanges, without the programmer being aware of this. Figure 11.8. Messaging Gateway Pattern The following Apache Camel components provide this kind of integration with the messaging system: CXF Bean component 11.11. Service Activator Overview The service activator pattern, shown in Figure 11.9, "Service Activator Pattern" , describes the scenario where a service's operations are invoked in response to an incoming request message. The service activator identifies which operation to call and extracts the data to use as the operation's parameters. Finally, the service activator invokes an operation using the data extracted from the message. The operation invocation can be either one-way (request only) or two-way (request/reply). Figure 11.9. Service Activator Pattern In many respects, a service activator resembles a conventional remote procedure call (RPC), where operation invocations are encoded as messages. The main difference is that a service activator needs to be more flexible.
An RPC framework standardizes the request and reply message encodings (for example, Web service operations are encoded as SOAP messages), whereas a service activator typically needs to improvise the mapping between the messaging system and the service's operations. Bean integration The main mechanism that Apache Camel provides to support the service activator pattern is bean integration . Bean integration provides a general framework for mapping incoming messages to method invocations on Java objects. For example, the Java fluent DSL provides the processors bean() and beanRef() that you can insert into a route to invoke methods on a registered Java bean. The detailed mapping of message data to Java method parameters is determined by the bean binding , which can be implemented by adding annotations to the bean class. For example, consider the following route which calls the Java method, BankBean.getUserAccBalance() , to service requests incoming on a JMS/ActiveMQ queue: The messages pulled from the ActiveMQ endpoint, activemq:BalanceQueries , have a simple XML format that provides the user ID of a bank account. For example: The first processor in the route, setProperty() , extracts the user ID from the In message and stores it in the userid exchange property. This is preferable to storing it in a header, because the In headers are not available after invoking the bean. The service activation step is performed by the beanRef() processor, which binds the incoming message to the getUserAccBalance() method on the Java object identified by the bankBean bean ID. The following code shows a sample implementation of the BankBean class: Where the binding of message data to the method parameter is enabled by the @XPath annotation, which injects the content of the UserID XML element into the user method parameter. On completion of the call, the return value is inserted into the body of the Out message, which is then copied into the In message for the next step in the route. In order for the bean to be accessible to the beanRef() processor, you must instantiate an instance in Spring XML. For example, you can add the following lines to the META-INF/spring/camel-context.xml configuration file to instantiate the bean: Where the bean ID, bankBean , identifies this bean instance in the registry. The output of the bean invocation is injected into a Velocity template, to produce a properly formatted result message. The Velocity endpoint, velocity:file:src/scripts/acc_balance.vm , specifies the location of a Velocity script with the following contents: The exchange instance is available as the Velocity variable, exchange , which enables you to retrieve the userid exchange property, using ${exchange.getProperty("userid")} . The body of the current In message, ${body} , contains the result of the getUserAccBalance() method invocation.
[ "import static org.apache.camel.builder.sql.SqlBuilder.sql; import org.apache.camel.Expression; Expression expression = sql(\"SELECT * FROM org.apache.camel.builder.sql.Person where name = :UserName\"); Object value = expression.evaluate(exchange);", "from(\"jms:HighVolumeQ\").to(\"cxf:bean:replica01\"); from(\"jms:HighVolumeQ\").to(\"cxf:bean:replica02\"); from(\"jms:HighVolumeQ\").to(\"cxf:bean:replica03\");", "from(\"jms:HighVolumeQ?concurrentConsumers=3\").to(\"cxf:bean:replica01\");", "<route> <from uri=\"jms:HighVolumeQ?concurrentConsumers=3\"/> <to uri=\"cxf:bean:replica01\"/> </route>", "// Stage 1: Read messages from file system. from(\"file://var/messages\").to(\"seda:fanout\"); // Stage 2: Perform concurrent processing (3 threads). from(\"seda:fanout\").to(\"cxf:bean:replica01\"); from(\"seda:fanout\").to(\"cxf:bean:replica02\"); from(\"seda:fanout\").to(\"cxf:bean:replica03\");", "from(\"jms:dispatcher?selector=CountryCode='US'\").to(\"cxf:bean:replica01\"); from(\"jms:dispatcher?selector=CountryCode='IE'\").to(\"cxf:bean:replica02\"); from(\"jms:dispatcher?selector=CountryCode='DE'\").to(\"cxf:bean:replica03\");", "from(\"jms:dispatcher?selector=\" + java.net.URLEncoder.encode(\"CountryCode='US'\",\"UTF-8\")). to(\"cxf:bean:replica01\");", "from(\"activemq:dispatcher?selector=CountryCode='US'\").to(\"cxf:bean:replica01\"); from(\"activemq:dispatcher?selector=CountryCode='IE'\").to(\"cxf:bean:replica02\"); from(\"activemq:dispatcher?selector=CountryCode='DE'\").to(\"cxf:bean:replica03\");", "from(\"jms:selective?selector=\" + java.net.URLEncoder.encode(\"CountryCode='US'\",\"UTF-8\")). to(\"cxf:bean:replica01\");", "from(\"acivemq:selective?selector=\" + java.net.URLEncoder.encode(\"CountryCode='US'\",\"UTF-8\")). to(\"cxf:bean:replica01\");", "from(\"seda:a\").filter(header(\"CountryCode\").isEqualTo(\"US\")).process(myProcessor);", "<camelContext id=\"buildCustomProcessorWithFilter\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:a\"/> <filter> <xpath>USDCountryCode = 'US'</xpath> <process ref=\"#myProcessor\"/> </filter> </route> </camelContext>", "from(\"jms:topic:news?clientId=conn01&durableSubscriptionName=John.Doe\"). to(\"cxf:bean:newsprocessor\");", "from(\"activemq:topic:news?clientId=conn01&durableSubscriptionName=John.Doe\"). to(\"cxf:bean:newsprocessor\");", "from(\"jms:topic:news?clientId=conn01&durableSubscriptionName=John.Doe\"). 
to(\"seda:fanout\"); from(\"seda:fanout\").to(\"cxf:bean:newsproc01\"); from(\"seda:fanout\").to(\"cxf:bean:newsproc02\"); from(\"seda:fanout\").to(\"cxf:bean:newsproc03\");", "from(\"direct:start\").to(\"activemq:topic:foo\"); from(\"activemq:topic:foo?clientId=1&durableSubscriptionName=bar1\").to(\"mock:result1\"); from(\"activemq:topic:foo?clientId=2&durableSubscriptionName=bar2\").to(\"mock:result2\");", "<route> <from uri=\"direct:start\"/> <to uri=\"activemq:topic:foo\"/> </route> <route> <from uri=\"activemq:topic:foo?clientId=1&durableSubscriptionName=bar1\"/> <to uri=\"mock:result1\"/> </route> <route> <from uri=\"activemq:topic:foo?clientId=2&durableSubscriptionName=bar2\"/> <to uri=\"mock:result2\"/> </route>", "from(\"direct:start\").to(\"activemq:topic:VirtualTopic.foo\"); from(\"activemq:queue:Consumer.1.VirtualTopic.foo\").to(\"mock:result1\"); from(\"activemq:queue:Consumer.2.VirtualTopic.foo\").to(\"mock:result2\");", "<route> <from uri=\"direct:start\"/> <to uri=\"activemq:topic:VirtualTopic.foo\"/> </route> <route> <from uri=\"activemq:queue:Consumer.1.VirtualTopic.foo\"/> <to uri=\"mock:result1\"/> </route> <route> <from uri=\"activemq:queue:Consumer.2.VirtualTopic.foo\"/> <to uri=\"mock:result2\"/> </route>", "import static org.apache.camel.processor.idempotent.MemoryMessageIdRepository.memoryMessageIdRepository; RouteBuilder builder = new RouteBuilder() { public void configure() { from(\"seda:a\") .idempotentConsumer( header(\"TransactionID\"), memoryMessageIdRepository(200) ).to(\"seda:b\"); } };", "<camelContext id=\"buildIdempotentConsumer\" xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"seda:a\"/> <idempotentConsumer messageIdRepositoryRef=\"MsgIDRepos\"> <simple>header.TransactionID</simple> <to uri=\"seda:b\"/> </idempotentConsumer> </route> </camelContext> <bean id=\"MsgIDRepos\" class=\"org.apache.camel.processor.idempotent.MemoryMessageIdRepository\"> <!-- Specify the in-memory cache size. 
--> <constructor-arg type=\"int\" value=\"200\"/> </bean>", "import org.springframework.orm.jpa.JpaTemplate; import org.apache.camel.spring.SpringRouteBuilder; import static org.apache.camel.processor.idempotent.jpa.JpaMessageIdRepository.jpaMessageIdRepository; RouteBuilder builder = new SpringRouteBuilder() { public void configure() { from(\"seda:a\").idempotentConsumer( header(\"TransactionID\"), jpaMessageIdRepository(bean(JpaTemplate.class), \"myProcessorName\") ).to(\"seda:b\"); } };", "<!-- repository for the idempotent consumer --> <bean id=\"myRepo\" class=\"org.apache.camel.processor.idempotent.MemoryIdempotentRepository\"/> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <idempotentConsumer messageIdRepositoryRef=\"myRepo\"> <!-- use the messageId header as key for identifying duplicate messages --> <header>messageId</header> <!-- if not a duplicate send it to this mock endpoint --> <to uri=\"mock:result\"/> </idempotentConsumer> </route> </camelContext>", "<bean id=\"dataSource\" class=\"org.springframework.jdbc.datasource.SingleConnectionDataSource\"> <property name=\"driverClassName\" value=\"org.hsqldb.jdbcDriver\"/> <property name=\"url\" value=\"jdbc:hsqldb:mem:camel_jdbc\"/> <property name=\"username\" value=\"sa\"/> <property name=\"password\" value=\"\"/> </bean>", "<bean id=\"messageIdRepository\" class=\"org.apache.camel.processor.idempotent.jdbc.JdbcMessageIdRepository\"> <constructor-arg ref=\"dataSource\" /> <constructor-arg value=\"myProcessorName\" /> </bean> <camel:camelContext> <camel:errorHandler id=\"deadLetterChannel\" type=\"DeadLetterChannel\" deadLetterUri=\"mock:error\"> <camel:redeliveryPolicy maximumRedeliveries=\"0\" maximumRedeliveryDelay=\"0\" logStackTrace=\"false\" /> </camel:errorHandler> <camel:route id=\"JdbcMessageIdRepositoryTest\" errorHandlerRef=\"deadLetterChannel\"> <camel:from uri=\"direct:start\" /> <camel:idempotentConsumer messageIdRepositoryRef=\"messageIdRepository\"> <camel:header>messageId</camel:header> <camel:to uri=\"mock:result\" /> </camel:idempotentConsumer> </camel:route> </camel:camelContext>", "from(\"direct:start\") // instruct idempotent consumer to not skip duplicates as we will filter then our self .idempotentConsumer(header(\"messageId\")).messageIdRepository(repo).skipDuplicate(false) .filter(property(Exchange.DUPLICATE_MESSAGE).isEqualTo(true)) // filter out duplicate messages by sending them to someplace else and then stop .to(\"mock:duplicate\") .stop() .end() // and here we process only new messages (no duplicates) .to(\"mock:result\");", "<!-- idempotent repository, just use a memory based for testing --> <bean id=\"myRepo\" class=\"org.apache.camel.processor.idempotent.MemoryIdempotentRepository\"/> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <!-- we do not want to skip any duplicate messages --> <idempotentConsumer messageIdRepositoryRef=\"myRepo\" skipDuplicate=\"false\"> <!-- use the messageId header as key for identifying duplicate messages --> <header>messageId</header> <!-- we will to handle duplicate messages using a filter --> <filter> <!-- the filter will only react on duplicate messages, if this property is set on the Exchange --> <property>CamelDuplicateMessage</property> <!-- and send the message to this mock, due its part of an unit test --> <!-- but you can of course do anything as its part of the route --> <to uri=\"mock:duplicate\"/> <!-- and then stop --> <stop/> </filter> <!-- here we 
route only new messages --> <to uri=\"mock:result\"/> </idempotentConsumer> </route> </camelContext>", "HazelcastIdempotentRepository idempotentRepo = new HazelcastIdempotentRepository(\"myrepo\"); from(\"direct:in\").idempotentConsumer(header(\"messageId\"), idempotentRepo).to(\"mock:out\");", "from(\"activemq:BalanceQueries\") .setProperty(\"userid\", xpath(\"/Account/BalanceQuery/UserID\").stringResult()) .beanRef(\"bankBean\", \"getUserAccBalance\") .to(\"velocity:file:src/scripts/acc_balance.vm\") .to(\"activemq:BalanceResults\");", "<?xml version='1.0' encoding='UTF-8'?> <Account> <BalanceQuery> <UserID>James.Strachan</UserID> </BalanceQuery> </Account>", "package tutorial; import org.apache.camel.language.XPath; public class BankBean { public int getUserAccBalance( @XPath(\"/Account/BalanceQuery/UserID\") String user) { if (user.equals(\"James.Strachan\")) { return 1200; } else { return 0; } } }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <beans ... > <bean id=\"bankBean\" class=\"tutorial.BankBean\"/> </beans>", "<?xml version='1.0' encoding='UTF-8'?> <Account> <BalanceResult> <UserID>${exchange.getProperty(\"userid\")}</UserID> <Balance>${body}</Balance> </BalanceResult> </Account>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/msgend
Chapter 10. Provisioning [metal3.io/v1alpha1]
Chapter 10. Provisioning [metal3.io/v1alpha1] Description Provisioning contains configuration used by the Provisioning service (Ironic) to provision baremetal hosts. Provisioning is created by the OpenShift installer using admin or user provided information about the provisioning network and the NIC on the server that can be used to PXE boot it. This CR is a singleton, created by the installer and currently only consumed by the cluster-baremetal-operator to bring up and update containers in a metal3 cluster. Type object 10.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ProvisioningSpec defines the desired state of Provisioning status object ProvisioningStatus defines the observed state of Provisioning 10.1.1. .spec Description ProvisioningSpec defines the desired state of Provisioning Type object Property Type Description bootIsoSource string BootIsoSource provides a way to set the location where the iso image to boot the nodes will be served from. By default the boot iso image is cached locally and served from the Provisioning service (Ironic) nodes using an auxiliary httpd server. If the boot iso image is already served by an httpd server, setting this option to http allows the image to be provided directly from there; in this case, the network (either internal or external) on which the httpd server that hosts the boot iso resides needs to be accessible to the metal3 pod. disableVirtualMediaTLS boolean DisableVirtualMediaTLS turns off TLS on the virtual media server, which may be required for hardware that cannot accept HTTPS links. preProvisioningOSDownloadURLs object PreprovisioningOSDownloadURLs is set of CoreOS Live URLs that would be necessary to provision a worker either using virtual media or PXE. provisioningDHCPExternal boolean ProvisioningDHCPExternal indicates whether the DHCP server for IP addresses in the provisioning DHCP range is present within the metal3 cluster or external to it. This field is being deprecated in favor of provisioningNetwork. provisioningDHCPRange string ProvisioningDHCPRange needs to be interpreted along with ProvisioningDHCPExternal. If the value of provisioningDHCPExternal is set to False, then ProvisioningDHCPRange represents the range of IP addresses that the DHCP server running within the metal3 cluster can use while provisioning baremetal servers. If the value of ProvisioningDHCPExternal is set to True, then the value of ProvisioningDHCPRange will be ignored. When the value of ProvisioningDHCPExternal is set to False, indicating an internal DHCP server and the value of ProvisioningDHCPRange is not set, then the DHCP range is taken to be the default range which goes from .10 to .100 of the ProvisioningNetworkCIDR. 
This is the only value in all of the Provisioning configuration that can be changed after the installer has created the CR. This value needs to be two comma-separated IP addresses within the ProvisioningNetworkCIDR where the first address represents the start of the range and the second address represents the last usable address in the range. provisioningDNS boolean ProvisioningDNS allows sending the DNS information via DHCP on the provisioning network. It is off by default since the Provisioning service itself (Ironic) does not require DNS, but it may be useful for layered products (e.g. ZTP). provisioningIP string ProvisioningIP is the IP address assigned to the provisioningInterface of the baremetal server. This IP address should be within the provisioning subnet, and outside of the DHCP range. provisioningInterface string ProvisioningInterface is the name of the network interface on a baremetal server to the provisioning network. It can have values like eth1 or ens3. provisioningMacAddresses array (string) ProvisioningMacAddresses is a list of mac addresses of network interfaces on a baremetal server to the provisioning network. Use this instead of ProvisioningInterface to allow interfaces of different names. If not provided it will be populated by the BMH.Spec.BootMacAddress of each master. provisioningNetwork string ProvisioningNetwork provides a way to indicate the state of the underlying network configuration for the provisioning network. This field can have one of the following values - Managed - when the provisioning network is completely managed by the Baremetal IPI solution. Unmanaged - when the provisioning network is present and used but the user is responsible for managing DHCP. Virtual media provisioning is recommended but PXE is still available if required. Disabled - when the provisioning network is fully disabled. User can bring up the baremetal cluster using virtual media or assisted installation. If using metal3 for power management, BMCs must be accessible from the machine networks. User should provide two IPs on the external network that would be used for provisioning services. provisioningNetworkCIDR string ProvisioningNetworkCIDR is the network on which the baremetal nodes are provisioned. The provisioningIP and the IPs in the dhcpRange all come from within this network. When using IPv6 and in a network managed by the Baremetal IPI solution this cannot be a network larger than a /64. provisioningOSDownloadURL string ProvisioningOSDownloadURL is the location from which the OS Image used to boot baremetal host machines can be downloaded by the metal3 cluster. virtualMediaViaExternalNetwork boolean VirtualMediaViaExternalNetwork flag when set to "true" allows for workers to boot via Virtual Media and contact metal3 over the External Network. When the flag is set to "false" (which is the default), virtual media deployments can still happen based on the configuration specified in the ProvisioningNetwork i.e. when in Disabled mode, over the External Network and over Provisioning Network when in Managed mode. PXE deployments will always use the Provisioning Network and will not be affected by this flag. watchAllNamespaces boolean WatchAllNamespaces provides a way to explicitly allow use of this Provisioning configuration across all Namespaces. It is an optional configuration which defaults to false and in that state will be used to provision baremetal hosts in only the openshift-machine-api namespace. 
When set to true, this provisioning configuration would be used for baremetal hosts across all namespaces. 10.1.2. .spec.preProvisioningOSDownloadURLs Description PreprovisioningOSDownloadURLs is set of CoreOS Live URLs that would be necessary to provision a worker either using virtual media or PXE. Type object Property Type Description initramfsURL string InitramfsURL Image URL to be used for PXE deployments isoURL string IsoURL Image URL to be used for Live ISO deployments kernelURL string KernelURL is an Image URL to be used for PXE deployments rootfsURL string RootfsURL Image URL to be used for PXE deployments 10.1.3. .status Description ProvisioningStatus defines the observed state of Provisioning Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 10.1.4. .status.conditions Description conditions is a list of conditions and their status Type array 10.1.5. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 10.1.6. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 10.1.7. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 10.2. API endpoints The following API endpoints are available: /apis/metal3.io/v1alpha1/provisionings DELETE : delete collection of Provisioning GET : list objects of kind Provisioning POST : create a Provisioning /apis/metal3.io/v1alpha1/provisionings/{name} DELETE : delete a Provisioning GET : read the specified Provisioning PATCH : partially update the specified Provisioning PUT : replace the specified Provisioning /apis/metal3.io/v1alpha1/provisionings/{name}/status GET : read status of the specified Provisioning PATCH : partially update status of the specified Provisioning PUT : replace status of the specified Provisioning 10.2.1. /apis/metal3.io/v1alpha1/provisionings Table 10.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Provisioning Table 10.2. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind Provisioning Table 10.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. 
The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 10.5. HTTP responses HTTP code Response body 200 - OK ProvisioningList schema 401 - Unauthorized Empty HTTP method POST Description create a Provisioning Table 10.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.7. Body parameters Parameter Type Description body Provisioning schema Table 10.8. HTTP responses HTTP code Response body 200 - OK Provisioning schema 201 - Created Provisioning schema 202 - Accepted Provisioning schema 401 - Unauthorized Empty 10.2.2. /apis/metal3.io/v1alpha1/provisionings/{name} Table 10.9. Global path parameters Parameter Type Description name string name of the Provisioning Table 10.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Provisioning Table 10.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 10.12. Body parameters Parameter Type Description body DeleteOptions schema Table 10.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Provisioning Table 10.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.15. HTTP responses HTTP code Response body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Provisioning Table 10.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.17. Body parameters Parameter Type Description body Patch schema Table 10.18. HTTP responses HTTP code Response body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Provisioning Table 10.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.20. Body parameters Parameter Type Description body Provisioning schema Table 10.21. HTTP responses HTTP code Response body 200 - OK Provisioning schema 201 - Created Provisioning schema 401 - Unauthorized Empty 10.2.3. /apis/metal3.io/v1alpha1/provisionings/{name}/status Table 10.22. Global path parameters Parameter Type Description name string name of the Provisioning Table 10.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Provisioning Table 10.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 10.25. HTTP responses HTTP code Response body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Provisioning Table 10.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 10.27. Body parameters Parameter Type Description body Patch schema Table 10.28. HTTP responses HTTP code Response body 200 - OK Provisioning schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Provisioning Table 10.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 10.30. Body parameters Parameter Type Description body Provisioning schema Table 10.31. HTTP responses HTTP code Response body 200 - OK Provisioning schema 201 - Created Provisioning schema 401 - Unauthorized Empty
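For reference, a complete Provisioning custom resource assembled from the spec fields above might look like the following. This is an illustrative sketch only: the singleton name provisioning-configuration matches the object normally created by the installer, and the interface, addresses, CIDR, and DHCP range are placeholder values that must be replaced for your environment.
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Managed
  provisioningInterface: ens3
  provisioningIP: 172.22.0.3
  provisioningNetworkCIDR: 172.22.0.0/24
  provisioningDHCPRange: 172.22.0.10,172.22.0.100
  watchAllNamespaces: false
You can inspect the installer-created object with oc get provisioning -o yaml before editing it; as noted above, provisioningDHCPRange is the only value intended to change after the CR has been created.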
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/provisioning_apis/provisioning-metal3-io-v1alpha1
Chapter 3. Deployment of the Ceph File System
Chapter 3. Deployment of the Ceph File System As a storage administrator, you can deploy Ceph File Systems (CephFS) in a storage environment and have clients mount those Ceph File Systems to meet the storage needs. Basically, the deployment workflow is three steps: Create Ceph File Systems on a Ceph Monitor node. Create a Ceph client user with the appropriate capabilities, and make the client key available on the node where the Ceph File System will be mounted. Mount CephFS on a dedicated node, using either a kernel client or a File System in User Space (FUSE) client. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon ( ceph-mds ). 3.1. Layout, quota, snapshot, and network restrictions These user capabilities can help you restrict access to a Ceph File System (CephFS) based on the needed requirements. Important All user capability flags, except rw , must be specified in alphabetical order. Layouts and Quotas When using layouts or quotas, clients require the p flag, in addition to rw capabilities. Setting the p flag restricts all the attributes being set by special extended attributes, those with a ceph. prefix. Also, this restricts other means of setting these fields, such as openc operations with layouts. Example In this example, client.0 can modify layouts and quotas on the file system cephfs_a , but client.1 cannot. Snapshots When creating or deleting snapshots, clients require the s flag, in addition to rw capabilities. When the capability string also contains the p flag, the s flag must appear after it. Example In this example, client.0 can create or delete snapshots in the temp directory of file system cephfs_a . Network Restricting clients connecting from a particular network. Example The optional network and prefix length is in CIDR notation, for example, 10.3.0.0/16 . Additional Resources See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for details on setting the Ceph user capabilities. 3.2. Creating Ceph File Systems You can create multiple Ceph File Systems (CephFS) on a Ceph Monitor node. Prerequisites A running, and healthy Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon ( ceph-mds ). Root-level access to a Ceph Monitor node. Root-level access to a Ceph client node. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage Tools repository: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install the ceph-fuse package: Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace MONITOR_NODE_NAME with the Ceph Monitor host name or IP address. Example Set the appropriate permissions for the configuration file: Create a Ceph File System: Syntax Example Repeat this step to create additional file systems. Note By running this command, Ceph automatically creates the new pools, and deploys a new Ceph Metadata Server (MDS) daemon to support the new file system. This also configures the MDS affinity accordingly. Verify access to the new Ceph File System from a Ceph client. Authorize a Ceph client to access the new file system: Syntax Example Note Optionally, you can add a safety measure by specifying the root_squash option. 
This prevents accidental deletion scenarios by disallowing clients with a uid=0 or gid=0 to do write operations, but still allows read operations. Example In this example, root_squash is enabled for the file system cephfs01 , except within the /volumes directory tree. Important The Ceph client can only see the CephFS it is authorized for. Copy the Ceph user's keyring to the Ceph client node: Syntax Example On the Ceph client node, create a new directory: Syntax Example On the Ceph client node, mount the new Ceph File System: Syntax Example On the Ceph client node, list the directory contents of the new mount point, or create a file on the new mount point. Additional Resources See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File System Guide for more details. See the Mounting the Ceph File System as a kernel client section in the Red Hat Ceph Storage File System Guide for more details. See the Mounting the Ceph File System as a FUSE client section in the Red Hat Ceph Storage File System Guide for more details. See Ceph File System limitations and the POSIX standards section in the Red Hat Ceph Storage File System Guide for more details. See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more details. 3.3. Adding an erasure-coded pool to a Ceph File System By default, Ceph uses replicated pools for data pools. You can also add an additional erasure-coded data pool to the Ceph File System, if needed. Ceph File Systems (CephFS) backed by erasure-coded pools use less overall storage compared to Ceph File Systems backed by replicated pools. While erasure-coded pools use less overall storage, they also use more memory and processor resources than replicated pools. Important CephFS EC pools are for archival purpose only. Important For production environments, Red Hat recommends using the default replicated data pool for CephFS. The creation of inodes in CephFS creates at least one object in the default data pool. It is better to use a replicated pool for the default data to improve small-object write performance, and to improve read performance for updating backtraces. Prerequisites A running Red Hat Ceph Storage cluster. An existing Ceph File System. Pools using BlueStore OSDs. Root-level access to a Ceph Monitor node. Installation of the attr package. Procedure Create an erasure-coded data pool for CephFS: Syntax Example Verify the pool was added: Example Enable overwrites on the erasure-coded pool: Syntax Example Verify the status of the Ceph File System: Syntax Example Add the erasure-coded data pool to the existing CephFS: Syntax Example This example adds the new data pool, cephfs-data-ec01 , to the existing erasure-coded file system, cephfs-ec . Verify that the erasure-coded pool was added to the Ceph File System: Syntax Example Set the file layout on a new directory: Syntax Example In this example, all new files created in the /mnt/cephfs/newdir directory inherit the directory layout and places the data in the newly added erasure-coded pool. Additional Resources See The Ceph File System Metadata Server chapter in the Red Hat Ceph Storage File System Guide for more information about CephFS MDS. See the Creating Ceph File Systems section in the Red Hat Ceph Storage File System Guide for more information. See the Erasure Code Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more information. 
See the Erasure Coding with Overwrites section in the Red Hat Ceph Storage Storage Strategies Guide for more information. 3.4. Creating client users for a Ceph File System Red Hat Ceph Storage uses cephx for authentication, which is enabled by default. To use cephx with the Ceph File System, create a user with the correct authorization capabilities on a Ceph Monitor node and make its key available on the node where the Ceph File System will be mounted. Prerequisites A running Red Hat Ceph Storage cluster. Installation and configuration of the Ceph Metadata Server daemon (ceph-mds). Root-level access to a Ceph Monitor node. Root-level access to a Ceph client node. Procedure Log into the Cephadm shell on the monitor node: Example On a Ceph Monitor node, create a client user: Syntax To restrict the client to only writing in the temp directory of filesystem cephfs_a : Example To completely restrict the client to the temp directory, remove the root ( / ) directory: Example Note Supplying all or asterisk as the file system name grants access to every file system. Typically, it is necessary to quote the asterisk to protect it from the shell. Verify the created key: Syntax Example Copy the keyring to the client. On the Ceph Monitor node, export the keyring to a file: Syntax Example Copy the client keyring from the Ceph Monitor node to the /etc/ceph/ directory on the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client node name or IP. Example From the client node, set the appropriate permissions for the keyring file: Syntax Example Additional Resources See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details. 3.5. Mounting the Ceph File System as a kernel client You can mount the Ceph File System (CephFS) as a kernel client, either manually or automatically on system boot. Important Clients running on other Linux distributions, aside from Red Hat Enterprise Linux, are permitted but not supported. If issues are found in the CephFS Metadata Server or other parts of the storage cluster when using these clients, Red Hat will address them. If the cause is found to be on the client side, then the issue will have to be addressed by the kernel vendor of the Linux distribution. Prerequisites Root-level access to a Linux-based client node. Root-level access to a Ceph Monitor node. An existing Ceph File System. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage 6 Tools repository: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install the ceph-common package: Log into the Cephadm shell on the monitor node: Example Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. Example From the client node, set the appropriate permissions for the configuration file: Choose either automatically or manually mounting. Manually Mounting Create a mount directory on the client node: Syntax Example Mount the Ceph File System. To specify multiple Ceph Monitor addresses, separate them with commas in the mount command, specify the mount point, and set the client name: Note As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. 
Just specify the client ID with name= CLIENT_ID , and mount.ceph will find the right keyring file. Syntax Example Note You can configure a DNS server so that a single host name resolves to multiple IP addresses. Then you can use that single host name with the mount command, instead of supplying a comma-separated list. Note You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to. Note You can set the nowsync option to asynchronously execute file creation and removal on the Red Hat Ceph Storage clusters. This improves the performance of some workloads by avoiding round-trip latency for these system calls without impacting consistency. The nowsync option requires kernel clients with Red Hat Enterprise Linux 9.0 or later. Example Verify that the file system is successfully mounted: Syntax Example Automatically Mounting On the client host, create a new directory for mounting the Ceph File System. Syntax Example Edit the /etc/fstab file as follows: Syntax The first column sets the Ceph Monitor host names and the port number. The second column sets the mount point The third column sets the file system type, in this case, ceph , for CephFS. The fourth column sets the various options, such as, the user name and the secret file using the name and secretfile options. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option. Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts to prevent hanging and networking issues. If you do not need access time information, then setting the noatime option can increase performance. Set the fifth and sixth columns to zero. Example The Ceph File System will be mounted on the system boot. Note As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As such, a secret file is no longer necessary. Just specify the client ID with name= CLIENT_ID , and mount.ceph will find the right keyring file. Note You can also replace the Monitor host names with the string :/ and mount.ceph will read the Ceph configuration file to determine which Monitors to connect to. Additional Resources See the mount(8) manual page. See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user. See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for details. 3.6. Mounting the Ceph File System as a FUSE client You can mount the Ceph File System (CephFS) as a File System in User Space (FUSE) client, either manually or automatically on system boot. Prerequisites Root-level access to a Linux-based client node. Root-level access to a Ceph Monitor node. An existing Ceph File System. Procedure Configure the client node to use the Ceph storage cluster. Enable the Red Hat Ceph Storage 6 Tools repository: Red Hat Enterprise Linux 8 Red Hat Enterprise Linux 9 Install the ceph-fuse package: Log into the Cephadm shell on the monitor node: Example Copy the Ceph client keyring from the Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. Example Copy the Ceph configuration file from a Ceph Monitor node to the client node: Syntax Replace CLIENT_NODE_NAME with the Ceph client host name or IP address. 
Example From the client node, set the appropriate permissions for the configuration file: Choose either automatically or manually mounting. Manually Mounting On the client node, create a directory for the mount point: Syntax Example Note If you used the path option with MDS capabilities, then the mount point must be within what is specified by the path . Use the ceph-fuse utility to mount the Ceph File System. Syntax Example Note If you do not use the default name and location of the user keyring, that is /etc/ceph/ceph.client. CLIENT_ID .keyring , then use the --keyring option to specify the path to the user keyring, for example: Example Note Use the -r option to instruct the client to treat that path as its root: Syntax Example Note If you want to automatically reconnect an evicted Ceph client, then add the --client_reconnect_stale=true option. Example Verify that the file system is successfully mounted: Syntax Example Automatically Mounting On the client node, create a directory for the mount point: Syntax Example Note If you used the path option with MDS capabilities, then the mount point must be within what is specified by the path . Edit the /etc/fstab file as follows: Syntax The first column sets the Ceph Monitor host names and the port number. The second column sets the mount point The third column sets the file system type, in this case, fuse.ceph , for CephFS. The fourth column sets the various options, such as the user name and the keyring using the ceph.name and ceph.keyring options. You can also set specific volumes, sub-volume groups, and sub-volumes using the ceph.client_mountpoint option. To specify which Ceph File System to access, use the ceph.client_fs option. Set the _netdev option to ensure that the file system is mounted after the networking subsystem starts to prevent hanging and networking issues. If you do not need access time information, then setting the noatime option can increase performance. If you want to automatically reconnect after an eviction, then set the client_reconnect_stale=true option. Set the fifth and sixth columns to zero. Example The Ceph File System will be mounted on the system boot. Additional Resources The ceph-fuse(8) manual page. See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for more details on creating a Ceph user. See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for details. Additional Resources See Section 2.5, "Management of MDS service using the Ceph Orchestrator" to install Ceph Metadata servers. See Section 3.2, "Creating Ceph File Systems" for details. See Section 3.4, "Creating client users for a Ceph File System" for details. See Section 3.5, "Mounting the Ceph File System as a kernel client" for details. See Section 3.6, "Mounting the Ceph File System as a FUSE client" for details. See Chapter 2, The Ceph File System Metadata Server for details on configuring the CephFS Metadata Server daemon.
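As a compact recap of the workflow in this chapter, the following sequence strings together commands already shown in the examples above; the file system name, client ID, monitor host names, and mount point are illustrative placeholders:
ceph fs volume create cephfs01
ceph fs authorize cephfs01 client.1 / rw
ceph auth get client.1 -o ceph.client.1.keyring
scp ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring
mkdir -p /mnt/cephfs
mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01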
[ "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rwp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a client.1 key: AQAz7EVWygILFRAAdIcuJ11opU/JKyfFmxhuaw== caps: [mds] allow rw caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8 caps: [mon] allow r network 10.0.0.0/8 caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ KEYRING_FILE /etc/ceph/", "scp [email protected]:/etc/ceph/ceph.client.1.keyring /etc/ceph/", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "scp [email protected]:/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "ceph fs volume create FILE_SYSTEM_NAME", "ceph fs volume create cephfs01", "ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME DIRECTORY PERMISSIONS", "ceph fs authorize cephfs01 client.1 / rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 exported keyring for client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph fs authorize cephfs01 client.1 / rw root_squash /volumes rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01 root_squash, allow rw fsname=cephfs01 path=/volumes\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph auth get CLIENT_NAME > OUTPUT_FILE_NAME scp OUTPUT_FILE_NAME TARGET_NODE_NAME :/etc/ceph", "ceph auth get client.1 > ceph.client.1.keyring exported keyring for client.1 scp ceph.client.1.keyring client:/etc/ceph root@client's password: ceph.client.1.keyring 100% 178 333.0KB/s 00:00", "mkdir PATH_TO_NEW_DIRECTORY_NAME", "mkdir /mnt/mycephfs", "ceph-fuse PATH_TO_NEW_DIRECTORY_NAME -n CEPH_USER_NAME --client-fs=_FILE_SYSTEM_NAME", "ceph-fuse /mnt/mycephfs/ -n client.1 --client-fs=cephfs01 ceph-fuse[555001]: starting ceph client 2022-05-09T07:33:27.158+0000 7f11feb81200 -1 init, newargv = 0x55fc4269d5d0 newargc=15 ceph-fuse[555001]: starting fuse", "ceph osd pool create DATA_POOL_NAME erasure", "ceph osd pool create cephfs-data-ec01 erasure pool 'cephfs-data-ec01' created", "ceph osd lspools", "ceph osd pool set DATA_POOL_NAME allow_ec_overwrites true", "ceph osd pool set cephfs-data-ec01 allow_ec_overwrites true set pool 15 allow_ec_overwrites to true", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "ceph fs add_data_pool FILE_SYSTEM_NAME DATA_POOL_NAME", "ceph fs add_data_pool cephfs-ec cephfs-data-ec01", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec 
cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T cephfs-data-ec01 data 0 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "mkdir PATH_TO_DIRECTORY setfattr -n ceph.dir.layout.pool -v DATA_POOL_NAME PATH_TO_DIRECTORY", "mkdir /mnt/cephfs/newdir setfattr -n ceph.dir.layout.pool -v cephfs-data-ec01 /mnt/cephfs/newdir", "cephadm shell", "ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ] PERMISSIONS", "ceph fs authorize cephfs_a client.1 / r /temp rw client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==", "ceph fs authorize cephfs_a client.1 /temp rw", "ceph auth get client. ID", "ceph auth get client.1 client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps mds = \"allow r, allow rw path=/temp\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"", "ceph auth get client. ID -o ceph.client. ID .keyring", "ceph auth get client.1 -o ceph.client.1.keyring exported keyring for client.1", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "chmod 644 ceph.client. ID .keyring", "chmod 644 /etc/ceph/ceph.client.1.keyring", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-common", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "mount -t ceph MONITOR-1_NAME :6789, MONITOR-2_NAME :6789, MONITOR-3_NAME :6789:/ MOUNT_POINT -o name= CLIENT_ID ,fs= FILE_SYSTEM_NAME", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o nowsync,name=1,fs=cephfs01", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "#DEVICE PATH TYPE OPTIONS MON_0_HOST : PORT , MOUNT_POINT ceph name= CLIENT_ID , MON_1_HOST : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , fs= FILE_SYSTEM_NAME , MON_2_HOST : PORT :/q[_VOL_]/ SUB_VOL / UID_SUB_VOL , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/cephfs ceph name=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ fs=cephfs01, _netdev,noatime", "subscription-manager repos --enable=6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "ceph-fuse -n client. 
CLIENT_ID --client_fs FILE_SYSTEM_NAME MOUNT_POINT", "ceph-fuse -n client.1 --client_fs cephfs01 /mnt/mycephfs", "ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs", "ceph-fuse -n client. CLIENT_ID MOUNT_POINT -r PATH", "ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs", "ceph-fuse -n client.1 /mnt/cephfs --client_reconnect_stale=true", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "#DEVICE PATH TYPE OPTIONS DUMP FSCK HOST_NAME : PORT , MOUNT_POINT fuse.ceph ceph.id= CLIENT_ID , 0 0 HOST_NAME : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , HOST_NAME : PORT :/ ceph.client_fs= FILE_SYSTEM_NAME ,ceph.name= USERNAME ,ceph.keyring=/etc/ceph/ KEYRING_FILE , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/mycephfs fuse.ceph ceph.id=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ ceph.client_fs=cephfs01,ceph.name=client.1,ceph.keyring=/etc/ceph/client1.keyring, _netdev,defaults" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/file_system_guide/deployment-of-the-ceph-file-system
Chapter 3. Generating API keys to automate the functional certification processes
Chapter 3. Generating API keys to automate the functional certification processes A valid Pyxis API key is a token used to access partner-specific certification data through the REST API. With this key, you can directly access your certification components available in the Red Hat Partner Certification tool and seamlessly upload certification results without the need for repeated authentication. Note API keys are tied to specific Organization IDs, allowing access only to the product listings associated with that particular organization. Prerequisites Have access to the Red Hat Partner Certification tool. A valid component ID for your certification components on the Red Hat Partner Certification tool. Procedure Follow this procedure to generate API keys: After logging in to the Red Hat Partner Certification tool, go to the Certification management tab. From the Resources listing, click API keys . The API Keys page opens. Click Add API key . The Add API Keys dialog opens. Enter a unique name for the key in the Name text box. Enter a brief description of the new key, including its purpose, in the Description text box. Click Confirm . A new token is generated. The Copy Token wizard opens, allowing you to copy the generated API key. Click the copy icon next to the generated API key and save the key in a secure location. Important Red Hat doesn't store your API keys. If you don't save the key now, you can't retrieve it after closing this wizard. Verification The new API key is listed on the API Keys page including the following details: Name Description User Name Created Time Actions - Click Revoke to delete the key.
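After you save the key, it is typically passed as a request header when calling the Pyxis REST API. The header name X-API-KEY and the base URL in the following sketch reflect common Pyxis usage and are assumptions rather than part of this procedure; replace the placeholder path with the endpoint for your certification component:
curl -s -H "X-API-KEY: <your-API-key>" "https://catalog.redhat.com/api/containers/v1/<endpoint-for-your-component>"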
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/generating-api-keys-to-automate-the-functional-certification-processes_openshift-sw-cert-workflow-onboarding-certification-partners
Chapter 30. High Availability Using Server Hinting
Chapter 30. High Availability Using Server Hinting In Red Hat JBoss Data Grid, Server Hinting ensures that backed-up copies of data are not stored on the same physical server, rack, or data center as the original. Server Hinting does not apply to total replication because total replication mandates complete replicas on every server, rack, and data center. Data distribution across nodes is controlled by the Consistent Hashing mechanism. JBoss Data Grid offers a pluggable policy to specify the consistent hashing algorithm. For details, see Section 30.4, "ConsistentHashFactories" . Setting a machineId , rackId , or siteId in the transport configuration will trigger the use of TopologyAwareConsistentHashFactory , which is the equivalent of the DefaultConsistentHashFactory with Server Hinting enabled. Server Hinting is particularly important when ensuring the high availability of your JBoss Data Grid implementation. 30.1. Establishing Server Hinting with JGroups When setting up a clustered environment in Red Hat JBoss Data Grid, Server Hinting is configured when establishing the JGroups configuration. JBoss Data Grid ships with several JGroups files pre-configured for clustered mode. These files can be used as a starting point when configuring Server Hinting in JBoss Data Grid. See Also: Section 26.2.2, "Pre-Configured JGroups Files"
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-High_Availability_Using_Server_Hinting
Appendix A. General configuration options
Appendix A. General configuration options These are the general configuration options for Ceph. Note Typically, these will be set automatically by deployment tools, such as cephadm . fsid Description The file system ID. One per cluster. Type UUID Required No. Default N/A. Usually generated by deployment tools. admin_socket Description The socket for executing administrative commands on a daemon, irrespective of whether Ceph monitors have established a quorum. Type String Required No Default /var/run/ceph/$cluster-$name.asok pid_file Description The file in which the monitor or OSD will write its PID. For instance, /var/run/$cluster/$type.$id.pid will create /var/run/ceph/mon.a.pid for the mon with id a running in the ceph cluster. The pid file is removed when the daemon stops gracefully. If the process is not daemonized (meaning it runs with the -f or -d option), the pid file is not created. Type String Required No Default No chdir Description The directory Ceph daemons change to once they are up and running. Default / directory recommended. Type String Required No Default / max_open_files Description If set, when the Red Hat Ceph Storage cluster starts, Ceph sets the max_open_fds at the OS level (that is, the max # of file descriptors). It helps prevent Ceph OSDs from running out of file descriptors. Type 64-bit Integer Required No Default 0 fatal_signal_handlers Description If set, we will install signal handlers for SEGV, ABRT, BUS, ILL, FPE, XCPU, XFSZ, SYS signals to generate a useful log message. Type Boolean Default true
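As an illustration only, a couple of these options could be pinned in a minimal ceph.conf snippet. This is a hedged sketch with placeholder values; on cephadm-managed clusters, configuration is normally handled centrally by the deployment tool rather than edited by hand.

# Append illustrative overrides to the local Ceph configuration file
cat >> /etc/ceph/ceph.conf <<'EOF'
[global]
# directory the daemons change to after start-up (default shown)
chdir = /
# raise the file-descriptor ceiling at cluster start; 0 leaves the OS limit untouched
max_open_files = 131072
EOF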
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/configuration_guide/general-configuration-options_conf
5.261. python-rhsm
5.261. python-rhsm 5.261.1. RHBA-2012:0805 - python-rhsm bug fix and enhancement update Updated python-rhsm packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The python-rhsm package contains a library for communicating with the representational state transfer (REST) interface of Red Hat's subscription and content service. This interface is used by the Subscription Management tools for management of system entitlements, certificates, and access to content. Bug Fixes BZ# 720372 Previously, Subscription Manager had the fakamai-cp1.pem certificate installed in the /etc/rhsm/ca/fakamai-cp1.pem directory after installation. However, the certificate serves only for testing purposes and is not needed by the tool itself. With this update, the certificate has been removed. BZ# 744654 If the subscription-manager command was issued with an incorrect or empty --server.port option, the command failed with a traceback. With this update, the tool now sets the provided port value as expected and no traceback is returned. BZ# 803773 If the activation key contained non-ASCII characters, the registration failed with the following error: This happened due to an incorrect conversion of the key into the URL address. With this update, subscription-manager converts the characters correctly and the registration succeeds in the scenario described. BZ# 807721 Several configuration settings had no default value defined in the Red Hat Subscription Manager (RHSM), which could cause some commands to return a traceback. The default RHSM values are now set as expected and the problem no longer occurs. BZ# 822965 When the user defined a proxy server in rhsm.conf, the Subscription Manager did not work and returned the "unknown URL type" error. This happened because the "Host" header was not sent up to the CDN when acquiring the list of releases using a proxy. With this update, the "Host" header is sent to the CDN and the proxy definition in rhsm.conf is processed as expected. Enhancement BZ# 785247 The bug fixes and some of the new features introduced in the python-rhsm package on Red Hat Enterprise Linux 5.8 have been backported into the python-rhsm on Red Hat Enterprise Linux 6.3. All users of python-rhsm are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
[ "Network error. Please check the connection details, or see /var/log/rhsm/rhsm.log for more information." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/python-rhsm
Chapter 1. Release schedule
Chapter 1. Release schedule The following table lists the dates of the Red Hat OpenStack Platform 17.1 GA, along with the dates of each subsequent asynchronous release for core components: Table 1.1. Red Hat OpenStack Platform 17.1 core component release schedule Release Date 17.1.0 (General Availability) 17 August, 2023
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/package_manifest/ch01
Chapter 15. URL Handlers
Chapter 15. URL Handlers There are many contexts in Red Hat Fuse where you need to provide a URL to specify the location of a resource (for example, as the argument to a console command). In general, when specifying a URL, you can use any of the schemes supported by Fuse's built-in URL handlers. This appendix describes the syntax for all of the available URL handlers. 15.1. File URL Handler 15.1.1. Syntax A file URL has the syntax, file: PathName , where PathName is the relative or absolute pathname of a file that is available on the Classpath. The provided PathName is parsed by Java's built-in file URL handler . Hence, the PathName syntax is subject to the usual conventions of a Java pathname: in particular, on Windows, each backslash must either be escaped by another backslash or replaced by a forward slash. 15.1.2. Examples For example, consider the pathname, C:\Projects\camel-bundle\target\foo-1.0-SNAPSHOT.jar , on Windows. The following example shows the correct alternatives for the file URL on Windows: The following example shows some incorrect alternatives for the file URL on Windows: 15.2. HTTP URL Handler 15.2.1. Syntax A HTTP URL has the standard syntax, http: Host [: Port ]/[ Path ][# AnchorName ][? Query ] . You can also specify a secure HTTP URL using the https scheme. The provided HTTP URL is parsed by Java's built-in HTTP URL handler, so the HTTP URL behaves in the normal way for a Java application. 15.3. Mvn URL Handler 15.3.1. Overview If you use Maven to build your bundles or if you know that a particular bundle is available from a Maven repository, you can use the Mvn handler scheme to locate the bundle. Note To ensure that the Mvn URL handler can find local and remote Maven artifacts, you might find it necessary to customize the Mvn URL handler configuration. For details, see Section 15.3.5, "Configuring the Mvn URL handler" . 15.3.2. Syntax An Mvn URL has the following syntax: Where repositoryUrl optionally specifies the URL of a Maven repository. The groupId , artifactId , version , packaging , and classifier are the standard Maven coordinates for locating Maven artifacts. 15.3.3. Omitting coordinates When specifying an Mvn URL, only the groupId and the artifactId coordinates are required. The following examples reference a Maven bundle with the groupId , org.fusesource.example , and with the artifactId , bundle-demo : When the version is omitted, as in the first example, it defaults to LATEST , which resolves to the latest version based on the available Maven metadata. In order to specify a classifier value without specifying a packaging or a version value, it is permissible to leave gaps in the Mvn URL. Likewise, if you want to specify a packaging value without a version value. For example: 15.3.4. Specifying a version range When specifying the version value in an Mvn URL, you can specify a version range (using standard Maven version range syntax) in place of a simple version number. You use square brackets- [ and ] -to denote inclusive ranges and parentheses- ( and ) -to denote exclusive ranges. For example, the range, [1.0.4,2.0) , matches any version, v , that satisfies 1.0.4 ⇐ v < 2.0 . You can use this version range in an Mvn URL as follows: 15.3.5. Configuring the Mvn URL handler Before using Mvn URLs for the first time, you might need to customize the Mvn URL handler settings, as follows: Section 15.3.6, "Check the Mvn URL settings" . Section 15.3.7, "Edit the configuration file" . Section 15.3.8, "Customize the location of the local repository" . 15.3.6. 
Check the Mvn URL settings The Mvn URL handler resolves a reference to a local Maven repository and maintains a list of remote Maven repositories. When resolving an Mvn URL, the handler searches the local repository first and then the remote repositories in order to locate the specified Maven artifact. If there is a problem with resolving an Mvn URL, the first thing you should do is to check the handler settings to see which local repository and remote repositories it is using to resolve URLs. To check the Mvn URL settings, enter the following commands at the console: The config:edit command switches the focus of the config utility to the properties belonging to the org.ops4j.pax.url.mvn persistent ID. The config:proplist command outputs all of the property settings for the current persistent ID. With the focus on org.ops4j.pax.url.mvn , you should see a listing similar to the following: Where the localRepository setting shows the local repository location currently used by the handler and the repositories setting shows the remote repository list currently used by the handler. 15.3.7. Edit the configuration file To customize the property settings for the Mvn URL handler, edit the following configuration file: The settings in this file enable you to specify explicitly the location of the local Maven repository, remote Maven repositories, Maven proxy server settings, and more. Please see the comments in the configuration file for more details about these settings. 15.3.8. Customize the location of the local repository In particular, if your local Maven repository is in a non-default location, you might find it necessary to configure it explicitly in order to access Maven artifacts that you build locally. In your org.ops4j.pax.url.mvn.cfg configuration file, uncomment the org.ops4j.pax.url.mvn.localRepository property and set it to the location of your local Maven repository. For example: 15.3.9. Reference For more details about the mvn URL syntax, see the original Pax URL Mvn Protocol documentation. 15.4. Wrap URL Handler 15.4.1. Overview If you need to reference a JAR file that is not already packaged as a bundle, you can use the Wrap URL handler to convert it dynamically. The implementation of the Wrap URL handler is based on Peter Kriens' open source Bnd utility. 15.4.2. Syntax A Wrap URL has the following syntax: The locationURL can be any URL that locates a JAR (where the referenced JAR is not formatted as a bundle). The optional instructionsURL references a Bnd properties file that specifies how the bundle conversion is performed. The optional instructions is an ampersand, & , delimited list of Bnd properties that specify how the bundle conversion is performed. 15.4.3. Default instructions In most cases, the default Bnd instructions are adequate for wrapping an API JAR file. By default, Wrap adds manifest headers to the JAR's META-INF/Manifest.mf file as shown in Table 15.1, "Default Instructions for Wrapping a JAR" . 15.4.4.
Examples The following Wrap URL locates version 1.1 of the commons-logging JAR in a Maven repository and converts it to an OSGi bundle using the default Bnd properties: The following Wrap URL uses the Bnd properties from the file, E:\Data\Examples\commons-logging-1.1.bnd : The following Wrap URL specifies the Bundle-SymbolicName property and the Bundle-Version property explicitly: If the preceding URL is used as a command-line argument, it might be necessary to escape the dollar sign, \USD , to prevent it from being processed by the command line, as follows: 15.4.5. Reference For more details about the wrap URL handler, see the following references: The Bnd tool documentation , for more details about Bnd properties and Bnd instruction files. The original Pax URL Wrap Protocol documentation. 15.5. War URL Handler 15.5.1. Overview If you need to deploy a WAR file in an OSGi container, you can automatically add the requisite manifest headers to the WAR file by prefixing the WAR URL with war: , as described here. 15.5.2. Syntax A War URL is specified using either of the following syntaxes: The first syntax, using the war scheme, specifies a WAR file that is converted into a bundle using the default instructions. The warURL can be any URL that locates a WAR file. The second syntax, using the warref scheme, specifies a Bnd properties file, instructionsURL , that contains the conversion instructions (including some instructions that are specific to this handler). In this syntax, the location of the referenced WAR file does not appear explicitly in the URL. The WAR file is specified instead by the (mandatory) WAR-URL property in the properties file. 15.5.3. WAR-specific properties/instructions Some of the properties in the .bnd instructions file are specific to the War URL handler, as follows: WAR-URL (Mandatory) Specifies the location of the War file that is to be converted into a bundle. Web-ContextPath Specifies the piece of the URL path that is used to access this Web application, after it has been deployed inside the Web container. Note Earlier versions of PAX Web used the property, Webapp-Context , which is now deprecated . 15.5.4. Default instructions By default, the War URL handler adds manifest headers to the WAR's META-INF/Manifest.mf file as shown in Table 15.2, "Default Instructions for Wrapping a WAR File" . Table 15.2. Default Instructions for Wrapping a WAR File Manifest Header Default Value Import-Package javax. ,org.xml. ,org.w3c.* Export-Package No packages are exported. Bundle-SymbolicName The name of the WAR file, where any characters not in the set [a-zA-Z0-9_-\.] are replaced by period, . . Web-ContextPath No default value. But the WAR extender will use the value of Bundle-SymbolicName by default. Bundle-ClassPath In addition to any class path entries specified explicitly, the following entries are added automatically: . WEB-INF/classes All of the JARs from the WEB-INF/lib directory. 15.5.5. Examples The following War URL locates version 1.4.7 of the wicket-examples WAR in a Maven repository and converts it to an OSGi bundle using the default instructions: The following Wrap URL specifies the Web-ContextPath explicitly: The following War URL converts the WAR file referenced by the WAR-URL property in the wicket-examples-1.4.7.bnd file and then converts the WAR into an OSGi bundle using the other instructions in the .bnd file: 15.5.6. Reference For more details about the war URL syntax, see the original Pax URL War Protocol documentation.
[ "file:C:/Projects/camel-bundle/target/foo-1.0-SNAPSHOT.jar file:C:\\\\Projects\\\\camel-bundle\\\\target\\\\foo-1.0-SNAPSHOT.jar", "file:C:\\Projects\\camel-bundle\\target\\foo-1.0-SNAPSHOT.jar // WRONG! file://C:/Projects/camel-bundle/target/foo-1.0-SNAPSHOT.jar // WRONG! file://C:\\\\Projects\\\\camel-bundle\\\\target\\\\foo-1.0-SNAPSHOT.jar // WRONG!", "mvn:[ repositoryUrl !] groupId / artifactId [/[ version ][/[ packaging ][/[ classifier ]]]]", "mvn:org.fusesource.example/bundle-demo mvn:org.fusesource.example/bundle-demo/1.1", "mvn: groupId / artifactId /// classifier mvn: groupId / artifactId / version // classifier mvn: groupId / artifactId // packaging / classifier mvn: groupId / artifactId // packaging", "mvn:org.fusesource.example/bundle-demo/[1.0.4,2.0)", "JBossFuse:karaf@root> config:edit org.ops4j.pax.url.mvn JBossFuse:karaf@root> config:proplist", "org.ops4j.pax.url.mvn.defaultRepositories = file:/path/to/JBossFuse/jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/system@snapshots@id=karaf.system,file:/home/userid/.m2/repository@snapshots@id=local,file:/path/to/JBossFuse/jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/local-repo@snapshots@id=karaf.local-repo,file:/path/to/JBossFuse/jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/system@snapshots@id=child.karaf.system org.ops4j.pax.url.mvn.globalChecksumPolicy = warn org.ops4j.pax.url.mvn.globalUpdatePolicy = daily org.ops4j.pax.url.mvn.localRepository = /path/to/JBossFuse/jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/data/repository org.ops4j.pax.url.mvn.repositories = http://repo1.maven.org/maven2@id=maven.central.repo, https://maven.repository.redhat.com/ga@id=redhat.ga.repo, https://maven.repository.redhat.com/earlyaccess/all@id=redhat.ea.repo, https://repository.jboss.org/nexus/content/groups/ea@id=fuseearlyaccess org.ops4j.pax.url.mvn.settings = /path/to/jboss-fuse-7.13.0.fuse-7_13_0-00012-redhat-00001/etc/maven-settings.xml org.ops4j.pax.url.mvn.useFallbackRepositories = false service.pid = org.ops4j.pax.url.mvn", "InstallDir /etc/org.ops4j.pax.url.mvn.cfg", "Path to the local maven repository which is used to avoid downloading artifacts when they already exist locally. The value of this property will be extracted from the settings.xml file above, or defaulted to: System.getProperty( \"user.home\" ) + \"/.m2/repository\" # org.ops4j.pax.url.mvn.localRepository=file:E:/Data/.m2/repository", "wrap: locationURL [, instructionsURL ][USD instructions ]", "wrap:mvn:commons-logging/commons-logging/1.1", "wrap:mvn:commons-logging/commons-logging/1.1,file:E:/Data/Examples/commons-logging-1.1.bnd", "wrap:mvn:commons-logging/commons-logging/1.1USDBundle-SymbolicName=apache-comm-log&Bundle-Version=1.1", "wrap:mvn:commons-logging/commons-logging/1.1\\USDBundle-SymbolicName=apache-comm-log&Bundle-Version=1.1", "war: warURL warref: instructionsURL", "war:mvn:org.apache.wicket/wicket-examples/1.4.7/war", "war:mvn:org.apache.wicket/wicket-examples/1.4.7/war?Web-ContextPath=wicket", "warref:file:E:/Data/Examples/wicket-examples-1.4.7.bnd" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/urlhandlers
Chapter 13. Finding and cleaning stale subvolumes (Technology Preview)
Chapter 13. Finding and cleaning stale subvolumes (Technology Preview) Sometimes stale subvolumes don't have a respective k8s reference attached. These subvolumes are of no use and can be deleted. You can find and delete stale subvolumes using the ODF CLI tool. Important Deleting stale subvolumes using the ODF CLI tool is a Technology Preview feature. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information, see Technology Preview Features Support Scope . Prerequisites Download the OpenShift Data Foundation command line interface (CLI) tool. With the Data Foundation CLI tool, you can effectively manage and troubleshoot your Data Foundation environment from a terminal. You can find a compatible version and download the CLI tool from the customer portal . Procedure Find the stale subvolumes by using the --stale flag with the subvolumes command: Example output: Delete the stale subvolumes: Replace <subvolumes> with a comma separated list of subvolumes from the output of the first command. The subvolumes must be of the same filesystem and subvolumegroup. Replace <filesystem> and <subvolumegroup> with the filesystem and subvolumegroup from the output of the first command. For example: Example output:
[ "odf subvolume ls --stale", "Filesystem Subvolume Subvolumegroup State ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110004 csi stale ocs-storagecluster-cephfilesystem csi-vol-427774b4-340b-11ed-8d66-0242ac110005 csi stale", "odf subvolume delete <subvolumes> <filesystem> <subvolumegroup>", "odf subvolume delete csi-vol-427774b4-340b-11ed-8d66-0242ac110004,csi-vol-427774b4-340b-11ed-8d66-0242ac110005 ocs-storagecluster csi", "Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted Info: subvolume csi-vol-427774b4-340b-11ed-8d66-0242ac110004 deleted" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_and_allocating_storage_resources/finding-and-cleaning-subvolumes_rhodf
Chapter 2. Creating a mirror registry with mirror registry for Red Hat OpenShift
Chapter 2. Creating a mirror registry with mirror registry for Red Hat OpenShift The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. If you already have a container image registry, such as Red Hat Quay, you can skip this section and go straight to Mirroring the OpenShift Container Platform image repository . 2.1. Prerequisites An OpenShift Container Platform subscription. Red Hat Enterprise Linux (RHEL) 8 and 9 with Podman 3.4.2 or later and OpenSSL installed. Fully qualified domain name for the Red Hat Quay service, which must resolve through a DNS server. Key-based SSH connectivity on the target host. SSH keys are automatically generated for local installs. For remote hosts, you must generate your own SSH keys. 2 or more vCPUs. 8 GB of RAM. About 12 GB for OpenShift Container Platform 4.14 release images, or about 358 GB for OpenShift Container Platform 4.14 release images and OpenShift Container Platform 4.14 Red Hat Operator images. Up to 1 TB per stream or more is suggested. Important These requirements are based on local testing results with only release images and Operator images. Storage requirements can vary based on your organization's needs. You might require more space, for example, when you mirror multiple z-streams. You can use standard Red Hat Quay functionality or the proper API callout to remove unnecessary images and free up space. 2.2. Mirror registry for Red Hat OpenShift introduction For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift subscription. It is available for download on the OpenShift console Downloads page. The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with preconfigured local storage and a local database. It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started. The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments. Use of the mirror registry for Red Hat OpenShift is optional if another container registry is already available in the install environment. 2.2.1. Mirror registry for Red Hat OpenShift limitations The following limitations apply to the mirror registry for Red Hat OpenShift : The mirror registry for Red Hat OpenShift is not a highly-available registry and only local file system storage is supported. 
It is not intended to replace Red Hat Quay or the internal image registry for OpenShift Container Platform. The mirror registry for Red Hat OpenShift is not intended to be a substitute for a production deployment of Red Hat Quay. The mirror registry for Red Hat OpenShift is only supported for hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as Release images or Red Hat Operator images. It uses local storage on your Red Hat Enterprise Linux (RHEL) machine, and storage supported by RHEL is supported by the mirror registry for Red Hat OpenShift . Note Because the mirror registry for Red Hat OpenShift uses local storage, you should remain aware of the storage usage consumed when mirroring images and use Red Hat Quay's garbage collection feature to mitigate potential issues. For more information about this feature, see "Red Hat Quay garbage collection". Support for Red Hat product images that are pushed to the mirror registry for Red Hat OpenShift for bootstrapping purposes are covered by valid subscriptions for each respective product. A list of exceptions to further enable the bootstrap experience can be found on the Self-managed Red Hat OpenShift sizing and subscription guide . Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift . Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to leverage the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly-available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters. 2.3. Mirroring on a local host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a local host using the mirror-registry installer tool. By doing so, users can create a local host registry running on port 443 for the purpose of storing a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a USDHOME/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". 
USD ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the registry by running the following command: USD podman login -u init \ -p <password> \ <host_example_com>:8443> \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 2.4. Updating mirror registry for Red Hat OpenShift from a local host This procedure explains how to update the mirror registry for Red Hat OpenShift from a local host using the upgrade command. Updating to the latest version ensures new features, bug fixes, and security vulnerability fixes. Important When upgrading from version 1 to version 2, be aware of the following constraints: The worker count is set to 1 because multiple writes are not allowed in SQLite. You must not use the mirror registry for Red Hat OpenShift user interface (UP). Do not access the sqlite-storage Podman volume during the upgrade. There is intermittent downtime of your mirror registry because it is restarted during the upgrade process. PostgreSQL data is backed up under the /USDHOME/quay-instal/quay-postgres-backup/ directory for recovery. Prerequisites You have installed the mirror registry for Red Hat OpenShift on a local host. Procedure If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 2.y, and your installation directory is the default at /etc/quay-install , you can enter the following command: USD sudo ./mirror-registry upgrade -v Note mirror registry for Red Hat OpenShift migrates Podman volumes for Quay storage, Postgres data, and /etc/quay-install data to the new USDHOME/quay-install location. This allows you to use mirror registry for Red Hat OpenShift without the --quayRoot flag during future upgrades. Users who upgrade mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 2.y and you used a custom quay configuration and storage directory in your 1.y deployment, you must pass in the --quayRoot and --quayStorage flags. 
For example: USD sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 2.y and want to specify a custom SQLite storage path, you must pass in the --sqliteStorage flag, for example: USD sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v 2.5. Mirroring on a remote host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a remote host using the mirror-registry tool. By doing so, users can create a registry to hold a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a USDHOME/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". USD ./mirror-registry install -v \ --targetHostname <host_example_com> \ --targetUsername <example_user> \ -k ~/.ssh/my_ssh_key \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the mirror registry by running the following command: USD podman login -u init \ -p <password> \ <host_example_com>:8443> \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 2.6. Updating mirror registry for Red Hat OpenShift from a remote host This procedure explains how to update the mirror registry for Red Hat OpenShift from a remote host using the upgrade command. Updating to the latest version ensures bug fixes and security vulnerability fixes. Important When upgrading from version 1 to version 2, be aware of the following constraints: The worker count is set to 1 because multiple writes are not allowed in SQLite. 
You must not use the mirror registry for Red Hat OpenShift user interface (UP). Do not access the sqlite-storage Podman volume during the upgrade. There is intermittent downtime of your mirror registry because it is restarted during the upgrade process. PostgreSQL data is backed up under the /USDHOME/quay-instal/quay-postgres-backup/ directory for recovery. Prerequisites You have installed the mirror registry for Red Hat OpenShift on a remote host. Procedure To upgrade the mirror registry for Red Hat OpenShift from a remote host, enter the following command: USD ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key Note Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade -v flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. If you are upgrading the mirror registry for Red Hat OpenShift from 1.3 2.y and want to specify a custom SQLite storage path, you must pass in the --sqliteStorage flag, for example: USD ./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage 2.7. Replacing mirror registry for Red Hat OpenShift SSL/TLS certificates In some cases, you might want to update your SSL/TLS certificates for the mirror registry for Red Hat OpenShift . This is useful in the following scenarios: If you are replacing the current mirror registry for Red Hat OpenShift certificate. If you are using the same certificate as the mirror registry for Red Hat OpenShift installation. If you are periodically updating the mirror registry for Red Hat OpenShift certificate. Use the following procedure to replace mirror registry for Red Hat OpenShift SSL/TLS certificates. Prerequisites You have downloaded the ./mirror-registry binary from the OpenShift console Downloads page. Procedure Enter the following command to install the mirror registry for Red Hat OpenShift : USD ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> This installs the mirror registry for Red Hat OpenShift to the USDHOME/quay-install directory. Prepare a new certificate authority (CA) bundle and generate new ssl.key and ssl.crt key files. For more information, see Using SSL/TLS to protect connections to Red Hat Quay . Assign /USDHOME/quay-install an environment variable, for example, QUAY , by entering the following command: USD export QUAY=/USDHOME/quay-install Copy the new ssl.crt file to the /USDHOME/quay-install directory by entering the following command: USD cp ~/ssl.crt USDQUAY/quay-config Copy the new ssl.key file to the /USDHOME/quay-install directory by entering the following command: USD cp ~/ssl.key USDQUAY/quay-config Restart the quay-app application pod by entering the following command: USD systemctl --user restart quay-app 2.8. Uninstalling the mirror registry for Red Hat OpenShift You can uninstall the mirror registry for Red Hat OpenShift from your local host by running the following command: USD ./mirror-registry uninstall -v \ --quayRoot <example_directory_name> Note Deleting the mirror registry for Red Hat OpenShift will prompt the user before deletion. You can use --autoApprove to skip this prompt. 
Users who install the mirror registry for Red Hat OpenShift with the --quayRoot flag must include the --quayRoot flag when uninstalling. For example, if you installed the mirror registry for Red Hat OpenShift with --quayRoot example_directory_name , you must include that string to properly uninstall the mirror registry. 2.9. Mirror registry for Red Hat OpenShift flags The following flags are available for the mirror registry for Red Hat OpenShift : Flags Description --autoApprove A boolean value that disables interactive prompts. If set to true , the quayRoot directory is automatically deleted when uninstalling the mirror registry. Defaults to false if left unspecified. --initPassword The password of the init user created during Quay installation. Must be at least eight characters and contain no whitespace. --initUser string Shows the username of the initial user. Defaults to init if left unspecified. --no-color , -c Allows users to disable color sequences and propagate that to Ansible when running install, uninstall, and upgrade commands. --quayHostname The fully-qualified domain name of the mirror registry that clients will use to contact the registry. Equivalent to SERVER_HOSTNAME in the Quay config.yaml . Must resolve by DNS. Defaults to <targetHostname>:8443 if left unspecified. [1] --quayStorage The folder where Quay persistent storage data is saved. Defaults to the quay-storage Podman volume. Root privileges are required to uninstall. --quayRoot , -r The directory where container image layer and configuration data is saved, including rootCA.key , rootCA.pem , and rootCA.srl certificates. Defaults to USDHOME/quay-install if left unspecified. --sqliteStorage The folder where SQLite database data is saved. Defaults to sqlite-storage Podman volume if not specified. Root is required to uninstall. --ssh-key , -k The path of your SSH identity key. Defaults to ~/.ssh/quay_installer if left unspecified. --sslCert The path to the SSL/TLS public key / certificate. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --sslCheckSkip Skips the check for the certificate hostname against the SERVER_HOSTNAME in the config.yaml file. [2] --sslKey The path to the SSL/TLS private key used for HTTPS communication. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --targetHostname , -H The hostname of the target you want to install Quay to. Defaults to USDHOST , for example, a local host, if left unspecified. --targetUsername , -u The user on the target host which will be used for SSH. Defaults to USDUSER , for example, the current user if left unspecified. --verbose , -v Shows debug logs and Ansible playbook outputs. --version Shows the version for the mirror registry for Red Hat OpenShift . --quayHostname must be modified if the public DNS name of your system is different from the local hostname. Additionally, the --quayHostname flag does not support installation with an IP address. Installation with a hostname is required. --sslCheckSkip is used in cases when the mirror registry is set behind a proxy and the exposed hostname is different from the internal Quay hostname. It can also be used when users do not want the certificates to be validated against the provided Quay hostname during installation. 2.10. 
Mirror registry for Red Hat OpenShift release notes The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. These release notes track the development of the mirror registry for Red Hat OpenShift in OpenShift Container Platform. 2.10.1. Mirror registry for Red Hat OpenShift 2.0 release notes The following sections provide details for each 2.0 release of the mirror registry for Red Hat OpenShift. 2.10.1.1. Mirror registry for Red Hat OpenShift 2.0.5 Issued: 13 January 2025 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.5. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2025:0298 - mirror registry for Red Hat OpenShift 2.0.5 2.10.1.2. Mirror registry for Red Hat OpenShift 2.0.4 Issued: 06 January 2025 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.4. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2025:0033 - mirror registry for Red Hat OpenShift 2.0.4 2.10.1.3. Mirror registry for Red Hat OpenShift 2.0.3 Issued: 25 November 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.3. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:10181 - mirror registry for Red Hat OpenShift 2.0.3 2.10.1.4. Mirror registry for Red Hat OpenShift 2.0.2 Issued: 31 October 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.2. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:8370 - mirror registry for Red Hat OpenShift 2.0.2 2.10.1.5. Mirror registry for Red Hat OpenShift 2.0.1 Issued: 26 September 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.1. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:7070 - mirror registry for Red Hat OpenShift 2.0.1 2.10.1.6. Mirror registry for Red Hat OpenShift 2.0.0 Issued: 03 September 2024 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.12.0. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:5277 - mirror registry for Red Hat OpenShift 2.0.0 2.10.1.6.1. New features With the release of mirror registry for Red Hat OpenShift , the internal database has been upgraded from PostgreSQL to SQLite. As a result, data is now stored on the sqlite-storage Podman volume by default, and the overall tarball size is reduced by 300 MB. New installations use SQLite by default. Before upgrading to version 2.0, see "Updating mirror registry for Red Hat OpenShift from a local host" or "Updating mirror registry for Red Hat OpenShift from a remote host" depending on your environment. A new feature flag, --sqliteStorage has been added. With this flag, you can manually set the location where SQLite database data is saved. Mirror registry for Red Hat OpenShift is now available on IBM Power and IBM Z architectures ( s390x and ppc64le ). 2.10.2. Mirror registry for Red Hat OpenShift 1.3 release notes The following sections provide details for each 1.3.z release of the mirror registry for Red Hat OpenShift 2.10.2.1. Mirror registry for Red Hat OpenShift 1.3.11 Issued: 2024-04-23 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.15. 
The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2024:1758 - mirror registry for Red Hat OpenShift 1.3.11 2.10.2.2. Mirror registry for Red Hat OpenShift 1.3.10 Issued: 2023-12-07 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.14. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:7628 - mirror registry for Red Hat OpenShift 1.3.10 2.10.2.3. Mirror registry for Red Hat OpenShift 1.3.9 Issued: 2023-09-19 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.12. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:5241 - mirror registry for Red Hat OpenShift 1.3.9 2.10.2.4. Mirror registry for Red Hat OpenShift 1.3.8 Issued: 2023-08-16 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.11. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:4622 - mirror registry for Red Hat OpenShift 1.3.8 2.10.2.5. Mirror registry for Red Hat OpenShift 1.3.7 Issued: 2023-07-19 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.10. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:4087 - mirror registry for Red Hat OpenShift 1.3.7 2.10.2.6. Mirror registry for Red Hat OpenShift 1.3.6 Issued: 2023-05-30 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.8. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:3302 - mirror registry for Red Hat OpenShift 1.3.6 2.10.2.7. Mirror registry for Red Hat OpenShift 1.3.5 Issued: 2023-05-18 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.7. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:3225 - mirror registry for Red Hat OpenShift 1.3.5 2.10.2.8. Mirror registry for Red Hat OpenShift 1.3.4 Issued: 2023-04-25 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.6. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1914 - mirror registry for Red Hat OpenShift 1.3.4 2.10.2.9. Mirror registry for Red Hat OpenShift 1.3.3 Issued: 2023-04-05 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.5. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1528 - mirror registry for Red Hat OpenShift 1.3.3 2.10.2.10. Mirror registry for Red Hat OpenShift 1.3.2 Issued: 2023-03-21 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.4. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1376 - mirror registry for Red Hat OpenShift 1.3.2 2.10.2.11. Mirror registry for Red Hat OpenShift 1.3.1 Issued: 2023-03-7 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.3. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:1086 - mirror registry for Red Hat OpenShift 1.3.1 2.10.2.12. Mirror registry for Red Hat OpenShift 1.3.0 Issued: 2023-02-20 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.8.1. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2023:0558 - mirror registry for Red Hat OpenShift 1.3.0 2.10.2.12.1. 
New features Mirror registry for Red Hat OpenShift is now supported on Red Hat Enterprise Linux (RHEL) 9 installations. IPv6 support is now available on mirror registry for Red Hat OpenShift local host installations. IPv6 is currently unsupported on mirror registry for Red Hat OpenShift remote host installations. A new feature flag, --quayStorage , has been added. By specifying this flag, you can manually set the location for the Quay persistent storage. A new feature flag, --pgStorage , has been added. By specifying this flag, you can manually set the location for the Postgres persistent storage. Previously, users were required to have root privileges ( sudo ) to install mirror registry for Red Hat OpenShift . With this update, sudo is no longer required to install mirror registry for Red Hat OpenShift . When mirror registry for Red Hat OpenShift was installed with sudo , an /etc/quay-install directory that contained installation files, local storage, and the configuration bundle was created. With the removal of the sudo requirement, installation files and the configuration bundle are now installed to USDHOME/quay-install . Local storage, for example Postgres and Quay, are now stored in named volumes automatically created by Podman. To override the default directories that these files are stored in, you can use the command line arguments for mirror registry for Red Hat OpenShift . For more information about mirror registry for Red Hat OpenShift command line arguments, see " Mirror registry for Red Hat OpenShift flags". 2.10.2.12.2. Bug fixes Previously, the following error could be returned when attempting to uninstall mirror registry for Red Hat OpenShift : ["Error: no container with name or ID \"quay-postgres\" found: no such container"], "stdout": "", "stdout_lines": [] * . With this update, the order that mirror registry for Red Hat OpenShift services are stopped and uninstalled have been changed so that the error no longer occurs when uninstalling mirror registry for Red Hat OpenShift . For more information, see PROJQUAY-4629 . 2.10.3. Mirror registry for Red Hat OpenShift 1.2 release notes The following sections provide details for each 1.2.z release of the mirror registry for Red Hat OpenShift 2.10.3.1. Mirror registry for Red Hat OpenShift 1.2.9 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.10. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:7369 - mirror registry for Red Hat OpenShift 1.2.9 2.10.3.2. Mirror registry for Red Hat OpenShift 1.2.8 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.9. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:7065 - mirror registry for Red Hat OpenShift 1.2.8 2.10.3.3. Mirror registry for Red Hat OpenShift 1.2.7 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.8. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:6500 - mirror registry for Red Hat OpenShift 1.2.7 2.10.3.3.1. Bug fixes Previously, getFQDN() relied on the fully-qualified domain name (FQDN) library to determine its FQDN, and the FQDN library tried to read the /etc/hosts folder directly. Consequently, on some Red Hat Enterprise Linux CoreOS (RHCOS) installations with uncommon DNS configurations, the FQDN library would fail to install and abort the installation. With this update, mirror registry for Red Hat OpenShift uses hostname to determine the FQDN. 
As a result, the FQDN library does not fail to install. ( PROJQUAY-4139 ) 2.10.3.4. Mirror registry for Red Hat OpenShift 1.2.6 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.7. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:6278 - mirror registry for Red Hat OpenShift 1.2.6 2.10.3.4.1. New features A new feature flag, --no-color ( -c ) has been added. This feature flag allows users to disable color sequences and propagate that to Ansible when running install, uninstall, and upgrade commands. 2.10.3.5. Mirror registry for Red Hat OpenShift 1.2.5 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.6. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:6071 - mirror registry for Red Hat OpenShift 1.2.5 2.10.3.6. Mirror registry for Red Hat OpenShift 1.2.4 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.5. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:5884 - mirror registry for Red Hat OpenShift 1.2.4 2.10.3.7. Mirror registry for Red Hat OpenShift 1.2.3 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.4. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:5649 - mirror registry for Red Hat OpenShift 1.2.3 2.10.3.8. Mirror registry for Red Hat OpenShift 1.2.2 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.3. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:5501 - mirror registry for Red Hat OpenShift 1.2.2 2.10.3.9. Mirror registry for Red Hat OpenShift 1.2.1 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.2. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:4986 - mirror registry for Red Hat OpenShift 1.2.1 2.10.3.10. Mirror registry for Red Hat OpenShift 1.2.0 Mirror registry for Red Hat OpenShift is now available with Red Hat Quay 3.7.1. The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:4986 - mirror registry for Red Hat OpenShift 1.2.0 2.10.3.10.1. Bug fixes Previously, all components and workers running inside of the Quay pod Operator had log levels set to DEBUG . As a result, large traffic logs were created that consumed unnecessary space. With this update, log levels are set to WARN by default, which reduces traffic information while emphasizing problem scenarios. ( PROJQUAY-3504 ) 2.10.4. Mirror registry for Red Hat OpenShift 1.1 release notes The following section provides details 1.1.0 release of the mirror registry for Red Hat OpenShift 2.10.4.1. Mirror registry for Red Hat OpenShift 1.1.0 The following advisory is available for the mirror registry for Red Hat OpenShift : RHBA-2022:0956 - mirror registry for Red Hat OpenShift 1.1.0 2.10.4.1.1. New features A new command, mirror-registry upgrade has been added. This command upgrades all container images without interfering with configurations or data. Note If quayRoot was previously set to something other than default, it must be passed into the upgrade command. 2.10.4.1.2. Bug fixes Previously, the absence of quayHostname or targetHostname did not default to the local hostname. With this update, quayHostname and targetHostname now default to the local hostname if they are missing. 
( PROJQUAY-3079 ) Previously, the command ./mirror-registry --version returned an unknown flag error. Now, running ./mirror-registry --version returns the current version of the mirror registry for Red Hat OpenShift . ( PROJQUAY-3086 ) Previously, users could not set a password during installation, for example, when running ./mirror-registry install --initUser <user_name> --initPassword <password> --verbose . With this update, users can set a password during installation. ( PROJQUAY-3149 ) Previously, the mirror registry for Red Hat OpenShift did not recreate pods if they were destroyed. Now, pods are recreated if they are destroyed. ( PROJQUAY-3261 ) 2.11. Troubleshooting mirror registry for Red Hat OpenShift To assist in troubleshooting mirror registry for Red Hat OpenShift , you can gather logs of systemd services installed by the mirror registry. The following services are installed: quay-app.service quay-postgres.service quay-redis.service quay-pod.service Prerequisites You have installed mirror registry for Red Hat OpenShift . Procedure If you installed mirror registry for Red Hat OpenShift with root privileges, you can get the status information of its systemd services by entering the following command: USD sudo systemctl status <service> If you installed mirror registry for Red Hat OpenShift as a standard user, you can get the status information of its systemd services by entering the following command: USD systemctl --user status <service> 2.12. Additional resources Red Hat Quay garbage collection Using SSL to protect connections to Red Hat Quay Configuring the system to trust the certificate authority Mirroring the OpenShift Container Platform image repository Mirroring Operator catalogs for use with disconnected clusters
[ "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "sudo ./mirror-registry upgrade -v", "sudo ./mirror-registry upgrade --quayHostname <host_example_com> --quayRoot <example_directory_name> --quayStorage <example_directory_name>/quay-storage -v", "sudo ./mirror-registry upgrade --sqliteStorage <example_directory_name>/sqlite-storage -v", "./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key", "./mirror-registry upgrade -v --targetHostname <remote_host_url> --targetUsername <user_name> -k ~/.ssh/my_ssh_key --sqliteStorage <example_directory_name>/quay-storage", "./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "export QUAY=/USDHOME/quay-install", "cp ~/ssl.crt USDQUAY/quay-config", "cp ~/ssl.key USDQUAY/quay-config", "systemctl --user restart quay-app", "./mirror-registry uninstall -v --quayRoot <example_directory_name>", "sudo systemctl status <service>", "systemctl --user status <service>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/disconnected_installation_mirroring/installing-mirroring-creating-registry
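The troubleshooting section above lists the systemd services that the mirror registry for Red Hat OpenShift installs. The following is a minimal sketch, not part of the official documentation, showing one way to collect status and recent journal output for all four services in a single pass; it assumes a rootless (standard user) installation, so it uses the --user flag, and the time window is an arbitrary choice.
# Sketch: gather status and recent logs for the mirror registry systemd services.
# Assumes a rootless install; for a root install, drop --user and prefix the commands with sudo.
for service in quay-app quay-postgres quay-redis quay-pod; do
  echo "=== ${service}.service ==="
  systemctl --user status "${service}.service" --no-pager
  journalctl --user -u "${service}.service" --since "1 hour ago" --no-pager
done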
Chapter 8. Managing pools on the Ceph dashboard
Chapter 8. Managing pools on the Ceph dashboard As a storage administrator, you can create, edit, and delete pools on the Red Hat Ceph Storage dashboard. This section covers the following administrative tasks: Creating pools on the Ceph dashboard . Editing pools on the Ceph dashboard . Deleting pools on the Ceph dashboard . 8.1. Creating pools on the Ceph dashboard When you deploy a storage cluster without creating a pool, Ceph uses the default pools for storing data. You can create pools to logically partition your storage objects on the Red Hat Ceph Storage dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Procedure Log in to the dashboard. On the navigation menu, click Pools . Click Create . In the Create Pool window, set the following parameters: Figure 8.1. Creating pools Set the name of the pool and select the pool type. Select either replicated or Erasure Coded (EC) pool type. Set the Placement Group (PG) number. Optional: If using a replicated pool type, set the replicated size. Optional: If using an EC pool type, configure the following additional settings. Optional: To see the settings for the currently selected EC profile, click the question mark. Optional: Add a new EC profile by clicking the plus symbol. Optional: Click the pencil symbol to select an application for the pool. Optional: Set the CRUSH rule, if applicable. Optional: If compression is required, select passive , aggressive , or force . Optional: Set the Quotas. Optional: Set the Quality of Service configuration. Click Create Pool . You get a notification that the pool was created successfully. Additional Resources For more information, see the Ceph pools section in the Red Hat Ceph Storage Architecture Guide . 8.2. Editing pools on the Ceph dashboard You can edit the pools on the Red Hat Ceph Storage Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. A pool is created. Procedure Log in to the dashboard. On the navigation menu, click Pools . To edit the pool, click its row. Select Edit in the Edit drop-down. In the Edit Pool window, edit the required parameters and click Edit Pool : Figure 8.2. Editing pools You get a notification that the pool was updated successfully. Additional Resources See the Ceph pools in the Red Hat Ceph Storage Architecture Guide for more information. See the Pool values in the Red Hat Ceph Storage Storage Strategies Guide for more information on Compression Modes. 8.3. Deleting pools on the Ceph dashboard You can delete the pools on the Red Hat Ceph Storage Dashboard. Ensure that the value of mon_allow_pool_delete is set to True in the Manager modules. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. A pool is created. Procedure Log in to the dashboard. On the navigation bar, in the Cluster drop-down menu, click Configuration . In the Level drop-down menu, select Advanced : Search for mon_allow_pool_delete , click Edit , and set all the values to true : Figure 8.3. Configuration to delete pools On the navigation bar, click Pools : To delete the pool, click its row: From the Edit drop-down menu, select Delete . In the Delete Pool window, click the Yes, I am sure box and then click Delete Pool to save the settings: Figure 8.4. Delete pools Additional Resources See the Ceph pools in the Red Hat Ceph Storage Architecture Guide for more information. See the Pool values in the Red Hat Ceph Storage Storage Strategies Guide for more information on Compression Modes.
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/dashboard_guide/management-of-pools-on-the-ceph-dashboard
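The pool workflow above is performed through the dashboard. For reference, a roughly equivalent command-line sketch is shown here; it is not part of the dashboard procedure, and the pool name, PG count, replica size, and application tag are illustrative placeholders rather than values from the section above.
# Sketch: dashboard pool operations expressed with the ceph CLI (illustrative values only).
ceph osd pool create example_pool 32 replicated                            # create a replicated pool with 32 PGs
ceph osd pool application enable example_pool rbd                          # tag the pool with an application
ceph osd pool set example_pool size 3                                      # set the replicated size
ceph config set mon mon_allow_pool_delete true                             # allow pool deletion, as in the dashboard step
ceph osd pool rm example_pool example_pool --yes-i-really-really-mean-it   # delete the pool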
Appendix D. Glossary of terms
Appendix D. Glossary of terms D.1. Virtualization terms Administration Portal A web user interface provided by Red Hat Virtualization Manager, based on the oVirt engine web user interface. It allows administrators to manage and monitor cluster resources like networks, storage domains, and virtual machine templates. Hosted Engine The instance of Red Hat Virtualization Manager that manages RHHI for Virtualization. Hosted Engine virtual machine The virtual machine that acts as Red Hat Virtualization Manager. The Hosted Engine virtual machine runs on a virtualization host that is managed by the instance of Red Hat Virtualization Manager that is running on the Hosted Engine virtual machine. Manager node A virtualization host that runs Red Hat Virtualization Manager directly, rather than running it in a Hosted Engine virtual machine. Red Hat Enterprise Linux host A physical machine installed with Red Hat Enterprise Linux plus additional packages to provide the same capabilities as a Red Hat Virtualization host. This type of host is not supported for use with RHHI for Virtualization. Red Hat Virtualization An operating system and management interface for virtualizing resources, processes, and applications for Linux and Microsoft Windows workloads. Red Hat Virtualization host A physical machine installed with Red Hat Virtualization that provides the physical resources to support the virtualization of resources, processes, and applications for Linux and Microsoft Windows workloads. This is the only type of host supported with RHHI for Virtualization. Red Hat Virtualization Manager A server that runs the management and monitoring capabilities of Red Hat Virtualization. Self-Hosted Engine node A virtualization host that contains the Hosted Engine virtual machine. All hosts in a RHHI for Virtualization deployment are capable of becoming Self-Hosted Engine nodes, but there is only one Self-Hosted Engine node at a time. storage domain A named collection of images, templates, snapshots, and metadata. A storage domain can be comprised of block devices or file systems. Storage domains are attached to data centers in order to provide access to the collection of images, templates, and so on to hosts in the data center. virtualization host A physical machine with the ability to virtualize physical resources, processes, and applications for client access. VM Portal A web user interface provided by Red Hat Virtualization Manager. It allows users to manage and monitor virtual machines. D.2. Storage terms brick An exported directory on a server in a trusted storage pool. cache logical volume A small, fast logical volume used to improve the performance of a large, slow logical volume. geo-replication One way asynchronous replication of data from a source Gluster volume to a target volume. Geo-replication works across local and wide area networks as well as the Internet. The target volume can be a Gluster volume in a different trusted storage pool, or another type of storage. gluster volume A logical group of bricks that can be configured to distribute, replicate, or disperse data according to workload requirements. logical volume management (LVM) A method of combining physical disks into larger virtual partitions. Physical volumes are placed in volume groups to form a pool of storage that can be divided into logical volumes as needed. Red Hat Gluster Storage An operating system based on Red Hat Enterprise Linux with additional packages that provide support for distributed, software-defined storage. 
source volume The Gluster volume that data is being copied from during geo-replication. storage host A physical machine that provides storage for client access. target volume The Gluster volume or other storage volume that data is being copied to during geo-replication. thin provisioning Provisioning storage such that only the space that is required is allocated at creation time, with further space being allocated dynamically according to need over time. thick provisioning Provisioning storage such that all space is allocated at creation time, regardless of whether that space is required immediately. trusted storage pool A group of Red Hat Gluster Storage servers that recognise each other as trusted peers. D.3. Hyperconverged Infrastructure terms Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization RHHI for Virtualization is a single product that provides both virtual compute and virtual storage resources. Red Hat Virtualization and Red Hat Gluster Storage are installed in a converged configuration, where the services of both products are available on each physical machine in a cluster. hyperconverged host A physical machine that provides physical storage, which is virtualized and consumed by virtualized processes and applications run on the same host. All hosts installed with RHHI for Virtualization are hyperconverged hosts. Web Console The web user interface for deploying, managing, and monitoring RHHI for Virtualization. The Web Console is provided by the Web Console service and plugins for Red Hat Virtualization Manager.
null
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/glossary-of-terms
Chapter 7. OLM v1
Chapter 7. OLM v1 7.1. About Operator Lifecycle Manager v1 (Technology Preview) Operator Lifecycle Manager (OLM) has been included with OpenShift Container Platform 4 since its initial release. OpenShift Container Platform 4.18 includes components for a next-generation iteration of OLM as a Generally Available (GA) feature, known during this phase as OLM v1 . This updated framework evolves many of the concepts that have been part of previous versions of OLM and adds new capabilities. Starting in OpenShift Container Platform 4.17, documentation for OLM v1 has been moved to the following new guide: Extensions (OLM v1)
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/operators/olm-v1
Chapter 2. Embedded caches
Chapter 2. Embedded caches Add Data Grid as a dependency to your Java project and use embedded caches that increase application performance and give you capabilities to handle complex use cases. 2.1. Embedded cache tutorials You can run embedded cache tutorials directly in your IDE or from the command line as follows: $ mvn -s /path/to/maven-settings.xml clean package exec:exec Tutorial link Description Distributed caches Demonstrates how Distributed Caches work. Replicated caches Demonstrates how Replicated Caches work. Invalidated caches Demonstrates how Invalidated Caches work. Transactions Demonstrates how transactions work. Streams Demonstrates how Distributed Streams work. JCache integration Demonstrates how JCache works. Functional Maps Demonstrates how Functional Map API works. Map API Demonstrates how the Map API works with Data Grid caches. Multimap Demonstrates how to use Multimap. Queries Uses Data Grid Query to perform full-text queries on cache values. Clustered Listeners Detects when data changes in an embedded cache with Clustered Listeners. Counters Demonstrates how to use an embedded Clustered Counter. Clustered Locks Demonstrates how to use an embedded Clustered Lock. Clustered execution Demonstrates how to use embedded Clustered execution. Data Grid documentation You can find more resources about embedded caches in our documentation at: Embedding Data Grid Caches Querying Data Grid caches 2.2. Kubernetes and OpenShift tutorial This tutorial contains instructions on how to run Infinispan library mode (as a microservice) in Kubernetes/OpenShift. Prerequisites Maven and a Docker daemon running in the background. A running OpenShift or Kubernetes cluster. Building the tutorial This tutorial is built using the Maven command: mvn package Note that the target/ directory contains additional directories like docker (with the generated Dockerfile) and classes/META-INF/jkube with Kubernetes and OpenShift deployment templates. Tip If the Docker daemon is down, the build will omit processing Dockerfiles. Use the docker profile to turn it on manually. Deploying the tutorial to Kubernetes This is handled by the JKube Maven plugin; just invoke: mvn k8s:build k8s:push k8s:resource k8s:apply -Doptions.image=<IMAGE_NAME> 1 1 IMAGE_NAME must be replaced with the FQN of the container to deploy to Kubernetes. This container must be created in a repository that you have permissions to push to and is accessible from within your Kubernetes cluster. Viewing and scaling up Everything should be up and running at this point. Now log in to the OpenShift or Kubernetes cluster and scale the application: kubectl scale --replicas=3 deployment/$(kubectl get rs --namespace=myproject | grep infinispan | awk '{print $1}') --namespace=myproject Undeploying the tutorial This is handled by the JKube Maven plugin; just invoke:
[ "mvn -s /path/to/maven-settings.xml clean package exec:exec", "mvn package", "mvn k8s:build k8s:push k8s:resource k8s:apply -Doptions.image=<IMAGE_NAME> 1", "scale --replicas=3 deployment/USD(kubectl get rs --namespace=myproject | grep infinispan | awk '{print USD1}') --namespace=myproject", "mvn k8s:undeploy" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_code_tutorials/embedded-tutorials
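For convenience, the build, deploy, scale, and undeploy commands from the Kubernetes/OpenShift tutorial above can be read as one sequence. This is only a sketch of the documented flow, not an addition to it; <IMAGE_NAME> and the myproject namespace are the placeholders used by the tutorial and must be replaced for your environment.
# Sketch: end-to-end flow for the Kubernetes/OpenShift tutorial (placeholder image name and namespace).
mvn package                                                                   # build; generates the Dockerfile and JKube templates under target/
mvn k8s:build k8s:push k8s:resource k8s:apply -Doptions.image=<IMAGE_NAME>    # deploy with the JKube Maven plugin
kubectl scale --replicas=3 \
  deployment/$(kubectl get rs --namespace=myproject | grep infinispan | awk '{print $1}') \
  --namespace=myproject                                                       # scale the application to three replicas
mvn k8s:undeploy                                                              # remove the deployment when finished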
Chapter 2. access
Chapter 2. access This chapter describes the commands under the access command. 2.1. access rule delete Delete access rule(s) Usage: Table 2.1. Positional arguments Value Summary <access-rule> Access rule id(s) to delete Table 2.2. Command arguments Value Summary -h, --help Show this help message and exit 2.2. access rule list List access rules Usage: Table 2.3. Command arguments Value Summary -h, --help Show this help message and exit --user <user> User whose access rules to list (name or id) --user-domain <user-domain> Domain the user belongs to (name or id). This can be used in case collisions between user names exist. Table 2.4. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 2.5. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 2.6. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 2.7. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 2.3. access rule show Display access rule details Usage: Table 2.8. Positional arguments Value Summary <access-rule> Access rule id to display Table 2.9. Command arguments Value Summary -h, --help Show this help message and exit Table 2.10. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 2.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 2.12. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 2.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 2.4. access token create Create an access token Usage: Table 2.14. Command arguments Value Summary -h, --help Show this help message and exit --consumer-key <consumer-key> Consumer key (required) --consumer-secret <consumer-secret> Consumer secret (required) --request-key <request-key> Request token to exchange for access token (required) --request-secret <request-secret> Secret associated with <request-key> (required) --verifier <verifier> Verifier associated with <request-key> (required) Table 2.15. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 2.16. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 2.17. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 2.18. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack access rule delete [-h] <access-rule> [<access-rule> ...]", "openstack access rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--user <user>] [--user-domain <user-domain>]", "openstack access rule show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <access-rule>", "openstack access token create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --consumer-key <consumer-key> --consumer-secret <consumer-secret> --request-key <request-key> --request-secret <request-secret> --verifier <verifier>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/access
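As a concrete illustration of the options documented in the access chapter above, the following sketch lists a user's access rules as JSON, shows a single rule as YAML, and deletes it. Only flags documented above are used; the user name, domain, and <access_rule_id> are placeholders, not values from this chapter.
# Sketch: inspect and remove access rules using the documented options (placeholder names and ID).
openstack access rule list --user example_user --user-domain Default -f json   # list rules for a user in JSON format
openstack access rule show -f yaml <access_rule_id>                            # display one rule's details as YAML
openstack access rule delete <access_rule_id>                                  # delete the rule by ID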
Chapter 6. PriorityClass [scheduling.k8s.io/v1]
Chapter 6. PriorityClass [scheduling.k8s.io/v1] Description PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer. Type object Required value 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources description string description is an arbitrary string that usually provides guidelines on when this priority class should be used. globalDefault boolean globalDefault specifies whether this PriorityClass should be considered as the default priority for pods that do not have any priority class. Only one PriorityClass can be marked as globalDefault . However, if more than one PriorityClasses exists with their globalDefault field set to true, the smallest value of such global default PriorityClasses will be used as the default priority. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata preemptionPolicy string preemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. Possible enum values: - "Never" means that pod never preempts other pods with lower priority. - "PreemptLowerPriority" means that pod can preempt other pods with lower priority. value integer value represents the integer value of this priority class. This is the actual priority that pods receive when they have the name of this class in their pod spec. 6.2. API endpoints The following API endpoints are available: /apis/scheduling.k8s.io/v1/priorityclasses DELETE : delete collection of PriorityClass GET : list or watch objects of kind PriorityClass POST : create a PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses GET : watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. /apis/scheduling.k8s.io/v1/priorityclasses/{name} DELETE : delete a PriorityClass GET : read the specified PriorityClass PATCH : partially update the specified PriorityClass PUT : replace the specified PriorityClass /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} GET : watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/scheduling.k8s.io/v1/priorityclasses Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of PriorityClass Table 6.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 6.3. Body parameters Parameter Type Description body DeleteOptions schema Table 6.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind PriorityClass Table 6.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. 
The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.6. HTTP responses HTTP code Reponse body 200 - OK PriorityClassList schema 401 - Unauthorized Empty HTTP method POST Description create a PriorityClass Table 6.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.8. Body parameters Parameter Type Description body PriorityClass schema Table 6.9. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 202 - Accepted PriorityClass schema 401 - Unauthorized Empty 6.2.2. /apis/scheduling.k8s.io/v1/watch/priorityclasses Table 6.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of PriorityClass. deprecated: use the 'watch' parameter with a list operation instead. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/scheduling.k8s.io/v1/priorityclasses/{name} Table 6.12. Global path parameters Parameter Type Description name string name of the PriorityClass Table 6.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a PriorityClass Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. 
Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.15. Body parameters Parameter Type Description body DeleteOptions schema Table 6.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified PriorityClass Table 6.17. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified PriorityClass Table 6.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.19. Body parameters Parameter Type Description body Patch schema Table 6.20. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified PriorityClass Table 6.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.22. Body parameters Parameter Type Description body PriorityClass schema Table 6.23. HTTP responses HTTP code Reponse body 200 - OK PriorityClass schema 201 - Created PriorityClass schema 401 - Unauthorized Empty 6.2.4. /apis/scheduling.k8s.io/v1/watch/priorityclasses/{name} Table 6.24. Global path parameters Parameter Type Description name string name of the PriorityClass Table 6.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. HTTP method GET Description watch changes to an object of kind PriorityClass. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/schedule_and_quota_apis/priorityclass-scheduling-k8s-io-v1
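To make the PriorityClass schema above concrete, here is a minimal sketch that creates a PriorityClass from the fields documented in the spec (value, globalDefault, description, preemptionPolicy), reads it back, and deletes it. The object name and numeric value are illustrative assumptions, not values taken from the reference above.
# Sketch: create and inspect a PriorityClass using the documented spec fields (illustrative values).
cat <<'EOF' | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: example-high-priority
value: 1000000
globalDefault: false
preemptionPolicy: PreemptLowerPriority
description: "Example priority class for important workloads."
EOF
kubectl get priorityclass example-high-priority -o yaml   # read the specified PriorityClass
kubectl delete priorityclass example-high-priority        # delete a PriorityClass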
9.15. XSD Schema File
9.15. XSD Schema File You can import XML Schema (XSD) files using the steps below. In Model Explorer, right-click and then click Import... , or click the File > Import... action in the toolbar, or select a project, folder, or model in the tree and click Import... Select the import option Teiid Designer > XML Schemas and click > . Select either Import XSD Schemas from file system or Import XSD Schemas via URL and click > . If importing from the file system, the Import XSD Files dialog is displayed. Click the Browse button to find the directory that contains the XSD file(s) you wish to import. To select all of the XSD files in the directory, click the checkbox next to the folder in the left panel. To select individual XSD files, click the check boxes next to the files you want in the right panel. Figure 9.60. Select XSD From File System If importing from a URL, select the Import XML Schemas via URL option and click OK to display the final Add XML Schema URLs wizard page. Figure 9.61. Add XML Schema URLs Dialog Click the Add XML Schema URL button . Enter a valid schema URL. Click OK . The schema will be validated and the resulting entry added to the list of XML Schema URLs. Figure 9.62. Add XSD Schema URLs The schema URL is now displayed in the XML Schema URLs list. Figure 9.63. Add XSD Schema URLs Click Finish . Note XSD files may have dependent files. This importer will determine these dependencies and import them as well if Add Dependent Schema Files is selected.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_1_teiid_designer/xsd_schema_file
Chapter 4. ConsoleLink [console.openshift.io/v1]
Chapter 4. ConsoleLink [console.openshift.io/v1] Description ConsoleLink is an extension for customizing OpenShift web console links. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleLinkSpec is the desired console link configuration. 4.1.1. .spec Description ConsoleLinkSpec is the desired console link configuration. Type object Required href location text Property Type Description applicationMenu object applicationMenu holds information about section and icon used for the link in the application menu, and it is applicable only when location is set to ApplicationMenu. href string href is the absolute secure URL for the link (must use https) location string location determines which location in the console the link will be appended to (ApplicationMenu, HelpMenu, UserMenu, NamespaceDashboard). namespaceDashboard object namespaceDashboard holds information about namespaces in which the dashboard link should appear, and it is applicable only when location is set to NamespaceDashboard. If not specified, the link will appear in all namespaces. text string text is the display text for the link 4.1.2. .spec.applicationMenu Description applicationMenu holds information about section and icon used for the link in the application menu, and it is applicable only when location is set to ApplicationMenu. Type object Required section Property Type Description imageURL string imageUrl is the URL for the icon used in front of the link in the application menu. The URL must be an HTTPS URL or a Data URI. The image should be square and will be shown at 24x24 pixels. section string section is the section of the application menu in which the link should appear. This can be any text that will appear as a subheading in the application menu dropdown. A new section will be created if the text does not match text of an existing section. 4.1.3. .spec.namespaceDashboard Description namespaceDashboard holds information about namespaces in which the dashboard link should appear, and it is applicable only when location is set to NamespaceDashboard. If not specified, the link will appear in all namespaces. Type object Property Type Description namespaceSelector object namespaceSelector is used to select the Namespaces that should contain dashboard link by label. If the namespace labels match, dashboard link will be shown for the namespaces. namespaces array (string) namespaces is an array of namespace names in which the dashboard link should appear. 4.1.4. .spec.namespaceDashboard.namespaceSelector Description namespaceSelector is used to select the Namespaces that should contain dashboard link by label. 
If the namespace labels match, the dashboard link will be shown for the namespaces. Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 4.1.5. .spec.namespaceDashboard.namespaceSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 4.1.6. .spec.namespaceDashboard.namespaceSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 4.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolelinks DELETE : delete collection of ConsoleLink GET : list objects of kind ConsoleLink POST : create a ConsoleLink /apis/console.openshift.io/v1/consolelinks/{name} DELETE : delete a ConsoleLink GET : read the specified ConsoleLink PATCH : partially update the specified ConsoleLink PUT : replace the specified ConsoleLink /apis/console.openshift.io/v1/consolelinks/{name}/status GET : read status of the specified ConsoleLink PATCH : partially update status of the specified ConsoleLink PUT : replace status of the specified ConsoleLink 4.2.1. /apis/console.openshift.io/v1/consolelinks HTTP method DELETE Description delete collection of ConsoleLink Table 4.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleLink Table 4.2. HTTP responses HTTP code Response body 200 - OK ConsoleLinkList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleLink Table 4.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered.
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.4. Body parameters Parameter Type Description body ConsoleLink schema Table 4.5. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 202 - Accepted ConsoleLink schema 401 - Unauthorized Empty 4.2.2. /apis/console.openshift.io/v1/consolelinks/{name} Table 4.6. Global path parameters Parameter Type Description name string name of the ConsoleLink HTTP method DELETE Description delete a ConsoleLink Table 4.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 4.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleLink Table 4.9. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleLink Table 4.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.11. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleLink Table 4.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.13. Body parameters Parameter Type Description body ConsoleLink schema Table 4.14. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 401 - Unauthorized Empty 4.2.3. /apis/console.openshift.io/v1/consolelinks/{name}/status Table 4.15. Global path parameters Parameter Type Description name string name of the ConsoleLink HTTP method GET Description read status of the specified ConsoleLink Table 4.16. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleLink Table 4.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.18. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleLink Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields.
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body ConsoleLink schema Table 4.21. HTTP responses HTTP code Response body 200 - OK ConsoleLink schema 201 - Created ConsoleLink schema 401 - Unauthorized Empty
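A minimal manifest tying the spec fields above together; the object name, URL, link text, and menu section are illustrative values, not taken from this document:

apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-doc-link            # illustrative name
spec:
  text: Example Documentation       # display text for the link
  href: https://example.com/docs    # must be an https URL
  location: ApplicationMenu         # ApplicationMenu, HelpMenu, UserMenu, or NamespaceDashboard
  applicationMenu:                  # only used when location is ApplicationMenu
    section: Example Tools
    imageURL: https://example.com/icon.png   # HTTPS URL or Data URI, shown at 24x24 pixels

Creating this object, for example with oc apply -f consolelink.yaml, corresponds to the POST endpoint above, and a subsequent GET on /apis/console.openshift.io/v1/consolelinks should list the new link.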
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/console_apis/consolelink-console-openshift-io-v1
Chapter 20. Setting access control on the Directory Manager account
Chapter 20. Setting access control on the Directory Manager account Having an unconstrained administrative user makes sense from a maintenance perspective. The Directory Manager requires a high level of access in order to perform maintenance tasks and to respond to incidents. However, because of the power of the Directory Manager user, a certain level of access control can be advisable to prevent damage from attacks performed as the administrative user. 20.1. About access controls on the Directory Manager account Directory Server applies regular access control instructions only to the directory tree. The privileges of the Directory Manager account are hard-coded, and you cannot use this account in bind rules. To limit access to the Directory Manager account, use the RootDN Access Control plug-in. This plug-in's features are different from standard access control instructions (ACI). For example, certain information, such as the target (the Directory Manager entry) and the allowed permissions (all of them), is implied. The purpose of the RootDN Access Control plug-in is to provide a level of security by limiting who can log in as Directory Manager based on their location or time, not to restrict what this user can do. For this reason, the settings of the plug-in only support: Time-based access controls, to allow or deny access on certain days and specific time ranges IP address rules, to allow or deny access from defined IP addresses, subnets, and domains Host access rules, to allow or deny access from specific hosts, domains, and subdomains There is only one access control rule you can set for the Directory Manager. It is in the plug-in entry, and it applies to the entire directory. As with regular ACIs, deny rules have a higher priority than allow rules. Important Ensure that the Directory Manager account has an appropriate level of access. This administrative user might need to perform maintenance operations in off-hours or to respond to failures. In this case, an overly restrictive time or day rule can prevent the Directory Manager user from adequately managing the directory. 20.2. Configuring the RootDN access control plug-in using the command line By default, the RootDN Access Control plug-in is disabled. To limit permissions of the Directory Manager account, enable and configure the plug-in. Procedure Enable the RootDN Access Control plug-in: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin root-dn enable Set the bind rules. For example, to allow the Directory Manager account to only log in between 6am and 9pm from the host with IP address 192.0.2.1 , enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin root-dn set --open-time= 0600 --close-time= 2100 --allow-ip=" 192.0.2.1 " For the full list of parameters you can set and their descriptions, enter: # dsconf -D " cn=Directory Manager " ldap://server.example.com plugin root-dn set --help Restart the instance: # dsctl instance_name restart Verification Perform a query as cn=Directory Manager from a host that is not allowed or outside of the allowed time range: [[email protected]]USD ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -x -b " dc=example,dc=com " Enter LDAP Password: ldap_bind: Server is unwilling to perform (53) additional info: RootDN access control violation If Directory Server denies access, the plug-in works as expected. 20.3.
Configuring the RootDN access control plug-in using the web console By default, the RootDN Access Control plug-in is disabled. To limit permissions of the Directory Manager account, enable and configure the plug-in. Prerequisites You are logged in to the instance in the web console. Procedure Navigate to Plugins RootDN Access Control . Enable the plug-in. Fill the fields according to your requirements. Click Save . Click Actions in the top right corner, and select Restart Instance . Verification Perform a query as cn=Directory Manager from a host that is not allowed or outside of the allowed time range: [[email protected]]USD ldapsearch -D " cn=Directory Manager " -W -H ldap://server.example.com -x -b " dc=example,dc=com " Enter LDAP Password: ldap_bind: Server is unwilling to perform (53) additional info: RootDN access control violation If Directory Server denies access, the plug-in works as expected.
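Beyond the functional test above, you can read the plug-in configuration back to confirm what was applied. This is a minimal sketch; the plug-in entry DN (cn=RootDN Access Control Plugin,cn=plugins,cn=config) and the rootdn-* attribute names are assumptions based on the default plug-in configuration, so adjust them if your instance differs:

# query the plug-in entry for the time and IP restrictions set earlier (assumed entry DN and attribute names):
ldapsearch -D "cn=Directory Manager" -W -H ldap://server.example.com -x -b "cn=RootDN Access Control Plugin,cn=plugins,cn=config" rootdn-open-time rootdn-close-time rootdn-allow-ip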
[ "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin root-dn enable", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin root-dn set --open-time= 0600 --close-time= 2100 --allow-ip=\" 192.0.2.1 \"", "dsconf -D \" cn=Directory Manager \" ldap://server.example.com plugin root-dn set --help", "dsctl instance_name restart", "[[email protected]]USD ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -b \" dc=example,dc=com \" Enter LDAP Password: ldap_bind: Server is unwilling to perform (53) additional info: RootDN access control violation", "[[email protected]]USD ldapsearch -D \" cn=Directory Manager \" -W -H ldap://server.example.com -x -b \" dc=example,dc=com \" Enter LDAP Password: ldap_bind: Server is unwilling to perform (53) additional info: RootDN access control violation" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/securing_red_hat_directory_server/assembly_setting-access-control-on-the-directory-manager-account_securing-rhds
4.6. iSCSI and DM Multipath overrides
4.6. iSCSI and DM Multipath overrides The recovery_tmo sysfs option controls the timeout for a particular iSCSI device. The following options globally override recovery_tmo values: The replacement_timeout configuration option globally overrides the recovery_tmo value for all iSCSI devices. For all iSCSI devices that are managed by DM Multipath, the fast_io_fail_tmo option in DM Multipath globally overrides the recovery_tmo value. The fast_io_fail_tmo option in DM Multipath also overrides the fast_io_fail_tmo option in Fibre Channel devices. The DM Multipath fast_io_fail_tmo option takes precedence over replacement_timeout . Red Hat does not recommend using replacement_timeout to override recovery_tmo in devices managed by DM Multipath because DM Multipath always resets recovery_tmo when the multipathd service reloads.
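A minimal sketch of where these values live; the timeout numbers are illustrative and the file locations are the usual defaults. The recovery_tmo value currently in effect for a session can be read from sysfs:

cat /sys/class/iscsi_session/session*/recovery_tmo

The global replacement_timeout override is set in /etc/iscsi/iscsid.conf:

node.session.timeo.replacement_timeout = 120

The fast_io_fail_tmo override for multipathed devices is set in the defaults (or a per-device) section of /etc/multipath.conf:

defaults {
    fast_io_fail_tmo 5
}

After editing /etc/multipath.conf, reload the service so the new value is applied; as noted above, multipathd resets recovery_tmo when it reloads:

systemctl reload multipathd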
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/iscsi-and-dm-multipath-overrides
F.3. About JBoss Cache
F.3. About JBoss Cache Red Hat JBoss Cache is a tree-structured, clustered, transactional cache that can also be used in a standalone, non-clustered environment. It caches frequently accessed data in memory to prevent data retrieval or calculation bottlenecks, while providing enterprise features such as Java Transactional API (JTA) compatibility, eviction, and persistence. JBoss Cache is the predecessor to Infinispan and Red Hat JBoss Data Grid.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/jboss_cache
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback: For simple comments on specific passages: Make sure you are viewing the documentation in the HTML format. In addition, ensure you see the Feedback button in the upper right corner of the document. Use your mouse cursor to highlight the part of text that you want to comment on. Click the Add Feedback pop-up that appears below the highlighted text. Follow the displayed instructions. For submitting more complex feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/deploying_openshift_data_foundation_using_microsoft_azure/providing-feedback-on-red-hat-documentation_azure
4.323. tftp
4.323. tftp 4.323.1. RHBA-2012:1436 - ftp bug fix update Updated ftp packages that fix three bugs are now available for Red Hat Enterprise Linux 6. The ftp packages provide the standard UNIX command line File Transfer Protocol (FTP) client. FTP is a widely used protocol for transferring files over the Internet, and for archiving files. Bug Fixes BZ# 871059 Previously, the command line length in the ftp client was limited to 200 characters. With this update, the maximum possible length of the FTP command line is extended to 4296 characters. BZ# 871071 Prior to this update, the "append", "put", and "send" commands caused system memory leaks. The memory holding the ftp command was not freed appropriately. With this update, the underlying source code has been improved to correctly free the system resources, and the memory leaks are no longer present. BZ# 871546 Previously, if a macro longer than 200 characters was defined and then used after a connection, the ftp client crashed due to a buffer overflow. With this update, the underlying source code was updated and the buffer that holds memory for the macro name was extended. It now matches the length of the command line limit mentioned above. As a result, the ftp client no longer crashes when a macro with a long name is executed. All users of ftp are advised to upgrade to these updated packages, which fix these bugs. 4.323.2. RHBA-2011:1133 - tftp bug fix update Updated tftp packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The Trivial File Transfer Protocol (TFTP) is normally used only for booting diskless workstations. The tftp package provides the user interface for TFTP, which allows users to transfer files to and from a remote machine. The tftp-server package provides the server for TFTP which allows users to transfer files to and from a remote machine. Bug Fixes BZ# 655830 When small files were transferred and the "-v" option was enabled, the tftp client printed incorrect statistics about the transfer. This update fixes the printing of the transfer statistics. BZ# 714240 The tftpd daemon did not correctly handle the utimeout option value. If a client specified a utimeout value within the permitted range, it caused the tftpd process to crash. This crash only affected the current tftp request. All users of tftp and tftp-server should upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/tftp
Glossary
Glossary 1. A Access Layer The access layer is the interface through which applications submit queries (relational, XML, XQuery or procedural) to the virtual database via JDBC, ODBC or Web services. Admin API The Admin API package, org.teiid.adminapi , provides methods that allow developers to connect to and configure a Red Hat JBoss Data Virtualization instance at runtime from within other applications. Administration Tools The administration tools are a suite of tools made available for administrators to configure and monitor Red Hat JBoss Data Virtualization. AdminShell The AdminShell provides a script-based programming environment enabling users to access, monitor and control Red Hat JBoss Data Virtualization. AdminShell GUI The AdminShell GUI allows users to administer Red Hat JBoss Data Virtualization through the development and running of scripts in a graphical environment. It lets you write scripts using a GUI text editor featuring syntax highlighting. Apache CXF Apache CXF is an open source framework for developing service-oriented architectures (SOAs). CXF lets users build and develop services using frontend programming APIs, like JAX-WS and JAX-RS. These services can use a variety of protocols such as SOAP, XML/HTTP, RESTful HTTP, or CORBA and work over a variety of transports such as HTTP, JMS or JBI. For more information about CXF, refer to http://cxf.apache.org/docs/ 2. B 3. C Cluster A cluster is a group of loosely-connected computers that work together in such a way that they can be viewed as a single system. The components (or nodes) of a cluster are connected to each other through fast local area networks, each node running its own operating system. Clustering middleware orchestrates collaboration between the nodes, allowing users and applications to treat the cluster as a single processor. Clusters are effective for scalable enterprise applications, since performance is improved by adding more nodes as required. Furthermore, if at anytime one node fails, others can take on its load. Checksum Validation Checksum validation is used to ensure a downloaded file has not been corrupted. Checksum validation employs algorithms that compute a fixed-size datum (or checksum) from an arbitrary block of digital data. If two parties compute a checksum of a particular file using the same algorithm, the results will be identical. Therefore, when computing the checksum of a downloaded file using the same algorithm as the supplier, if the checksums match, the integrity of the file is confirmed. If there is a discrepancy, the file has been corrupted in the download process. Connector Development Kit The Connector Development Kit is a Java API that allows users to customize the connector architecture (translators and resource adapters) for specific integration scenarios. Connector Framework Red Hat JBoss Data Virtualization includes a set of translators and resource adapters that enable virtual databases to access their dependent physical data sources. In other words, they provide transparent connectivity between the query engine and the physical data sources. Collectively, this set of translators and adapters is known as the connector framework. You can develop your own custom translators and resource adapters for data sources that are not directly supported by Red Hat JBoss Data Virtualization. 4. D Dashboard The Dashboard builder allows you to connect to virtual databases through the DV JDBC driver to visualize the data for testing and Business Analytics. 
Data Federation Data Federation refers to the ability of Red Hat JBoss Data Virtualization to integrate multiple data sources so that a single query can return results from one or more data sources. Data Provider Data providers are entities that are configured to connect to a data source (a CSV file or database), collect the required data, and assign them the data type. You can think about them as database queries. The collected data can be then visualized in indicators on pages or exported in a variety of formats such as XLS or CSV. Data Role Data roles are sets of permissions used to control users' access to data. They include read-only and read-write access roles. Data Services Builder The Data Services Builder allows you to build Data Services through the browser. Data Source A data source is a repository for data. Query languages enable users to retrieve and manipulate data stored in these respositories. Database Tool The database tool is used to configure the database used by Red Hat JBoss Data Virtualization. The database in Red Hat JBoss Data Virtualization is required by three items: ModeShape, the Dashboard Builder and the command/audit logging functionality. If you are using Red Hat JBoss Data Virtualization in conjunction with Red Hat JBoss Fuse Service Works, BPEL, and SwitchYard/jBPM integration also make use of this functionality. Design Tools The design tools are available to assist users in setting up Red Hat JBoss Data Virtualization for their desired data integration solution. Distributed Caching Distributed caching extends the traditional concept of caching on single machines to allow caching across multiple servers. Using distributed cache, applications can treat the cache as a single entity, and cache can grow in size and transactional capacity by adding more servers. 5. E Execution Environment The Execution Environment is used to actualize the abstract structures from the underlying data, and expose them through standard APIs. The Data Virtualization query engine is a required part of the execution environment, to optimally federate data from multiple disparate sources. 6. F Failover Failover is automatic switching to a redundant or standby computer server, system, hardware component or network upon the failure or abnormal termination of the previously active application, server, system, hardware component or network. 7. G Governance Governance uses the hierarchical database for storing metadata related to Red Hat JBoss Data Virtualization. 8. H 9. I Interactive AdminShell The interactive AdminShell provides a command-line interface in which users can issue ad hoc script-based commands for simple administration of JBoss Data Virtualization. 10. J JBoss Operations Network JBoss Operations Network (JON) gives administrators a single point of access to view their systems. JON provides a means to develop and monitor a system's inventory. Every managed resource - from platforms to applications to services - is contained and organized in the inventory, no matter how complex the IT environment may be. JBoss Operations Network centralizes all of its operations in an installed server. The JON server communicates with locally installed JON agents. The agents interact directly with the platform and services to carry out local tasks such as monitoring. The types of resources that can be managed by JON and the operations that can be carried out are determined by the server and agent plug-ins which are loaded in JON. 
The relationships between servers, agents, plug-ins, and resources are what define JBoss Operations Network. 11. K 12. L Load Balancing Load balancing is a computer networking methodology to distribute workload across multiple computers or a computer cluster, network links, central processing units, disk drives, or other resources, to achieve optimal resource utilization, maximize throughput, minimize response time, and avoid overload. Using multiple components with load balancing, instead of a single component, also increases reliability through redundancy. 13. M Management Command Line Interface The Management Command Line Interface (CLI) is a command line administration tool for JBoss EAP 6. Use the Management CLI to start and stop servers, deploy and undeploy applications, configure system settings, and perform other administrative tasks. Operations can be performed in batch mode, allowing multiple tasks to be run as a group. Management Console The Management Console is a web-based administration tool for JBoss EAP 6. Use the Management Console to start and stop servers, deploy and undeploy applications, tune system settings, and make persistent modifications to the server configuration. The Management Console also has the ability to perform administrative tasks, with live notifications when any changes require the server instance to be restarted or reloaded. In a Managed Domain, server instances and server groups in the same domain can be centrally managed from the Management Console of the domain controller. Materialized Views JBoss Data Virtualization supports materialized views. Materialized views are just like other views, but their transformations are pre-computed and stored just like a regular table. When queries are issued against the views through the JBoss Data Virtualization Server, the cached results are used. This saves the cost of accessing all the underlying data sources and re-computing the view transformations each time a query is executed. Materialized views are appropriate when the underlying data does not change rapidly, or when it is acceptable to retrieve data that is "stale" within some period of time, or when it is preferred for end-user queries to access staged data rather than placing additional query load on operational sources. Maven Apache Maven is a distributed build automation tool used in Java application development to build and manage software projects. Maven uses configuration XML files called POM (Project Object Model) to define project properties and manage the build process. POM files describe the project's module and component dependencies, build order, and targets for the resulting project packaging and output. This ensures that projects are built in a correct and uniform manner. Maven uses repositories to store Java libraries, plug-ins, and other build artifacts. Repositories can be either local or remote. A local repository is a download of artifacts from a remote repository cached on a local machine. A remote repository is any other repository accessed using common protocols, such as http:// when located on an HTTP server or file:// when located on a file server. Modeling Environment The Modeling Environment is used to define abstraction layers. 14. N Non-interactive AdminShell The non-interactive AdminShell allows you to administer Red Hat JBoss Data Virtualization by executing previously developed scripts from the command-line. 
This mode is especially useful to automate testing and to perform repeated configuration/migration changes to a JBoss Data Virtualization instance. 15. O Open Database Connectivity Open Database Connectivity (ODBC) is a standard C programming language middleware API for accessing Database Management Systems (DBMSes). The designers of ODBC aimed to make it independent of database systems and operating systems; an application written using ODBC can be ported to other platforms, both on the client and server side, with few changes to the data access code. 16. P Pages Pages are units that live in a workspace and provide space (dashboard) for panels. By default, you can display a page by selecting it in the Page dropdown menu in the top panel. Every page is divided in two main parts: the lateral menu and the central part of the page. The parts are divided further (the exact division is visible when placing a new panel on a page). Note that the lateral menu allows you to insert panels only below each other, while in the central part of the page you can insert panels below each other as well as tab them. A page also has a customizable header part and logo area. 17. Q Query Engine When applications submit queries to a Virtual Database via the access layer, the query engine produces an optimized query plan to provide efficient access to the required physical data sources as determined by the SQL criteria and the mappings between source and view models in the VDB. This query plan dictates processing order to ensure physical data sources are accessed in the most efficient manner. 18. R Resource Adapters A resource adapter is a deployable Java EE component that provides communication between a Java EE application and an Enterprise Information System (EIS) using the Java Connector Architecture (JCA) specification. A resource adapter is often provided by EIS vendors to allow easy integration of their products with Java EE applications. An Enterprise Information System can be any other software system within an organization. Examples include Enterprise Resource Planning (ERP) systems, database systems, e-mail servers and proprietary messaging systems. A resource adapter is packaged in a Resource Adapter Archive (RAR) file which can be deployed to JBoss EAP 6. A RAR file may also be included in an Enterprise Archive (EAR) deployment. 19. S Server The server is positioned between business applications (consumers) and one or more data sources. An enterprise ready, scalable, manageable, runtime for the Query Engine that runs inside Red Hat JBoss EAP that provides additional security, fault-tolerance and administrative features. Service A Red Hat JBoss Data Virtualization service is positioned between business applications and one or more data sources, and co-ordinates the integration of those data sources for access by the business applications at runtime. Source Model Source Models represent the structure and characteristics of physical data sources and the source model must be associated with a translator and a resource adapter. 20. T Teiid Designer Teiid Designer is a plug-in for Red Hat JBoss Developer Studio, providing a graphical user interface to design and test virtual databases. The Teiid Designer produces Relational, XML and Web Service views. It is designed to resolve semantic differences, create virtual data structures at a physical or logical level and use declarative interfaces to integrate, aggregate, and transform the data from source to a target format for application compatability. 
Translator In Red Hat JBoss Data Virtualization, a translator provides an abstraction layer between the query engine and the physical data source. This layer converts query commands into source specific commands and executes them using a resource adapter. The translator also converts the result data that comes from the physical source into the form that the query engine requires. 21. U 22. V View Model View Models represent the structure and characteristics you want to expose to your consumers. These view models are used to define a layer of abstraction above the physical layer. This enables information to be presented to consumers as a business model rather than as a representation of how it is physically stored. The views are defined using transformations between models. The business views can be in a variety of forms: relational, XML or Web Services. Virtual Database A virtual database (VDB) is a container for components that integrate data from multiple disparate data sources, allowing applications to access and query the data as if it is in a single database and, therefore, using a single uniform API. A VDB is composed of various data models and configuration information that describes which data sources are to be integrated and how. In particular, source models are used to represent the structure and characteristics of the physical data sources, and view models represent the structure and characteristics of the integrated data exposed to applications. 23. W Workspace A workspace is a container for pages with panels or indicators. Every workspace uses a particular skin and envelope, which define the workspace's graphical properties. By default, the Showcase workspace is available. 24. X 25. Y 26. Z
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/glossary_guide/glossary
Chapter 6. Working with nodes
Chapter 6. Working with nodes 6.1. Viewing and listing the nodes in your OpenShift Container Platform cluster You can list all the nodes in your cluster to obtain information such as status, age, memory usage, and details about the nodes. When you perform node management operations, the CLI interacts with node objects that are representations of actual node hosts. The master uses the information from node objects to validate nodes with health checks. 6.1.1. About listing all the nodes in a cluster You can get detailed information on the nodes in the cluster. The following command lists all nodes: USD oc get nodes The following example is a cluster with healthy nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.29.4 node1.example.com Ready worker 7h v1.29.4 node2.example.com Ready worker 7h v1.29.4 The following example is a cluster with one unhealthy node: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.29.4 node1.example.com NotReady,SchedulingDisabled worker 7h v1.29.4 node2.example.com Ready worker 7h v1.29.4 The conditions that trigger a NotReady status are shown later in this section. The -o wide option provides additional information on nodes. USD oc get nodes -o wide Example output NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.29.4 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.29.4-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.29.4 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.29.4-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.29.4 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.29.4-30.rhaos4.10.gitf2f339d.el8-dev The following command lists information about a single node: USD oc get node <node> For example: USD oc get node node1.example.com Example output NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.29.4 The following command provides more detailed information about a specific node, including the reason for the current condition: USD oc describe node <node> For example: USD oc describe node node1.example.com Note The following example contains some values that are specific to OpenShift Container Platform on AWS. 
Example output Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.29.4-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.29.4 Kube-Proxy Version: v1.29.4 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-sdn ovs-t4dsn 100m (6%) 0 (0%) 
300Mi (4%) 0 (0%) openshift-sdn sdn-g79hg 100m (6%) 0 (0%) 200Mi (2%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #... 1 The name of the node. 2 The role of the node, either master or worker . 3 The labels applied to the node. 4 The annotations applied to the node. 5 The taints applied to the node. 6 The node conditions and status. The conditions stanza lists the Ready , PIDPressure , MemoryPressure , DiskPressure and OutOfDisk status. These condition are described later in this section. 7 The IP address and hostname of the node. 8 The pod resources and allocatable resources. 9 Information about the node host. 10 The pods on the node. 11 The events reported by the node. Note The control plane label is not automatically added to newly created or updated master nodes. If you want to use the control plane label for your nodes, you can manually configure the label. For more information, see Understanding how to update labels on nodes in the Additional resources section. Among the information shown for nodes, the following node conditions appear in the output of the commands shown in this section: Table 6.1. Node Conditions Condition Description Ready If true , the node is healthy and ready to accept pods. If false , the node is not healthy and is not accepting pods. If unknown , the node controller has not received a heartbeat from the node for the node-monitor-grace-period (the default is 40 seconds). DiskPressure If true , the disk capacity is low. MemoryPressure If true , the node memory is low. PIDPressure If true , there are too many processes on the node. OutOfDisk If true , the node has insufficient free space on the node for adding new pods. NetworkUnavailable If true , the network for the node is not correctly configured. NotReady If true , one of the underlying components, such as the container runtime or network, is experiencing issues or is not yet configured. SchedulingDisabled Pods cannot be scheduled for placement on the node. Additional resources Understanding how to update labels on nodes 6.1.2. Listing pods on a node in your cluster You can list all the pods on a specific node. Procedure To list all or selected pods on selected nodes: USD oc get pod --selector=<nodeSelector> USD oc get pod --selector=kubernetes.io/os Or: USD oc get pod -l=<nodeSelector> USD oc get pod -l kubernetes.io/os=linux To list all pods on a specific node, including terminated pods: USD oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename> 6.1.3. 
Viewing memory and CPU usage statistics on your nodes You can display usage statistics about nodes, which provide the runtime environments for containers. These usage statistics include CPU, memory, and storage consumption. Prerequisites You must have cluster-reader permission to view the usage statistics. Metrics must be installed to view the usage statistics. Procedure To view the usage statistics: USD oc adm top nodes Example output NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72% To view the usage statistics for nodes with labels: USD oc adm top node --selector='' You must choose the selector (label query) to filter on. Supports = , == , and != . 6.2. Working with nodes As an administrator, you can perform several tasks to make your clusters more efficient. 6.2.1. Understanding how to evacuate pods on nodes Evacuating pods allows you to migrate all or selected pods from a given node or nodes. You can only evacuate pods backed by a replication controller. The replication controller creates new pods on other nodes and removes the existing pods from the specified node(s). Bare pods, meaning those not backed by a replication controller, are unaffected by default. You can evacuate a subset of pods by specifying a pod-selector. Pod selectors are based on labels, so all the pods with the specified label will be evacuated. Procedure Mark the nodes unschedulable before performing the pod evacuation. Mark the node as unschedulable: USD oc adm cordon <node1> Example output node/<node1> cordoned Check that the node status is Ready,SchedulingDisabled : USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.29.4 Evacuate the pods using one of the following methods: Evacuate all or selected pods on one or more nodes: USD oc adm drain <node1> <node2> [--pod-selector=<pod_selector>] Force the deletion of bare pods using the --force option. When set to true , deletion continues even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set: USD oc adm drain <node1> <node2> --force=true Set a period of time in seconds for each pod to terminate gracefully, use --grace-period . If negative, the default value specified in the pod will be used: USD oc adm drain <node1> <node2> --grace-period=-1 Ignore pods managed by daemon sets using the --ignore-daemonsets flag set to true : USD oc adm drain <node1> <node2> --ignore-daemonsets=true Set the length of time to wait before giving up using the --timeout flag. A value of 0 sets an infinite length of time: USD oc adm drain <node1> <node2> --timeout=5s Delete pods even if there are pods using emptyDir volumes by setting the --delete-emptydir-data flag to true . 
Local data is deleted when the node is drained: USD oc adm drain <node1> <node2> --delete-emptydir-data=true List objects that will be migrated without actually performing the evacuation, using the --dry-run option set to true : USD oc adm drain <node1> <node2> --dry-run=true Instead of specifying specific node names (for example, <node1> <node2> ), you can use the --selector=<node_selector> option to evacuate pods on selected nodes. Mark the node as schedulable when done. USD oc adm uncordon <node1> 6.2.2. Understanding how to update labels on nodes You can update any label on a node. Node labels are not persisted after a node is deleted even if the node is backed up by a Machine. Note Any change to a MachineSet object is not applied to existing machines owned by the compute machine set. For example, labels edited or added to an existing MachineSet object are not propagated to existing machines and nodes associated with the compute machine set. The following command adds or updates labels on a node: USD oc label node <node> <key_1>=<value_1> ... <key_n>=<value_n> For example: USD oc label nodes webconsole-7f7f6 unhealthy=true Tip You can alternatively apply the following YAML to apply the label: kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #... The following command updates all pods in the namespace: USD oc label pods --all <key_1>=<value_1> For example: USD oc label pods --all status=unhealthy 6.2.3. Understanding how to mark nodes as unschedulable or schedulable By default, healthy nodes with a Ready status are marked as schedulable, which means that you can place new pods on the node. Manually marking a node as unschedulable blocks any new pods from being scheduled on the node. Existing pods on the node are not affected. The following command marks a node or nodes as unschedulable: Example output USD oc adm cordon <node> For example: USD oc adm cordon node1.example.com Example output node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled The following command marks a currently unschedulable node or nodes as schedulable: USD oc adm uncordon <node1> Alternatively, instead of specifying specific node names (for example, <node> ), you can use the --selector=<node_selector> option to mark selected nodes as schedulable or unschedulable. 6.2.4. Handling errors in single-node OpenShift clusters when the node reboots without draining application pods In single-node OpenShift clusters and in OpenShift Container Platform clusters in general, a situation can arise where a node reboot occurs without first draining the node. This can occur where an application pod requesting devices fails with the UnexpectedAdmissionError error. Deployment , ReplicaSet , or DaemonSet errors are reported because the application pods that require those devices start before the pod serving those devices. You cannot control the order of pod restarts. While this behavior is to be expected, it can cause a pod to remain on the cluster even though it has failed to deploy successfully. The pod continues to report UnexpectedAdmissionError . This issue is mitigated by the fact that application pods are typically included in a Deployment , ReplicaSet , or DaemonSet . If a pod is in this error state, it is of little concern because another instance should be running. 
Belonging to a Deployment , ReplicaSet , or DaemonSet guarantees the successful creation and execution of subsequent pods and ensures the successful deployment of the application. There is ongoing work upstream to ensure that such pods are gracefully terminated. Until that work is resolved, run the following command for a single-node OpenShift cluster to remove the failed pods: USD oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE> Note The option to drain the node is unavailable for single-node OpenShift clusters. Additional resources Understanding how to evacuate pods on nodes 6.2.5. Deleting nodes 6.2.5.1. Deleting nodes from a cluster To delete a node from the OpenShift Container Platform cluster, scale down the appropriate MachineSet object. Important When a cluster is integrated with a cloud provider, you must delete the corresponding machine to delete a node. Do not try to use the oc delete node command for this task. When you delete a node by using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods that are not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Note If you are running cluster on bare metal, you cannot delete a node by editing MachineSet objects. Compute machine sets are only available when a cluster is integrated with a cloud provider. Instead you must unschedule and drain the node before manually deleting it. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets -n openshift-machine-api The compute machine sets are listed in the form of <cluster-id>-worker-<aws-region-az> . Scale down the compute machine set by using one of the following methods: Specify the number of replicas to scale down to by running the following command: USD oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api Edit the compute machine set custom resource by running the following command: USD oc edit machineset <machine-set-name> -n openshift-machine-api Example output apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # ... name: <machine-set-name> namespace: openshift-machine-api # ... spec: replicas: 2 1 # ... 1 Specify the number of replicas to scale down to. Additional resources Manually scaling a compute machine set 6.2.5.2. Deleting nodes from a bare metal cluster When you delete a node using the CLI, the node object is deleted in Kubernetes, but the pods that exist on the node are not deleted. Any bare pods not backed by a replication controller become inaccessible to OpenShift Container Platform. Pods backed by replication controllers are rescheduled to other available nodes. You must delete local manifest pods. Procedure Delete a node from an OpenShift Container Platform cluster running on bare metal by completing the following steps: Mark the node as unschedulable: USD oc adm cordon <node_name> Drain all pods on the node: USD oc adm drain <node_name> --force=true This step might fail if the node is offline or unresponsive. Even if the node does not respond, it might still be running a workload that writes to shared storage. To avoid data corruption, power down the physical hardware before you proceed. 
Delete the node from the cluster: USD oc delete node <node_name> Although the node object is now deleted from the cluster, it can still rejoin the cluster after reboot or if the kubelet service is restarted. To permanently delete the node and all its data, you must decommission the node . If you powered down the physical hardware, turn it back on so that the node can rejoin the cluster. 6.3. Managing nodes OpenShift Container Platform uses a KubeletConfig custom resource (CR) to manage the configuration of nodes. By creating an instance of a KubeletConfig object, a managed machine config is created to override setting on the node. Note Logging in to remote machines for the purpose of changing their configuration is not supported. 6.3.1. Modifying nodes To make configuration changes to a cluster, or machine pool, you must create a custom resource definition (CRD), or kubeletConfig object. OpenShift Container Platform uses the Machine Config Controller to watch for changes introduced through the CRD to apply the changes to the cluster. Note Because the fields in a kubeletConfig object are passed directly to the kubelet from upstream Kubernetes, the validation of those fields is handled directly by the kubelet itself. Please refer to the relevant Kubernetes documentation for the valid values for these fields. Invalid values in the kubeletConfig object can render cluster nodes unusable. Procedure Obtain the label associated with the static CRD, Machine Config Pool, for the type of node you want to configure. Perform one of the following steps: Check current labels of the desired machine config pool. For example: USD oc get machineconfigpool --show-labels Example output NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False Add a custom label to the desired machine config pool. For example: USD oc label machineconfigpool worker custom-kubelet=enabled Create a kubeletconfig custom resource (CR) for your configuration change. For example: Sample configuration for a custom-config CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #... 1 Assign a name to CR. 2 Specify the label to apply the configuration change, this is the label you added to the machine config pool. 3 Specify the new value(s) you want to change. Create the CR object. USD oc create -f <file-name> For example: USD oc create -f master-kube-config.yaml Most Kubelet Configuration options can be set by the user. The following options are not allowed to be overwritten: CgroupDriver ClusterDNS ClusterDomain StaticPodPath Note If a single node contains more than 50 images, pod scheduling might be imbalanced across nodes. This is because the list of images on a node is shortened to 50 by default. You can disable the image limit by editing the KubeletConfig object and setting the value of nodeStatusMaxImages to -1 . 6.3.2. Updating boot images The Machine Config Operator (MCO) uses a boot image to bring up a Red Hat Enterprise Linux CoreOS (RHCOS) node. By default, OpenShift Container Platform does not manage the boot image. This means that the boot image in your cluster is not updated along with your cluster. 
For example, if your cluster was originally created with OpenShift Container Platform 4.12, the boot image that the cluster uses to create nodes is the same 4.12 version, even if your cluster is at a later version. If the cluster is later upgraded to 4.13 or later, new nodes continue to scale with the same 4.12 image. This process could cause the following issues: Extra time to start up nodes Certificate expiration issues Version skew issues To avoid these issues, you can configure your cluster to update the boot image whenever you update your cluster. By modifying the MachineConfiguration object, you can enable this feature. Currently, the ability to update the boot image is available for only Google Cloud Platform (GCP) clusters and is not supported for Cluster CAPI Operator managed clusters. Important The updating boot image feature is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To view the current boot image used in your cluster, examine a machine set: Example machine set with the boot image reference apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: # ... template: # ... spec: # ... providerSpec: # ... value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1 # ... 1 This boot image is the same as the originally-installed OpenShift Container Platform version, in this example OpenShift Container Platform 4.12, regardless of the current version of the cluster. The way that the boot image is represented in the machine set depends on the platform, as the structure of the providerSpec field differs from platform to platform. If you configure your cluster to update your boot images, the boot image referenced in your machine sets matches the current version of the cluster. Prerequisites You have enabled the TechPreviewNoUpgrade feature set by using the feature gates. For more information, see "Enabling features using feature gates" in the "Additional resources" section. Procedure Edit the MachineConfiguration object, named cluster , to enable the updating of boot images by running the following command: USD oc edit MachineConfiguration cluster Optional: Configure the boot image update feature for all the machine sets: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2 1 Activates the boot image update feature. 2 Specifies that all the machine sets in the cluster are to be updated. Optional: Configure the boot image update feature for specific machine sets: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... 
managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: "true" 2 1 Activates the boot image update feature. 2 Specifies that any machine set with this label is to be updated. Tip If an appropriate label is not present on the machine set, add a key/value pair by running a command similar to following: Verification Get the boot image version by running the following command: USD oc get machinesets <machineset_name> -n openshift-machine-api -o yaml Example machine set with the boot image reference apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: "true" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: # ... template: # ... spec: # ... providerSpec: # ... value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1 # ... 1 This boot image is the same as the current OpenShift Container Platform version. Additional resources Enabling features using feature gates 6.3.2.1. Disabling updated boot images To disable the updated boot image feature, edit the MachineConfiguration object to remove the managedBootImages stanza. If you disable this feature after some nodes have been created with the new boot image version, any existing nodes retain their current boot image. Turning off this feature does not rollback the nodes or machine sets to the originally-installed boot image. The machine sets retain the boot image version that was present when the feature was enabled and is not updated again when the cluster is upgraded to a new OpenShift Container Platform version in the future. Procedure Disable updated boot images by editing the MachineConfiguration object: USD oc edit MachineConfiguration cluster Remove the managedBootImages stanza: apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: # ... managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 1 Remove the entire stanza to disable updated boot images. 6.3.3. Configuring control plane nodes as schedulable You can configure control plane nodes to be schedulable, meaning that new pods are allowed for placement on the master nodes. By default, control plane nodes are not schedulable. You can set the masters to be schedulable, but must retain the worker nodes. Note You can deploy OpenShift Container Platform with no worker nodes on a bare metal cluster. In this case, the control plane nodes are marked schedulable by default. You can allow or disallow control plane nodes to be schedulable by configuring the mastersSchedulable field. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become worker nodes. Procedure Edit the schedulers.config.openshift.io resource. USD oc edit schedulers.config.openshift.io cluster Configure the mastersSchedulable field. apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: "2019-09-10T03:04:05Z" generation: 1 name: cluster resourceVersion: "433" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #... 
1 Set to true to allow control plane nodes to be schedulable, or false to disallow control plane nodes to be schedulable. Save the file to apply the changes. 6.3.4. Setting SELinux booleans OpenShift Container Platform allows you to enable and disable an SELinux boolean on a Red Hat Enterprise Linux CoreOS (RHCOS) node. The following procedure explains how to modify SELinux booleans on nodes using the Machine Config Operator (MCO). This procedure uses container_manage_cgroup as the example boolean. You can modify this value to whichever boolean you need. Prerequisites You have installed the OpenShift CLI (oc). Procedure Create a new YAML file with a MachineConfig object, displayed in the following example: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #... Create the new MachineConfig object by running the following command: USD oc create -f 99-worker-setsebool.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. 6.3.5. Adding kernel arguments to nodes In some special cases, you might want to add kernel arguments to a set of nodes in your cluster. This should only be done with caution and clear understanding of the implications of the arguments you set. Warning Improper use of kernel arguments can result in your systems becoming unbootable. Examples of kernel arguments you could set include: nosmt : Disables symmetric multithreading (SMT) in the kernel. Multithreading allows multiple logical threads for each CPU. You could consider nosmt in multi-tenant environments to reduce risks from potential cross-thread attacks. By disabling SMT, you essentially choose security over performance. systemd.unified_cgroup_hierarchy : Enables Linux control group version 2 (cgroup v2). cgroup v2 is the version of the kernel control group and offers multiple improvements. Important cgroup v1 is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. enforcing=0 : Configures Security Enhanced Linux (SELinux) to run in permissive mode. In permissive mode, the system acts as if SELinux is enforcing the loaded security policy, including labeling objects and emitting access denial entries in the logs, but it does not actually deny any operations. While not supported for production systems, permissive mode can be helpful for debugging. Warning Disabling SELinux on RHCOS in production is not supported. Once SELinux has been disabled on a node, it must be re-provisioned before re-inclusion in a production cluster. See Kernel.org kernel parameters for a list and descriptions of kernel arguments. 
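To confirm that a boolean set through the setsebool.service unit in "Setting SELinux booleans" is active, you can query it on the host from a debug session. This is a minimal sketch rather than part of the official procedure; the node name is a placeholder and the boolean matches the earlier example:
$ oc debug node/<worker_node_name> -- chroot /host getsebool container_manage_cgroup
After the machine config has been applied and the node has rebooted, the command should report the boolean as on.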
In the following procedure, you create a MachineConfig object that identifies: A set of machines to which you want to add the kernel argument. In this case, machines with a worker role. Kernel arguments that are appended to the end of the existing kernel arguments. A label that indicates where in the list of machine configs the change is applied. Prerequisites Have administrative privilege to a working OpenShift Container Platform cluster. Procedure List existing MachineConfig objects for your OpenShift Container Platform cluster to determine how to label your machine config: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Create a MachineConfig object file that identifies the kernel argument (for example, 05-worker-kernelarg-selinuxpermissive.yaml ) apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3 1 Applies the new kernel argument only to worker nodes. 2 Named to identify where it fits among the machine configs (05) and what it does (adds a kernel argument to configure SELinux permissive mode). 3 Identifies the exact kernel argument as enforcing=0 . 
Create the new machine config: USD oc create -f 05-worker-kernelarg-selinuxpermissive.yaml Check the machine configs to see that the new one was added: USD oc get MachineConfig Example output NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m Check the nodes: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.29.4 ip-10-0-136-243.ec2.internal Ready master 34m v1.29.4 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.29.4 ip-10-0-142-249.ec2.internal Ready master 34m v1.29.4 ip-10-0-153-11.ec2.internal Ready worker 28m v1.29.4 ip-10-0-153-150.ec2.internal Ready master 34m v1.29.4 You can see that scheduling on each worker node is disabled as the change is being applied. Check that the kernel argument worked by going to one of the worker nodes and listing the kernel command line arguments (in /proc/cmdline on the host): USD oc debug node/ip-10-0-141-105.ec2.internal Example output Starting pod/ip-10-0-141-105ec2internal-debug ... To use host binaries, run `chroot /host` sh-4.2# cat /host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16... coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit You should see the enforcing=0 argument added to the other kernel arguments. 6.3.6. Enabling swap memory use on nodes Important Enabling swap memory use on nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Note Enabling swap memory is only available for container-native virtualization (CNV) users or use cases. Warning Enabling swap memory can negatively impact workload performance and out-of-resource handling. Do not enable swap memory on control plane nodes. To enable swap memory, create a kubeletconfig custom resource (CR) to set the swapbehavior parameter. You can set limited or unlimited swap memory: Limited: Use the LimitedSwap value to limit how much swap memory workloads can use. Any workloads on the node that are not managed by OpenShift Container Platform can still use swap memory. 
The LimitedSwap behavior depends on whether the node is running with Linux control groups version 1 (cgroups v1) or version 2 (cgroup v2) : cgroup v1: OpenShift Container Platform workloads can use any combination of memory and swap, up to the pod's memory limit, if set. cgroup v2: OpenShift Container Platform workloads cannot use swap memory. Unlimited: Use the UnlimitedSwap value to allow workloads to use as much swap memory as they request, up to the system limit. Because the kubelet will not start in the presence of swap memory without this configuration, you must enable swap memory in OpenShift Container Platform before enabling swap memory on the nodes. If there is no swap memory present on a node, enabling swap memory in OpenShift Container Platform has no effect. Prerequisites You have a running OpenShift Container Platform cluster that uses version 4.10 or later. You are logged in to the cluster as a user with administrative privileges. You have enabled the TechPreviewNoUpgrade feature set on the cluster (see Nodes Working with clusters Enabling features using feature gates ). Note Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. These feature sets are not recommended on production clusters. If cgroup v2 is enabled on a node, you must enable swap accounting on the node, by setting the swapaccount=1 kernel argument. Procedure Apply a custom label to the machine config pool where you want to allow swap memory. USD oc label machineconfigpool worker kubelet-swap=enabled Create a custom resource (CR) to enable and configure swap settings. apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #... 1 Set to false to enable swap memory use on the associated nodes. Set to true to disable swap memory use. 2 Specify the swap memory behavior. If unspecified, the default is LimitedSwap . Enable swap memory on the machines. 6.3.7. Migrating control plane nodes from one RHOSP host to another manually If control plane machine sets are not enabled on your cluster, you can run a script that moves a control plane node from one Red Hat OpenStack Platform (RHOSP) node to another. Note Control plane machine sets are not enabled on clusters that run on user-provisioned infrastructure. For information about control plane machine sets, see "Managing control plane machines with control plane machine sets". Prerequisites The environment variable OS_CLOUD refers to a clouds entry that has administrative credentials in a clouds.yaml file. The environment variable KUBECONFIG refers to a configuration that contains administrative OpenShift Container Platform credentials. Procedure From a command line, run the following script: #!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo "Usage: 'USD0 node_name'" exit 64 fi # Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo "The script needs OpenStack admin credentials. Exiting"; exit 77; } # Check for admin OpenShift credentials oc adm top node >/dev/null || { >&2 echo "The script needs OpenShift admin credentials. 
Exiting"; exit 77; } set -x declare -r node_name="USD1" declare server_id server_id="USD(openstack server list --all-projects -f value -c ID -c Name | grep "USDnode_name" | cut -d' ' -f1)" readonly server_id # Drain the node oc adm cordon "USDnode_name" oc adm drain "USDnode_name" --delete-emptydir-data --ignore-daemonsets --force # Power off the server oc debug "node/USD{node_name}" -- chroot /host shutdown -h 1 # Verify the server is shut off until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Migrate the node openstack server migrate --wait "USDserver_id" # Resize the VM openstack server resize confirm "USDserver_id" # Wait for the resize confirm to finish until openstack server show "USDserver_id" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done # Restart the VM openstack server start "USDserver_id" # Wait for the node to show up as Ready: until oc get node "USDnode_name" | grep -q "^USD{node_name}[[:space:]]\+Ready"; do sleep 5; done # Uncordon the node oc adm uncordon "USDnode_name" # Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type "Degraded" }}{{ if ne .status "False" }}DEGRADED{{ end }}{{ else if eq .type "Progressing"}}{{ if ne .status "False" }}PROGRESSING{{ end }}{{ else if eq .type "Available"}}{{ if ne .status "True" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\(DEGRADED\|PROGRESSING\|NOTAVAILABLE\)'; do sleep 5; done If the script completes, the control plane machine is migrated to a new RHOSP node. Additional resources Managing control plane machines with control plane machine sets 6.4. Managing the maximum number of pods per node In OpenShift Container Platform, you can configure the number of pods that can run on a node based on the number of processor cores on the node, a hard limit or both. If you use both options, the lower of the two limits the number of pods on a node. When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: Increased CPU utilization. Slow pod scheduling. Potential out-of-memory scenarios, depending on the amount of memory in the node. Exhausting the pool of IP addresses. Resource overcommitting, leading to poor user application performance. Important In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. Note Disk IOPS throttling from the cloud provider might have an impact on CRI-O and kubelet. They might get overloaded when there are large number of I/O intensive pods running on the nodes. It is recommended that you monitor the disk I/O on the nodes and use volumes with sufficient throughput for the workload. The podsPerCore parameter sets the number of pods the node can run based on the number of processor cores on the node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40 . kubeletConfig: podsPerCore: 10 Setting podsPerCore to 0 disables this limit. The default is 0 . The value of the podsPerCore parameter cannot exceed the value of the maxPods parameter. The maxPods parameter sets the number of pods the node can run to a fixed value, regardless of the properties of the node. kubeletConfig: maxPods: 250 6.4.1. 
Configuring the maximum number of pods per node Two parameters control the maximum number of pods that can be scheduled to a node: podsPerCore and maxPods . If you use both options, the lower of the two limits the number of pods on a node. For example, if podsPerCore is set to 10 on a node with 4 processor cores, the maximum number of pods allowed on the node will be 40. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: $ oc edit machineconfigpool <name> For example: $ oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as pools.operator.machineconfiguration.openshift.io/worker: "" . Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a max-pods CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #... 1 Assign a name to the CR. 2 Specify the label from the machine config pool. 3 Specify the number of pods the node can run based on the number of processor cores on the node. 4 Specify the number of pods the node can run to a fixed value, regardless of the properties of the node. Note Setting podsPerCore to 0 disables this limit. In the above example, podsPerCore is set to 10 and maxPods is set to 250 . This means that unless the node has 25 cores or more, podsPerCore will be the limiting factor. Run the following command to create the CR: $ oc create -f <file_name>.yaml Verification List the MachineConfigPool CRDs to see if the change is applied. The UPDATING column reports True if the change is picked up by the Machine Config Controller: $ oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False Once the change is complete, the UPDATED column reports True . $ oc get machineconfigpools Example output NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False 6.5. Using the Node Tuning Operator Learn about the Node Tuning Operator and how you can use it to manage node-level tuning by orchestrating the tuned daemon. Purpose The Node Tuning Operator helps you manage node-level tuning by orchestrating the TuneD daemon and achieves low latency performance by using the Performance Profile controller. The majority of high-performance applications require some level of kernel tuning. The Node Tuning Operator provides a unified management interface to users of node-level sysctls and more flexibility to add custom tuning specified by user needs. The Operator manages the containerized TuneD daemon for OpenShift Container Platform as a Kubernetes daemon set. It ensures the custom tuning specification is passed to all containerized TuneD daemons running in the cluster in the format that the daemons understand. The daemons run on all nodes in the cluster, one per node.
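Because the operand runs as a daemon set, a quick way to see the per-node TuneD pods and the node each one landed on is to list the pods in the Operator's namespace. A minimal sketch; pod names and counts vary by cluster:
$ oc get pods -n openshift-cluster-node-tuning-operator -o wide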
Node-level settings applied by the containerized TuneD daemon are rolled back on an event that triggers a profile change or when the containerized TuneD daemon is terminated gracefully by receiving and handling a termination signal. The Node Tuning Operator uses the Performance Profile controller to implement automatic tuning to achieve low latency performance for OpenShift Container Platform applications. The cluster administrator configures a performance profile to define node-level settings such as the following: Updating the kernel to kernel-rt. Choosing CPUs for housekeeping. Choosing CPUs for running workloads. The Node Tuning Operator is part of a standard OpenShift Container Platform installation in version 4.1 and later. Note In earlier versions of OpenShift Container Platform, the Performance Addon Operator was used to implement automatic tuning to achieve low latency performance for OpenShift applications. In OpenShift Container Platform 4.11 and later, this functionality is part of the Node Tuning Operator. 6.5.1. Accessing an example Node Tuning Operator specification Use this process to access an example Node Tuning Operator specification. Procedure Run the following command to access an example Node Tuning Operator specification: oc get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator The default CR is meant for delivering standard node-level tuning for the OpenShift Container Platform platform and it can only be modified to set the Operator Management state. Any other custom changes to the default CR will be overwritten by the Operator. For custom tuning, create your own Tuned CRs. Newly created CRs will be combined with the default CR and custom tuning applied to OpenShift Container Platform nodes based on node or pod labels and profile priorities. Warning While in certain situations the support for pod labels can be a convenient way of automatically delivering required tuning, this practice is discouraged and strongly advised against, especially in large-scale clusters. The default Tuned CR ships without pod label matching. If a custom profile is created with pod label matching, then the functionality will be enabled at that time. The pod label functionality will be deprecated in future versions of the Node Tuning Operator. 6.5.2. Custom tuning specification The custom resource (CR) for the Operator has two major sections. The first section, profile: , is a list of TuneD profiles and their names. The second, recommend: , defines the profile selection logic. Multiple custom tuning specifications can co-exist as multiple CRs in the Operator's namespace. The existence of new CRs or the deletion of old CRs is detected by the Operator. All existing custom tuning specifications are merged and appropriate objects for the containerized TuneD daemons are updated. Management state The Operator Management state is set by adjusting the default Tuned CR. By default, the Operator is in the Managed state and the spec.managementState field is not present in the default Tuned CR. Valid values for the Operator Management state are as follows: Managed: the Operator will update its operands as configuration resources are updated Unmanaged: the Operator will ignore changes to the configuration resources Removed: the Operator will remove its operands and resources the Operator provisioned Profile data The profile: section lists TuneD profiles and their names. 
profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD # ... - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings Recommended profiles The profile: selection logic is defined by the recommend: section of the CR. The recommend: section is a list of items to recommend the profiles based on a selection criteria. recommend: <recommend-item-1> # ... <recommend-item-n> The individual items of the list: - machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9 1 Optional. 2 A dictionary of key/value MachineConfig labels. The keys must be unique. 3 If omitted, profile match is assumed unless a profile with a higher priority matches first or machineConfigLabels is set. 4 An optional list. 5 Profile ordering priority. Lower numbers mean higher priority ( 0 is the highest priority). 6 A TuneD profile to apply on a match. For example tuned_profile_1 . 7 Optional operand configuration. 8 Turn debugging on or off for the TuneD daemon. Options are true for on or false for off. The default is false . 9 Turn reapply_sysctl functionality on or off for the TuneD daemon. Options are true for on and false for off. <match> is an optional list recursively defined as follows: - label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4 1 Node or pod label name. 2 Optional node or pod label value. If omitted, the presence of <label_name> is enough to match. 3 Optional object type ( node or pod ). If omitted, node is assumed. 4 An optional <match> list. If <match> is not omitted, all nested <match> sections must also evaluate to true . Otherwise, false is assumed and the profile with the respective <match> section will not be applied or recommended. Therefore, the nesting (child <match> sections) works as logical AND operator. Conversely, if any item of the <match> list matches, the entire <match> list evaluates to true . Therefore, the list acts as logical OR operator. If machineConfigLabels is defined, machine config pool based matching is turned on for the given recommend: list item. <mcLabels> specifies the labels for a machine config. The machine config is created automatically to apply host settings, such as kernel boot parameters, for the profile <tuned_profile_name> . This involves finding all machine config pools with machine config selector matching <mcLabels> and setting the profile <tuned_profile_name> on all nodes that are assigned the found machine config pools. To target nodes that have both master and worker roles, you must use the master role. The list items match and machineConfigLabels are connected by the logical OR operator. The match item is evaluated first in a short-circuit manner. Therefore, if it evaluates to true , the machineConfigLabels item is not considered. Important When using machine config pool based matching, it is advised to group nodes with the same hardware configuration into the same machine config pool. Not following this practice might result in TuneD operands calculating conflicting kernel parameters for two or more nodes sharing the same machine config pool. 
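Once custom Tuned CRs exist, you can check which profile the Operator recommended for each node by listing the Profile objects it maintains in its namespace. This is a sketch that assumes the Operator's standard namespace; the exact columns vary between versions:
$ oc get profiles.tuned.openshift.io -n openshift-cluster-node-tuning-operator
Each entry corresponds to a node and shows the TuneD profile currently applied to it.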
Example: Node or pod label based matching - match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node The CR above is translated for the containerized TuneD daemon into its recommend.conf file based on the profile priorities. The profile with the highest priority ( 10 ) is openshift-control-plane-es and, therefore, it is considered first. The containerized TuneD daemon running on a given node looks to see if there is a pod running on the same node with the tuned.openshift.io/elasticsearch label set. If not, the entire <match> section evaluates as false . If there is such a pod with the label, in order for the <match> section to evaluate to true , the node label also needs to be node-role.kubernetes.io/master or node-role.kubernetes.io/infra . If the labels for the profile with priority 10 matched, openshift-control-plane-es profile is applied and no other profile is considered. If the node/pod label combination did not match, the second highest priority profile ( openshift-control-plane ) is considered. This profile is applied if the containerized TuneD pod runs on a node with labels node-role.kubernetes.io/master or node-role.kubernetes.io/infra . Finally, the profile openshift-node has the lowest priority of 30 . It lacks the <match> section and, therefore, will always match. It acts as a profile catch-all to set openshift-node profile, if no other profile with higher priority matches on a given node. Example: Machine config pool based matching apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: "worker-custom" priority: 20 profile: openshift-node-custom To minimize node reboots, label the target nodes with a label the machine config pool's node selector will match, then create the Tuned CR above and finally create the custom machine config pool itself. Cloud provider-specific TuneD profiles With this functionality, all Cloud provider-specific nodes can conveniently be assigned a TuneD profile specifically tailored to a given Cloud provider on a OpenShift Container Platform cluster. This can be accomplished without adding additional node labels or grouping nodes into machine config pools. This functionality takes advantage of spec.providerID node object values in the form of <cloud-provider>://<cloud-provider-specific-id> and writes the file /var/lib/ocp-tuned/provider with the value <cloud-provider> in NTO operand containers. The content of this file is then used by TuneD to load provider-<cloud-provider> profile if such profile exists. The openshift profile that both openshift-control-plane and openshift-node profiles inherit settings from is now updated to use this functionality through the use of conditional profile loading. Neither NTO nor TuneD currently include any Cloud provider-specific profiles. 
However, it is possible to create a custom profile provider-<cloud-provider> that will be applied to all Cloud provider-specific cluster nodes. Example GCE Cloud provider profile apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. name: provider-gce Note Due to profile inheritance, any setting specified in the provider-<cloud-provider> profile will be overwritten by the openshift profile and its child profiles. 6.5.3. Default profiles set on a cluster The following are the default profiles set on a cluster. apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40 Starting with OpenShift Container Platform 4.9, all OpenShift TuneD profiles are shipped with the TuneD package. You can use the oc exec command to view the contents of these profiles: USD oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \; 6.5.4. Supported TuneD daemon plugins Excluding the [main] section, the following TuneD plugins are supported when using custom profiles defined in the profile: section of the Tuned CR: audio cpu disk eeepc_she modules mounts net scheduler scsi_host selinux sysctl sysfs usb video vm bootloader There is some dynamic tuning functionality provided by some of these plugins that is not supported. The following TuneD plugins are currently not supported: script systemd Note The TuneD bootloader plugin only supports Red Hat Enterprise Linux CoreOS (RHCOS) worker nodes. Additional resources Available TuneD Plugins Getting Started with TuneD 6.6. Remediating, fencing, and maintaining nodes When node-level failures occur, such as the kernel hangs or network interface controllers (NICs) fail, the work required from the cluster does not decrease, and workloads from affected nodes need to be restarted somewhere. Failures affecting these workloads risk data loss, corruption, or both. It is important to isolate the node, known as fencing , before initiating recovery of the workload, known as remediation , and recovery of the node. For more information on remediation, fencing, and maintaining nodes, see the Workload Availability for Red Hat OpenShift documentation. 6.7. Understanding node rebooting To reboot a node without causing an outage for applications running on the platform, it is important to first evacuate the pods. For pods that are made highly available by the routing tier, nothing else needs to be done. For other pods needing storage, typically databases, it is critical to ensure that they can remain in operation with one pod temporarily going offline. While implementing resiliency for stateful pods is different for each application, in all cases it is important to configure the scheduler to use node anti-affinity to ensure that the pods are properly spread across available nodes. 
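Before taking a node down, it can help to confirm how a workload's replicas are currently spread across nodes. A minimal sketch; the label selector and namespace are placeholders for your own application:
$ oc get pods -l app=<app_label> -n <namespace> -o wide
If several replicas report the same NODE value, consider adding anti-affinity rules, as described in the following sections, before rebooting that node.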
Another challenge is how to handle nodes that are running critical infrastructure such as the router or the registry. The same node evacuation process applies, though it is important to understand certain edge cases. 6.7.1. About rebooting nodes running critical infrastructure When rebooting nodes that host critical OpenShift Container Platform infrastructure components, such as router pods, registry pods, and monitoring pods, ensure that there are at least three nodes available to run these components. The following scenario demonstrates how service interruptions can occur with applications running on OpenShift Container Platform when only two nodes are available: Node A is marked unschedulable and all pods are evacuated. The registry pod running on that node is now redeployed on node B. Node B is now running both registry pods. Node B is now marked unschedulable and is evacuated. The service exposing the two pod endpoints on node B loses all endpoints, for a brief period of time, until they are redeployed to node A. When using three nodes for infrastructure components, this process does not result in a service disruption. However, due to pod scheduling, the last node that is evacuated and brought back into rotation does not have a registry pod. One of the other nodes has two registry pods. To schedule the third registry pod on the last node, use pod anti-affinity to prevent the scheduler from locating two registry pods on the same node. Additional information For more information on pod anti-affinity, see Placing pods relative to other pods using affinity and anti-affinity rules . 6.7.2. Rebooting a node using pod anti-affinity Pod anti-affinity is slightly different than node anti-affinity. Node anti-affinity can be violated if there are no other suitable locations to deploy a pod. Pod anti-affinity can be set to either required or preferred. With this in place, if only two infrastructure nodes are available and one is rebooted, the container image registry pod is prevented from running on the other node. oc get pods reports the pod as unready until a suitable node is available. Once a node is available and all pods are back in ready state, the node can be restarted. Procedure To reboot a node using pod anti-affinity: Edit the node specification to configure pod anti-affinity: apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #... 1 Stanza to configure pod anti-affinity. 2 Defines a preferred rule. 3 Specifies a weight for a preferred rule. The node with the highest weight is preferred. 4 Description of the pod label that determines when the anti-affinity rule applies. Specify a key and value for the label. 5 The operator represents the relationship between the label on the existing pod and the set of values in the matchExpression parameters in the specification for the new pod. Can be In , NotIn , Exists , or DoesNotExist . This example assumes the container image registry pod has a label of registry=default . Pod anti-affinity can use any Kubernetes match expression. Enable the MatchInterPodAffinity scheduler predicate in the scheduling policy file. Perform a graceful restart of the node. 6.7.3. Understanding how to reboot nodes running routers In most cases, a pod running an OpenShift Container Platform router exposes a host port. 
The PodFitsPorts scheduler predicate ensures that no router pods using the same port can run on the same node, and pod anti-affinity is achieved. If the routers are relying on IP failover for high availability, there is nothing else that is needed. For router pods relying on an external service such as AWS Elastic Load Balancing for high availability, it is that service's responsibility to react to router pod restarts. In rare cases, a router pod may not have a host port configured. In those cases, it is important to follow the recommended restart process for infrastructure nodes. 6.7.4. Rebooting a node gracefully Before rebooting a node, it is recommended to backup etcd data to avoid any data loss on the node. Note For single-node OpenShift clusters that require users to perform the oc login command rather than having the certificates in kubeconfig file to manage the cluster, the oc adm commands might not be available after cordoning and draining the node. This is because the openshift-oauth-apiserver pod is not running due to the cordon. You can use SSH to access the nodes as indicated in the following procedure. In a single-node OpenShift cluster, pods cannot be rescheduled when cordoning and draining. However, doing so gives the pods, especially your workload pods, time to properly stop and release associated resources. Procedure To perform a graceful restart of a node: Mark the node as unschedulable: USD oc adm cordon <node1> Drain the node to remove all the running pods: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force You might receive errors that pods associated with custom pod disruption budgets (PDB) cannot be evicted. Example error error when evicting pods/"rails-postgresql-example-1-72v2w" -n "rails" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. In this case, run the drain command again, adding the disable-eviction flag, which bypasses the PDB checks: USD oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction Access the node in debug mode: USD oc debug node/<node1> Change your root directory to /host : USD chroot /host Restart the node: USD systemctl reboot In a moment, the node enters the NotReady state. Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and perform the reboot. USD ssh core@<master-node>.<cluster_name>.<base_domain> USD sudo systemctl reboot After the reboot is complete, mark the node as schedulable by running the following command: USD oc adm uncordon <node1> Note With some single-node OpenShift clusters, the oc commands might not be available after you cordon and drain the node because the openshift-oauth-apiserver pod is not running. You can use SSH to connect to the node and uncordon it. USD ssh core@<target_node> USD sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig Verify that the node is ready: USD oc get node <node1> Example output NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8 Additional information For information on etcd data backup, see Backing up etcd data . 6.8. 
Freeing node resources using garbage collection As an administrator, you can use OpenShift Container Platform to ensure that your nodes are running efficiently by freeing up resources through garbage collection. The OpenShift Container Platform node performs two types of garbage collection: Container garbage collection: Removes terminated containers. Image garbage collection: Removes images not referenced by any running pods. 6.8.1. Understanding how terminated containers are removed through garbage collection Container garbage collection removes terminated containers by using eviction thresholds. When eviction thresholds are set for garbage collection, the node tries to keep any container for any pod accessible from the API. If the pod has been deleted, the containers will be as well. Containers are preserved as long the pod is not deleted and the eviction threshold is not reached. If the node is under disk pressure, it will remove containers and their logs will no longer be accessible using oc logs . eviction-soft - A soft eviction threshold pairs an eviction threshold with a required administrator-specified grace period. eviction-hard - A hard eviction threshold has no grace period, and if observed, OpenShift Container Platform takes immediate action. The following table lists the eviction thresholds: Table 6.2. Variables for configuring container garbage collection Node condition Eviction signal Description MemoryPressure memory.available The available memory on the node. DiskPressure nodefs.available nodefs.inodesFree imagefs.available imagefs.inodesFree The available disk space or inodes on the node root file system, nodefs , or image file system, imagefs . Note For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. If a node is oscillating above and below a soft eviction threshold, but not exceeding its associated grace period, the corresponding node would constantly oscillate between true and false . As a consequence, the scheduler could make poor scheduling decisions. To protect against this oscillation, use the evictionpressure-transition-period flag to control how long OpenShift Container Platform must wait before transitioning out of a pressure condition. OpenShift Container Platform will not set an eviction threshold as being met for the specified pressure condition for the period specified before toggling the condition back to false. Note Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. You cannot set an eviction pressure transition period to zero seconds. 6.8.2. Understanding how images are removed through garbage collection Image garbage collection removes images that are not referenced by any running pods. OpenShift Container Platform determines which images to remove from a node based on the disk usage that is reported by cAdvisor . The policy for image garbage collection is based on two conditions: The percent of disk usage (expressed as an integer) which triggers image garbage collection. The default is 85 . The percent of disk usage (expressed as an integer) to which image garbage collection attempts to free. Default is 80 . For image garbage collection, you can modify any of the following variables using a custom resource. Table 6.3. 
Variables for configuring image garbage collection Setting Description imageMinimumGCAge The minimum age for an unused image before the image is removed by garbage collection. The default is 2m . imageGCHighThresholdPercent The percent of disk usage, expressed as an integer, which triggers image garbage collection. The default is 85 . This value must be greater than the imageGCLowThresholdPercent value. imageGCLowThresholdPercent The percent of disk usage, expressed as an integer, to which image garbage collection attempts to free. The default is 80 . This value must be less than the imageGCHighThresholdPercent value. Two lists of images are retrieved in each garbage collector run: A list of images currently running in at least one pod. A list of images available on a host. As new containers are run, new images appear. All images are marked with a time stamp. If the image is running (the first list above) or is newly detected (the second list above), it is marked with the current time. The remaining images are already marked from the spins. All images are then sorted by the time stamp. Once the collection starts, the oldest images get deleted first until the stopping criterion is met. 6.8.3. Configuring garbage collection for containers and images As an administrator, you can configure how OpenShift Container Platform performs garbage collection by creating a kubeletConfig object for each machine config pool. Note OpenShift Container Platform supports only one kubeletConfig object for each machine config pool. You can configure any combination of the following: Soft eviction for containers Hard eviction for containers Eviction for images Container garbage collection removes terminated containers. Image garbage collection removes images that are not referenced by any running pods. Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Important If there is one file system, or if /var/lib/kubelet and /var/lib/containers/ are in the same file system, the settings with the highest values trigger evictions, as those are met first. The file system triggers the eviction. 
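To check whether /var/lib/kubelet and /var/lib/containers are on a single file system on a given node, one option is to compare the mounts from a debug shell; a minimal sketch, with the node name as a placeholder:

$ oc debug node/<node_name> -- chroot /host df -h /var/lib/kubelet /var/lib/containers

If both paths report the same Filesystem, the higher matching thresholds are the ones that are met first and trigger eviction, as described in the Important note above.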
Sample configuration for a container garbage collection CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: evictionSoft: 3 memory.available: "500Mi" 4 nodefs.available: "10%" nodefs.inodesFree: "5%" imagefs.available: "15%" imagefs.inodesFree: "10%" evictionSoftGracePeriod: 5 memory.available: "1m30s" nodefs.available: "1m30s" nodefs.inodesFree: "1m30s" imagefs.available: "1m30s" imagefs.inodesFree: "1m30s" evictionHard: 6 memory.available: "200Mi" nodefs.available: "5%" nodefs.inodesFree: "4%" imagefs.available: "10%" imagefs.inodesFree: "5%" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #... 1 Name for the object. 2 Specify the label from the machine config pool. 3 For container garbage collection: Type of eviction: evictionSoft or evictionHard . 4 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. 5 For container garbage collection: Grace periods for the soft eviction. This parameter does not apply to eviction-hard . 6 For container garbage collection: Eviction thresholds based on a specific eviction trigger signal. For evictionHard you must specify all of these parameters. If you do not specify all parameters, only the specified parameters are applied and the garbage collection will not function properly. 7 For container garbage collection: The duration to wait before transitioning out of an eviction pressure condition. Setting the evictionPressureTransitionPeriod parameter to 0 configures the default value of 5 minutes. 8 For image garbage collection: The minimum age for an unused image before the image is removed by garbage collection. 9 For image garbage collection: Image garbage collection is triggered at the specified percent of disk usage (expressed as an integer). This value must be greater than the imageGCLowThresholdPercent value. 10 For image garbage collection: Image garbage collection attempts to free resources to the specified percent of disk usage (expressed as an integer). This value must be less than the imageGCHighThresholdPercent value. Run the following command to create the CR: USD oc create -f <file_name>.yaml For example: USD oc create -f gc-container.yaml Example output kubeletconfig.machineconfiguration.openshift.io/gc-container created Verification Verify that garbage collection is active by entering the following command. The Machine Config Pool you specified in the custom resource appears with UPDATING as 'true` until the change is fully implemented: USD oc get machineconfigpool Example output NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True 6.9. Allocating resources for nodes in an OpenShift Container Platform cluster To provide more reliable scheduling and minimize node resource overcommitment, reserve a portion of the CPU and memory resources for use by the underlying node components, such as kubelet and kube-proxy , and the remaining system components, such as sshd and NetworkManager . By specifying the resources to reserve, you provide the scheduler with more information about the remaining CPU and memory resources that a node has available for use by pods. 
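The resources that remain available for pods are exposed in the node status. One way to compare a node's raw capacity with its allocatable resources is shown in the following sketch; the node name is a placeholder:

$ oc get node <node_name> -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'

Allocatable is what remains after the reserved resources and hard eviction thresholds are withheld, as described in the following sections.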
You can allow OpenShift Container Platform to automatically determine the optimal system-reserved CPU and memory resources for your nodes, or you can manually determine and set the best resources for your nodes. Important To manually set resource values, you must use a kubelet config CR. You cannot use a machine config CR. 6.9.1. Understanding how to allocate resources for nodes CPU and memory resources reserved for node components in OpenShift Container Platform are based on two node settings: Setting Description kube-reserved This setting is not used with OpenShift Container Platform. Add the CPU and memory resources that you planned to reserve to the system-reserved setting. system-reserved This setting identifies the resources to reserve for the node components and system components, such as CRI-O and Kubelet. The default settings depend on the OpenShift Container Platform and Machine Config Operator versions. Confirm the default systemReserved parameter on the machine-config-operator repository. If a flag is not set, the defaults are used. If none of the flags are set, the allocated resource is set to the node's capacity as it was before the introduction of allocatable resources. Note Any CPUs specifically reserved using the reservedSystemCPUs parameter are not available for allocation using kube-reserved or system-reserved . 6.9.1.1. How OpenShift Container Platform computes allocated resources An allocated amount of a resource is computed based on the following formula:

[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]

Note The withholding of Hard-Eviction-Thresholds from Allocatable improves system reliability because the value for Allocatable is enforced for pods at the node level. If Allocatable is negative, it is set to 0 . Each node reports the system resources that are used by the container runtime and kubelet. To simplify configuring the system-reserved parameter, view the resource use for the node by using the node summary API. The node summary is available at /api/v1/nodes/<node>/proxy/stats/summary . 6.9.1.2. How nodes enforce resource constraints The node can limit the total amount of resources that pods consume based on the configured allocatable value. This feature significantly improves the reliability of the node by preventing pods from using CPU and memory resources that are needed by system services such as the container runtime and node agent. To improve node reliability, administrators should reserve resources based on a target for resource use. The node enforces resource constraints by using a new cgroup hierarchy that enforces quality of service. All pods are launched in a dedicated cgroup hierarchy that is separate from system daemons. Administrators should treat system daemons similarly to pods that have a guaranteed quality of service. System daemons can burst within their bounding control groups and this behavior must be managed as part of cluster deployments. Reserve CPU and memory resources for system daemons by specifying the amount of CPU and memory resources in system-reserved . Enforcing system-reserved limits can prevent critical system services from receiving CPU and memory resources. As a result, a critical system service can be ended by the out-of-memory killer. The recommendation is to enforce system-reserved only if you have profiled the nodes exhaustively to determine precise estimates and you are confident that critical system services can recover if any process in that group is ended by the out-of-memory killer.
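The node summary API mentioned above can be queried directly through the API server. A minimal sketch, with the node name as a placeholder:

$ oc get --raw /api/v1/nodes/<node_name>/proxy/stats/summary

The output reports the CPU, memory, and file system usage of the node, which you can use as a starting point when sizing the system-reserved setting. 6.9.1.3.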
Understanding Eviction Thresholds If a node is under memory pressure, it can impact the entire node and all pods running on the node. For example, a system daemon that uses more than its reserved amount of memory can trigger an out-of-memory event. To avoid or reduce the probability of system out-of-memory events, the node provides out-of-resource handling. You can reserve some memory using the --eviction-hard flag. The node attempts to evict pods whenever memory availability on the node drops below the absolute value or percentage. If system daemons do not exist on a node, pods are limited to the memory capacity - eviction-hard . For this reason, resources set aside as a buffer for eviction before reaching out of memory conditions are not available for pods. The following is an example to illustrate the impact of node allocatable for memory: Node capacity is 32Gi --system-reserved is 3Gi --eviction-hard is set to 100Mi . For this node, the effective node allocatable value is 28.9Gi . If the node and system components use all their reservation, the memory available for pods is 28.9Gi , and kubelet evicts pods when it exceeds this threshold. If you enforce node allocatable, 28.9Gi , with top-level cgroups, then pods can never exceed 28.9Gi . Evictions are not performed unless system daemons consume more than 3.1Gi of memory. If system daemons do not use up all their reservation, with the above example, pods would face memcg OOM kills from their bounding cgroup before node evictions kick in. To better enforce QoS under this situation, the node applies the hard eviction thresholds to the top-level cgroup for all pods to be Node Allocatable + Eviction Hard Thresholds . If system daemons do not use up all their reservation, the node will evict pods whenever they consume more than 28.9Gi of memory. If eviction does not occur in time, a pod will be OOM killed if pods consume 29Gi of memory. 6.9.1.4. How the scheduler determines resource availability The scheduler uses the value of node.Status.Allocatable instead of node.Status.Capacity to decide if a node will become a candidate for pod scheduling. By default, the node will report its machine capacity as fully schedulable by the cluster. 6.9.2. Automatically allocating resources for nodes OpenShift Container Platform can automatically determine the optimal system-reserved CPU and memory resources for nodes associated with a specific machine config pool and update the nodes with those values when the nodes start. By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . To automatically determine and allocate the system-reserved resources on nodes, create a KubeletConfig custom resource (CR) to set the autoSizingReserved: true parameter. A script on each node calculates the optimal values for the respective reserved resources based on the installed CPU and memory capacity on each node. The script takes into account that increased capacity requires a corresponding increase in the reserved resources. Automatically determining the optimal system-reserved settings ensures that your cluster is running efficiently and prevents node failure due to resource starvation of system components, such as CRI-O and kubelet, without your needing to manually calculate and update the values. This feature is disabled by default. 
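Because automatic sizing is disabled unless a kubelet config enables it, you can check whether an existing KubeletConfig object already sets the parameter; a hedged sketch:

$ oc get kubeletconfig -o yaml | grep autoSizingReserved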
Prerequisites Obtain the label associated with the static MachineConfigPool object for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels . Tip If an appropriate label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change: Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Assign a name to CR. 2 Add the autoSizingReserved parameter set to true to allow OpenShift Container Platform to automatically determine and allocate the system-reserved resources on the nodes associated with the specified label. To disable automatic allocation on those nodes, set this parameter to false . 3 Specify the label from the machine config pool that you configured in the "Prerequisites" section. You can choose any desired labels for the machine config pool, such as custom-kubelet: small-pods , or the default label, pools.operator.machineconfiguration.openshift.io/worker: "" . The example enables automatic resource allocation on all worker nodes. OpenShift Container Platform drains the nodes, applies the kubelet config, and restarts the nodes. Create the CR by entering the following command: USD oc create -f <file_name>.yaml Verification Log in to a node you configured by entering the following command: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: # chroot /host View the /etc/node-sizing.env file: Example output SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08 The kubelet uses the system-reserved values in the /etc/node-sizing.env file. In the example, the worker nodes are allocated 0.08 CPU and 3 Gi of memory. It can take several minutes for the optimal values to appear. 6.9.3. Manually allocating resources for nodes OpenShift Container Platform supports the CPU and memory resource types for allocation. The ephemeral-resource resource type is also supported. For the cpu type, you specify the resource quantity in units of cores, such as 200m , 0.5 , or 1 . For memory and ephemeral-storage , you specify the resource quantity in units of bytes, such as 200Ki , 50Mi , or 5Gi . By default, the system-reserved CPU is 500m and system-reserved memory is 1Gi . As an administrator, you can set these values by using a kubelet config custom resource (CR) through a set of <resource_type>=<resource_quantity> pairs (e.g., cpu=200m,memory=512Mi ). Important You must use a kubelet config CR to manually set resource values. You cannot use a machine config CR. For details on the recommended system-reserved values, refer to the recommended system-reserved values . 
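As an illustration of the unit formats described above, a systemReserved stanza in a kubelet config CR can combine several resource types; the following values are placeholders, not recommendations:

kubeletConfig:
  systemReserved:
    cpu: 500m
    memory: 1Gi
    ephemeral-storage: 1Gi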
Prerequisites Obtain the label associated with the static MachineConfigPool CRD for the type of node you want to configure by entering the following command: USD oc edit machineconfigpool <name> For example: USD oc edit machineconfigpool worker Example output apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: "2022-11-16T15:34:25Z" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: "" 1 name: worker #... 1 The label appears under Labels. Tip If the label is not present, add a key/value pair such as: Procedure Create a custom resource (CR) for your configuration change. Sample configuration for a resource allocation CR apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #... 1 Assign a name to CR. 2 Specify the label from the machine config pool. 3 Specify the resources to reserve for the node components and system components. Run the following command to create the CR: USD oc create -f <file_name>.yaml 6.10. Allocating specific CPUs for nodes in a cluster When using the static CPU Manager policy , you can reserve specific CPUs for use by specific nodes in your cluster. For example, on a system with 24 CPUs, you could reserve CPUs numbered 0 - 3 for the control plane allowing the compute nodes to use CPUs 4 - 23. 6.10.1. Reserving CPUs for nodes To explicitly define a list of CPUs that are reserved for specific nodes, create a KubeletConfig custom resource (CR) to define the reservedSystemCPUs parameter. This list supersedes the CPUs that might be reserved using the systemReserved parameter. Procedure Obtain the label associated with the machine config pool (MCP) for the type of node you want to configure: USD oc describe machineconfigpool <name> For example: USD oc describe machineconfigpool worker Example output Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #... 1 Get the MCP label. Create a YAML file for the KubeletConfig CR: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: "0,1,2,3" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 3 #... 1 Specify a name for the CR. 2 Specify the core IDs of the CPUs you want to reserve for the nodes associated with the MCP. 3 Specify the label from the MCP. Create the CR object: USD oc create -f <file_name>.yaml Additional resources For more information on the systemReserved parameter, see Allocating resources for nodes in an OpenShift Container Platform cluster . 6.11. Enabling TLS security profiles for the kubelet You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by the kubelet when it is acting as an HTTP server. The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. 
A TLS security profile defines the TLS ciphers that the Kubernetes API server must use when connecting with the kubelet to protect communication between the kubelet and the Kubernetes API server. Note By default, when the kubelet acts as a client with the Kubernetes API server, it automatically negotiates the TLS parameters with the API server. 6.11.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 6.4. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 6.11.2. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig # ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" # ... You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. 
Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #... 6.12. Creating infrastructure nodes Important You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API. Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation. To view the platform type for your cluster, run the following command: USD oc get infrastructure cluster -o jsonpath='{.status.platform}' You can use infrastructure machine sets to create machines that host only infrastructure components, such as the default router, the integrated container image registry, and the components for cluster metrics and monitoring. These infrastructure machines are not counted toward the total number of subscriptions that are required to run the environment. In a production deployment, it is recommended that you deploy at least three machine sets to hold infrastructure components. Both OpenShift Logging and Red Hat OpenShift Service Mesh deploy Elasticsearch, which requires three instances to be installed on different nodes. Each of these nodes can be deployed to different availability zones for high availability. This configuration requires three different machine sets, one for each availability zone. In global Azure regions that do not have multiple availability zones, you can use availability sets to ensure high availability. 
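Before creating dedicated infrastructure machine sets, it can be useful to review the compute machine sets that already exist in the cluster:

$ oc get machinesets -n openshift-machine-api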
Note After adding the NoSchedule taint on the infrastructure node, existing DNS pods running on that node are marked as misscheduled . You must either delete or add toleration on misscheduled DNS pods . 6.12.1. OpenShift Container Platform infrastructure components Each self-managed Red Hat OpenShift subscription includes entitlements for OpenShift Container Platform and other OpenShift-related components. These entitlements are included for running OpenShift Container Platform control plane and infrastructure workloads and do not need to be accounted for during sizing. To qualify as an infrastructure node and use the included entitlement, only components that are supporting the cluster, and not part of an end-user application, can run on those instances. Examples include the following components: Kubernetes and OpenShift Container Platform control plane services The default router The integrated container image registry The HAProxy-based Ingress Controller The cluster metrics collection, or monitoring service, including components for monitoring user-defined projects Cluster aggregated logging Red Hat Quay Red Hat OpenShift Data Foundation Red Hat Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift GitOps Red Hat OpenShift Pipelines Red Hat OpenShift Service Mesh Any node that runs any other container, pod, or component is a worker node that your subscription must cover. For information about infrastructure nodes and which components can run on infrastructure nodes, see the "Red Hat OpenShift control plane and infrastructure nodes" section in the OpenShift sizing and subscription guide for enterprise Kubernetes document. To create an infrastructure node, you can use a machine set , label the node , or use a machine config pool . 6.12.1.1. Creating an infrastructure node Important See Creating infrastructure machine sets for installer-provisioned infrastructure environments or for any cluster where the control plane nodes are managed by the machine API. Requirements of the cluster dictate that infrastructure, also called infra nodes, be provisioned. The installer only provides provisions for control plane and worker nodes. Worker nodes can be designated as infrastructure nodes or application, also called app , nodes through labeling. Procedure Add a label to the worker node that you want to act as application node: USD oc label node <node-name> node-role.kubernetes.io/app="" Add a label to the worker nodes that you want to act as infrastructure nodes: USD oc label node <node-name> node-role.kubernetes.io/infra="" Check to see if applicable nodes now have the infra role and app roles: USD oc get nodes Create a default cluster-wide node selector. The default node selector is applied to pods created in all namespaces. This creates an intersection with any existing node selectors on a pod, which additionally constrains the pod's selector. Important If the default node selector key conflicts with the key of a pod's label, then the default node selector is not applied. However, do not set a default node selector that might cause a pod to become unschedulable. For example, setting the default node selector to a specific node role, such as node-role.kubernetes.io/infra="" , when a pod's label is set to a different node role, such as node-role.kubernetes.io/master="" , can cause the pod to become unschedulable. For this reason, use caution when setting the default node selector to specific node roles. 
You can alternatively use a project node selector to avoid cluster-wide node selector key conflicts. Edit the Scheduler object: USD oc edit scheduler cluster Add the defaultNodeSelector field with the appropriate node selector: apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra="" 1 # ... 1 This example node selector deploys pods on infrastructure nodes by default. Save the file to apply the changes. You can now move infrastructure resources to the newly labeled infra nodes. Additional resources Moving resources to infrastructure machine sets
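As a quick check after labeling, a label selector lists the nodes that carry each role:

$ oc get nodes -l node-role.kubernetes.io/infra
$ oc get nodes -l node-role.kubernetes.io/app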
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/nodes/working-with-nodes
6.6. Resource Operations
6.6. Resource Operations To ensure that resources remain healthy, you can add a monitoring operation to a resource's definition. If you do not specify a monitoring operation for a resource, by default the pcs command will create a monitoring operation, with an interval that is determined by the resource agent. If the resource agent does not provide a default monitoring interval, the pcs command will create a monitoring operation with an interval of 60 seconds. Table 6.4, "Properties of an Operation" summarizes the properties of a resource monitoring operation. Table 6.4. Properties of an Operation Field Description id Unique name for the action. The system assigns this when you configure an operation. name The action to perform. Common values: monitor , start , stop interval If set to a nonzero value, a recurring operation is created that repeats at this frequency, in seconds. A nonzero value makes sense only when the action name is set to monitor . A recurring monitor action will be executed immediately after a resource start completes, and subsequent monitor actions are scheduled starting at the time the monitor action completed. For example, if a monitor action with interval=20s is executed at 01:00:00, the monitor action does not occur at 01:00:20, but at 20 seconds after the first monitor action completes. If set to zero, which is the default value, this parameter allows you to provide values to be used for operations created by the cluster. For example, if the interval is set to zero, the name of the operation is set to start , and the timeout value is set to 40, then Pacemaker will use a timeout of 40 seconds when starting this resource. A monitor operation with a zero interval allows you to set the timeout / on-fail / enabled values for the probes that Pacemaker does at startup to get the current status of all resources when the defaults are not desirable. timeout If the operation does not complete in the amount of time set by this parameter, abort the operation and consider it failed. The default value is the value of timeout if set with the pcs resource op defaults command, or 20 seconds if it is not set. If you find that your system includes a resource that requires more time than the system allows to perform an operation (such as start , stop , or monitor ), investigate the cause and if the lengthy execution time is expected you can increase this value. The timeout value is not a delay of any kind, nor does the cluster wait the entire timeout period if the operation returns before the timeout period has completed. on-fail The action to take if this action ever fails. Allowed values: * ignore - Pretend the resource did not fail * block - Do not perform any further operations on the resource * stop - Stop the resource and do not start it elsewhere * restart - Stop the resource and start it again (possibly on a different node) * fence - STONITH the node on which the resource failed * standby - Move all resources away from the node on which the resource failed The default for the stop operation is fence when STONITH is enabled and block otherwise. All other operations default to restart . enabled If false , the operation is treated as if it does not exist. Allowed values: true , false 6.6.1. Configuring Resource Operations You can configure monitoring operations when you create a resource, using the following command. For example, the following command creates an IPaddr2 resource with a monitoring operation. 
The new resource is called VirtualIP with an IP address of 192.168.0.99 and a netmask of 24 on eth2 . A monitoring operation will be performed every 30 seconds. Alternatively, you can add a monitoring operation to an existing resource with the following command. Use the following command to delete a configured resource operation. Note You must specify the exact operation properties to properly remove an existing operation. To change the values of a monitoring option, you can update the resource. For example, you can create a VirtualIP resource with the following command. By default, this command creates these operations. To change the timeout for the stop operation, execute the following command. Note When you update a resource's operation with the pcs resource update command, any options you do not specifically call out are reset to their default values. 6.6.2. Configuring Global Resource Operation Defaults You can use the following command to set global default values for monitoring operations. For example, the following command sets a global default of a timeout value of 240 seconds for all monitoring operations. To display the currently configured default values for monitoring operations, do not specify any options when you execute the pcs resource op defaults command. For example, the following command displays the default monitoring operation values for a cluster which has been configured with a timeout value of 240 seconds. Note that a cluster resource will use the global default only when the option is not specified in the cluster resource definition. By default, resource agents define the timeout option for all operations. For the global operation timeout value to be honored, you must create the cluster resource without an explicit timeout option, or you must remove the timeout option by updating the cluster resource, as in the following command. For example, after setting a global default of a timeout value of 240 seconds for all monitoring operations and updating the cluster resource VirtualIP to remove the timeout value for the monitor operation, the resource VirtualIP will then have timeout values for start , stop , and monitor operations of 20s, 40s, and 240s, respectively. The global default timeout value is applied here only to the monitor operation, where the default timeout option was removed by the command.
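As a combined illustration, a minimal sketch that reuses only the resource name and values from the examples in this section:
pcs resource op defaults timeout=240s
pcs resource update VirtualIP op monitor interval=10s
pcs resource show VirtualIP
The first command sets the global 240-second default, the second removes the explicit timeout from the monitor operation, and the third confirms that the monitor operation now inherits the global default.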
[ "pcs resource create resource_id standard:provider:type|type [ resource_options ] [op operation_action operation_options [ operation_type operation_options ]...]", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2 op monitor interval=30s", "pcs resource op add resource_id operation_action [ operation_properties ]", "pcs resource op remove resource_id operation_name operation_properties", "pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24 nic=eth2", "Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) stop interval=0s timeout=20s (VirtualIP-stop-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s)", "pcs resource update VirtualIP op stop interval=0s timeout=40s pcs resource show VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s timeout=20s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)", "pcs resource op defaults [ options ]", "pcs resource op defaults timeout=240s", "pcs resource op defaults timeout: 240s", "pcs resource update VirtualIP op monitor interval=10s", "pcs resource show VirtualIP Resource: VirtualIP (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.0.99 cidr_netmask=24 nic=eth2 Operations: start interval=0s timeout=20s (VirtualIP-start-timeout-20s) monitor interval=10s (VirtualIP-monitor-interval-10s) stop interval=0s timeout=40s (VirtualIP-name-stop-interval-0s-timeout-40s)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-resourceoperate-haar
function::stp_pid
function::stp_pid Name function::stp_pid - The process id of the stapio process Synopsis Arguments None Description This function returns the process id of the stapio process that launched this script. There could be other SystemTap scripts and stapio processes running on the system.
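As an illustration only (a minimal one-line script, not taken from the tapset reference), the function can be exercised with the stap command:
stap -e 'probe begin { printf("stapio pid: %d\n", stp_pid()); exit() }'
This prints the process id of the stapio process that launched the script and then exits.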
[ "stp_pid:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-stp-pid
17.3. Network Address Translation
17.3. Network Address Translation By default, virtual network switches operate in NAT mode. They use IP masquerading rather than Source-NAT (SNAT) or Destination-NAT (DNAT). IP masquerading enables connected guests to use the host physical machine IP address for communication with any external network. By default, computers external to the host physical machine cannot communicate with the guests inside it when the virtual network switch is operating in NAT mode, as shown in the following diagram: Figure 17.3. Virtual network switch using NAT with two guests Warning Virtual network switches use NAT configured by iptables rules. Editing these rules while the switch is running is not recommended, as incorrect rules may result in the switch being unable to communicate. If the switch is not running, you can set the public IP range for forward mode NAT in order to create a port masquerading range by running:
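A hypothetical, fully spelled-out rule of this kind might look as follows (the guest subnet 192.168.122.0/24 and the public range 203.0.113.1-203.0.113.10 are illustrative values only, not defaults taken from this guide):
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -j SNAT --to-source 203.0.113.1-203.0.113.10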
[ "iptables -j SNAT --to-source [start]-[end]" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-Virtual_Networking-Network_Address_Translation
Chapter 78. KafkaClientAuthenticationOAuth schema reference
Chapter 78. KafkaClientAuthenticationOAuth schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationOAuth schema properties To configure OAuth client authentication, set the type property to oauth . OAuth authentication can be configured using one of the following options: Client ID and secret Client ID and refresh token Access token Username and password TLS Client ID and secret You can configure the address of your authorization server in the tokenEndpointUri property together with the client ID and client secret used in authentication. The OAuth client will connect to the OAuth server, authenticate using the client ID and secret and get an access token which it will use to authenticate with the Kafka broker. In the clientSecret property, specify a link to a Secret containing the client secret. An example of OAuth client authentication using client ID and client secret authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret Optionally, scope and audience can be specified if needed. Client ID and refresh token You can configure the address of your OAuth server in the tokenEndpointUri property together with the OAuth client ID and refresh token. The OAuth client will connect to the OAuth server, authenticate using the client ID and refresh token and get an access token which it will use to authenticate with the Kafka broker. In the refreshToken property, specify a link to a Secret containing the refresh token. An example of OAuth client authentication using client ID and refresh token authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token Access token You can configure the access token used for authentication with the Kafka broker directly. In this case, you do not specify the tokenEndpointUri . In the accessToken property, specify a link to a Secret containing the access token. An example of OAuth client authentication using only an access token authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token Username and password OAuth username and password configuration uses the OAuth Resource Owner Password Grant mechanism. The mechanism is deprecated, and is only supported to enable integration in environments where client credentials (ID and secret) cannot be used. You might need to use user accounts if your access management system does not support another approach or user accounts are required for authentication. A typical approach is to create a special user account in your authorization server that represents your client application. You then give the account a long randomly generated password and a very limited set of permissions. For example, the account can only connect to your Kafka cluster, but is not allowed to use any other services or login to the user interface. Consider using a refresh token mechanism first. You can configure the address of your authorization server in the tokenEndpointUri property together with the client ID, username and the password used in authentication. 
The OAuth client will connect to the OAuth server, authenticate using the username, the password, the client ID, and optionally even the client secret to obtain an access token which it will use to authenticate with the Kafka broker. In the passwordSecret property, specify a link to a Secret containing the password. Normally, you also have to configure a clientId using a public OAuth client. If you are using a confidential OAuth client, you also have to configure a clientSecret . An example of OAuth client authentication using username and a password with a public client authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-public-client-id An example of OAuth client authentication using a username and a password with a confidential client authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-confidential-client-id clientSecret: secretName: my-confidential-client-oauth-secret key: client-secret Optionally, scope and audience can be specified if needed. TLS Accessing the OAuth server using the HTTPS protocol does not require any additional configuration as long as the TLS certificates used by it are signed by a trusted certification authority and its hostname is listed in the certificate. If your OAuth server is using certificates which are self-signed or are signed by a certification authority which is not trusted, you can configure a list of trusted certificates in the custom resource. The tlsTrustedCertificates property contains a list of secrets with key names under which the certificates are stored. The certificates must be stored in X509 format. An example of TLS certificates provided authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt The OAuth client will by default verify that the hostname of your OAuth server matches either the certificate subject or one of the alternative DNS names. If it is not required, you can disable the hostname verification. An example of disabled TLS hostname verification authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true 78.1. KafkaClientAuthenticationOAuth schema properties The type property is a discriminator that distinguishes use of the KafkaClientAuthenticationOAuth type from KafkaClientAuthenticationTls , KafkaClientAuthenticationScramSha256 , KafkaClientAuthenticationScramSha512 , KafkaClientAuthenticationPlain . It must have the value oauth for the type KafkaClientAuthenticationOAuth . Property Property type Description accessToken GenericSecretSource Link to OpenShift Secret containing the access token which was obtained from the authorization server. accessTokenIsJwt boolean Configure whether access token should be treated as JWT. This should be set to false if the authorization server returns opaque tokens. 
Defaults to true . audience string OAuth audience to use when authenticating against the authorization server. Some authorization servers require the audience to be explicitly set. The possible values depend on how the authorization server is configured. By default, audience is not specified when performing the token endpoint request. clientId string OAuth Client ID which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. clientSecret GenericSecretSource Link to OpenShift Secret containing the OAuth client secret which the Kafka client can use to authenticate against the OAuth server and use the token endpoint URI. connectTimeoutSeconds integer The connect timeout in seconds when connecting to authorization server. If not set, the effective connect timeout is 60 seconds. disableTlsHostnameVerification boolean Enable or disable TLS hostname verification. Default value is false . enableMetrics boolean Enable or disable OAuth metrics. Default value is false . httpRetries integer The maximum number of retries to attempt if an initial HTTP request fails. If not set, the default is to not attempt any retries. httpRetryPauseMs integer The pause to take before retrying a failed HTTP request. If not set, the default is to not pause at all but to immediately repeat a request. includeAcceptHeader boolean Whether the Accept header should be set in requests to the authorization servers. The default value is true . maxTokenExpirySeconds integer Set or limit time-to-live of the access tokens to the specified number of seconds. This should be set if the authorization server returns opaque tokens. passwordSecret PasswordSecretSource Reference to the Secret which holds the password. readTimeoutSeconds integer The read timeout in seconds when connecting to authorization server. If not set, the effective read timeout is 60 seconds. refreshToken GenericSecretSource Link to OpenShift Secret containing the refresh token which can be used to obtain access token from the authorization server. scope string OAuth scope to use when authenticating against the authorization server. Some authorization servers require this to be set. The possible values depend on how authorization server is configured. By default scope is not specified when doing the token endpoint request. tlsTrustedCertificates CertSecretSource array Trusted certificates for TLS connection to the OAuth server. tokenEndpointUri string Authorization server token endpoint URI. type string Must be oauth . username string Username used for the authentication.
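The Secret objects referenced by properties such as clientSecret , refreshToken , or accessToken must already exist in the namespace. As a hypothetical illustration (reusing the secret name from the client ID and secret example above; the namespace is a placeholder), such a Secret could be created with:
oc create secret generic my-client-oauth-secret --from-literal=client-secret='<client_secret_value>' -n <namespace>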
[ "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token", "authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-public-client-id", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-confidential-client-id clientSecret: secretName: my-confidential-client-oauth-secret key: client-secret", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaClientAuthenticationOAuth-reference
Chapter 9. Changes in 3scale
Chapter 9. Changes in 3scale Note In 3scale 2.14, support for deploying databases within the cluster as part of the standard 3scale deployment was deprecated. In 3scale 2.16, Red Hat supports only self-deployed databases, whether within the cluster or external to it. This section lists current and future 3scale changes. Deprecated features 3scale support for OpenTracing is now deprecated in favor of OpenTelemetry. In the upcoming version, 3scale 2.16, OpenTracing will no longer be available. The 3scale Toolbox Command Line Tool is deprecated and will no longer receive further enhancements. The recommended way to handle provisioning and automation needs is the 3scale Application Capabilities operator. OpenShift APIs for Data Protection is now the standard and recommended backup and restore mechanism in 3scale. The old backup and restore procedures are deprecated and will be removed from the documentation in future versions. 3scale no longer supports slaves for Redis configurations. If slaves are enabled on internal Redis configurations, the operator will override and disable them.
null
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/release_notes_for_red_hat_3scale_api_management_2.15_on-premises/changes_in_3scale
Chapter 10. Upgrading
Chapter 10. Upgrading The upgrade of the OpenShift sandboxed containers components consists of the following steps: Upgrade OpenShift Container Platform to update the Kata runtime and its dependencies. Upgrade the OpenShift sandboxed containers Operator to update the Operator subscription. You can upgrade OpenShift Container Platform before or after the OpenShift sandboxed containers Operator upgrade, with the one exception noted below. Always apply the KataConfig patch immediately after upgrading the OpenShift sandboxed containers Operator. 10.1. Upgrading resources Red Hat Enterprise Linux CoreOS (RHCOS) extensions deploy the OpenShift sandboxed containers resources onto the cluster. The RHCOS extension sandboxed containers contains the required components to run OpenShift sandboxed containers, such as the Kata containers runtime, the hypervisor QEMU, and other dependencies. You upgrade the extension by upgrading the cluster to a new release of OpenShift Container Platform. For more information about upgrading OpenShift Container Platform, see Updating Clusters . 10.2. Upgrading the Operator Use Operator Lifecycle Manager (OLM) to upgrade the OpenShift sandboxed containers Operator either manually or automatically. Selecting between manual or automatic upgrade during the initial deployment determines the future upgrade mode. For manual upgrades, the OpenShift Container Platform web console shows the available updates that the cluster administrator can install. For more information about upgrading the OpenShift sandboxed containers Operator in Operator Lifecycle Manager (OLM), see Updating installed Operators . 10.3. Updating the pod VM image For AWS, Azure, and IBM deployments, you must update the pod VM image. Upgrading the OpenShift sandboxed containers Operator when the enablePeerPods parameter is true will not update the existing pod VM image automatically. To update the pod VM image after an upgrade, you must delete and re-create the KataConfig CR. Note You can also check the peer pod config map for AWS and Azure deployments to ensure that the image ID is empty before re-creating the KataConfig CR. 10.3.1. Deleting the KataConfig custom resource You can delete the KataConfig custom resource (CR) by using the command line. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure Delete the KataConfig CR by running the following command: $ oc delete kataconfig example-kataconfig Verify that the custom resource was deleted by running the following command: $ oc get kataconfig example-kataconfig Example output No example-kataconfig instances exist 10.3.2. Ensure peer pods CM image ID is empty When you delete the KataConfig CR, it should delete the peer pods CM image ID. For AWS and Azure deployments, check to ensure that the peer pods CM image ID is empty. Procedure Obtain the config map you created for the peer pods: $ oc get cm -n openshift-sandboxed-containers-operator peer-pods-cm -o jsonpath="{.data.AZURE_IMAGE_ID}" Use PODVM_AMI_ID for AWS. Use AZURE_IMAGE_ID for Azure. Check the status stanza of the YAML file. If the PODVM_AMI_ID parameter for AWS or the AZURE_IMAGE_ID parameter for Azure contains a value, set the value to "". If you have set the value to "", patch the peer pods config map: $ oc patch configmap peer-pods-cm -n openshift-sandboxed-containers-operator -p '{"data":{"AZURE_IMAGE_ID":""}}' Use PODVM_AMI_ID for AWS. Use AZURE_IMAGE_ID for Azure. 10.3.3.
Creating the KataConfig custom resource You must create the KataConfig custom resource (CR) to install kata-remote as a runtime class on your worker nodes. Creating the KataConfig CR triggers the OpenShift sandboxed containers Operator to do the following: Create a RuntimeClass CR named kata-remote with a default configuration. This enables users to configure workloads to use kata-remote as the runtime by referencing the CR in the RuntimeClassName field. This CR also specifies the resource overhead for the runtime. OpenShift sandboxed containers installs kata-remote as a secondary, optional runtime on the cluster and not as the primary runtime. Important Creating the KataConfig CR automatically reboots the worker nodes. The reboot can take from 10 to more than 60 minutes. Factors that can lengthen the reboot time are as follows: A larger OpenShift Container Platform deployment with a greater number of worker nodes. Activation of the BIOS and Diagnostics utility. Deployment on a hard disk drive rather than an SSD. Deployment on physical nodes such as bare metal, rather than on virtual nodes. A slow CPU and network. Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Create an example-kataconfig.yaml manifest file according to the following example: apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: example-kataconfig spec: enablePeerPods: true logLevel: info # kataConfigPoolSelector: # matchLabels: # <label_key>: '<label_value>' 1 Create the KataConfig CR by running the following command: $ oc apply -f example-kataconfig.yaml The new KataConfig CR is created and installs kata-remote as a runtime class on the worker nodes. Wait for the kata-remote installation to complete and the worker nodes to reboot before verifying the installation. Monitor the installation progress by running the following command: $ watch "oc describe kataconfig | sed -n /^Status:/,/^Events/p" When the status of all workers under kataNodes is installed and the condition InProgress is False without specifying a reason, kata-remote is installed on the cluster. Verify the daemon set by running the following command: $ oc get -n openshift-sandboxed-containers-operator ds/peerpodconfig-ctrl-caa-daemon Verify the runtime classes by running the following command: $ oc get runtimeclass Example output NAME HANDLER AGE kata kata 152m kata-remote kata-remote 152m
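For AWS deployments, the config map commands shown in section 10.3.2 take the AWS key instead; substituting the key name as noted there gives, for example:
oc get cm -n openshift-sandboxed-containers-operator peer-pods-cm -o jsonpath="{.data.PODVM_AMI_ID}"
oc patch configmap peer-pods-cm -n openshift-sandboxed-containers-operator -p '{"data":{"PODVM_AMI_ID":""}}'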
[ "oc delete kataconfig example-kataconfig", "oc get kataconfig example-kataconfig", "No example-kataconfig instances exist", "oc get cm -n openshift-sandboxed-containers-operator peer-pods-cm -o jsonpath=\"{.data.AZURE_IMAGE_ID}\"", "oc patch configmap peer-pods-cm -n openshift-sandboxed-containers-operator -p '{\"data\":{\"AZURE_IMAGE_ID\":\"\"}}'", "apiVersion: kataconfiguration.openshift.io/v1 kind: KataConfig metadata: name: example-kataconfig spec: enablePeerPods: true logLevel: info kataConfigPoolSelector: matchLabels: <label_key>: '<label_value>' 1", "oc apply -f example-kataconfig.yaml", "watch \"oc describe kataconfig | sed -n /^Status:/,/^Events/p\"", "oc get -n openshift-sandboxed-containers-operator ds/peerpodconfig-ctrl-caa-daemon", "oc get runtimeclass", "NAME HANDLER AGE kata kata 152m kata-remote kata-remote 152m" ]
https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.8/html/user_guide/upgrading
CLI tools
CLI tools OpenShift Container Platform 4.13 Learning how to use the command-line tools for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/cli_tools/index
14.22.7. Getting Information about a Virtual Network
14.22.7. Getting Information about a Virtual Network This command returns basic information about the network object. To get the network information, run:
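For example, to query the default network that libvirt typically creates (assuming it exists on your host), run virsh net-info default . The output lists the network name, UUID, and whether the network is active, persistent, and set to autostart, along with the associated bridge device.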
[ "virsh net-info network" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-virtual_networking_commands-getting_information_about_a_virtual_network
13.5. Changing the Index Sort Order
13.5. Changing the Index Sort Order By default, indexes are sorted alphabetically, in descending ASCII order. This is true for every attribute, even attributes which may have numeric attribute values like Integer or TelephoneNumber. It is possible to change the sort method by changing the matching rule set for the attribute. 13.5.1. Changing the Sort Order Using the Command Line To change the sort order using the command line, change the nsMatchingRule for the attribute index. For example:
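To verify the setting afterwards, you can read the index entry back with ldapsearch, for example (reusing the same index DN as in the ldapmodify example for this section):
ldapsearch -D "cn=Directory Manager" -W -x -b "cn=sn,cn=index,cn=Example1,cn=ldbm database,cn=plugins,cn=config" -s base "(objectclass=*)" nsMatchingRule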
[ "ldapmodify -D \"cn=Directory Manager\" -W -x dn: cn=sn,cn=index,cn=Example1,cn=ldbm database,cn=plugins,cn=config changetype:modify replace:nsMatchingRule nsMatchingRule: integerOrderingMatch" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/index-sort-order
Chapter 126. Velocity
Chapter 126. Velocity Since Camel 1.2 Only producer is supported The Velocity component allows you to process a message using an Apache Velocity template. This can be ideal when using a template to generate responses for requests. 126.1. Dependencies When using velocity with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-velocity-starter</artifactId> </dependency> 126.2. URI format Where templateName is the classpath-local URI of the template to invoke; or the complete URL of the remote template (for example, file://folder/myfile.vm ). 126.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 126.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 126.3.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 126.4. Component Options The Velocity component supports 5 options, which are listed below. Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean velocityEngine (advanced) To use the VelocityEngine otherwise a new engine is created. VelocityEngine 126.5. Endpoint Options The Velocity endpoint is configured using URI syntax: with the following path and query parameters: 126.5.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the resource. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 126.5.2. Query Parameters (7 parameters) Name Description Default Type allowContextMapAll (producer) Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API. false boolean allowTemplateFromHeader (producer) Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false boolean contentCache (producer) Sets whether to use resource content cache or not. false boolean encoding (producer) Character encoding of the resource content. String loaderCache (producer) Enables / disables the velocity resource loader cache which is enabled by default. true boolean propertiesFile (producer) The URI of the properties file which is used for VelocityEngine initialization. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 126.6. Message Headers The Velocity component supports 4 message header(s), which is/are listed below: Name Description Default Type CamelVelocityResourceUri (producer) Constant: VELOCITY_RESOURCE_URI The name of the velocity template. String CamelVelocityTemplate (producer) Constant: VELOCITY_TEMPLATE The content of the velocity template. 
String CamelVelocityContext (producer) Constant: VELOCITY_CONTEXT The velocity context to use. Context CamelVelocitySupplementalContext (producer) Constant: VELOCITY_SUPPLEMENTAL_CONTEXT To add additional information to the used VelocityContext. The value of this header should be a Map with key/values that will added (override any existing key with the same name). This can be used to pre setup some common key/values you want to reuse in your velocity endpoints. Map Headers set during the Velocity evaluation are returned to the message and added as headers. Then it is possible to return values from Velocity to the Message. For example, to set the header value of fruit in the Velocity template .tm : The fruit header is now accessible from the message.out.headers . 126.7. Velocity Context Camel will provide exchange information in the Velocity context (just a Map ). The Exchange is transfered as: key value exchange The Exchange itself. exchange.properties The Exchange properties. headers The headers of the In message. camelContext The Camel Context instance. request The In message. in The In message. body The In message body. out The Out message (only for InOut message exchange pattern). response The Out message (only for InOut message exchange pattern). You can setup a custom Velocity Context yourself by setting property allowTemplateFromHeader=true and setting the message header CamelVelocityContext just like this VelocityContext velocityContext = new VelocityContext(variableMap); exchange.getIn().setHeader("CamelVelocityContext", velocityContext); 126.8. Hot reloading The Velocity template resource is, by default, hot reloadable for both file and classpath resources (expanded jar). If you set contentCache=true , Camel will only load the resource once, and thus hot reloading is not possible. This scenario can be used in production, when the resource never changes. 126.9. Dynamic templates Since Camel 2.1 Camel provides two headers by which you can define a different resource location for a template or the template content itself. If any of these headers is set then Camel uses this over the endpoint configured resource. This allows you to provide a dynamic template at runtime. Header Type Description CamelVelocityResourceUri String A URI for the template resource to use instead of the endpoint configured. CamelVelocityTemplate String The template to use instead of the endpoint configured. 126.10. Samples For example, you can use: from("activemq:My.Queue"). to("velocity:com/acme/MyResponse.vm"); To use a Velocity template to formulate a response to a message for InOut message exchanges (where there is a JMSReplyTo header). If you want to use InOnly and consume the message and send it to another destination, you could use the following route: from("activemq:My.Queue"). to("velocity:com/acme/MyResponse.vm"). to("activemq:Another.Queue"); And to use the content cache, for example, for use in production, where the .vm template never changes: from("activemq:My.Queue"). to("velocity:com/acme/MyResponse.vm?contentCache=true"). to("activemq:Another.Queue"); And a file based resource: from("activemq:My.Queue"). to("velocity:file://myfolder/MyResponse.vm?contentCache=true"). to("activemq:Another.Queue"); It is possible to specify which template the component should use dynamically via a header, for example: from("direct:in"). setHeader("CamelVelocityResourceUri").constant("path/to/my/template.vm"). 
to("velocity:dummy?allowTemplateFromHeader=true""); It is possible to specify a template directly as a header the component should use dynamically via a header, so for example: from("direct:in"). setHeader("CamelVelocityTemplate").constant("Hi this is a velocity template that can do templating USD{body}"). to("velocity:dummy?allowTemplateFromHeader=true""); 126.11. The Email Sample In this sample, to use the Velocity templating for an order confirmation email. The email template is laid out in Velocity as: letter.vm And the java code (from an unit test): private Exchange createLetter() { Exchange exchange = context.getEndpoint("direct:a").createExchange(); Message msg = exchange.getIn(); msg.setHeader("firstName", "Claus"); msg.setHeader("lastName", "Ibsen"); msg.setHeader("item", "Camel in Action"); msg.setBody("PS: beer is on me, James"); return exchange; } @Test public void testVelocityLetter() throws Exception { MockEndpoint mock = getMockEndpoint("mock:result"); mock.expectedMessageCount(1); mock.message(0).body(String.class).contains("Thanks for the order of Camel in Action"); template.send("direct:a", createLetter()); mock.assertIsSatisfied(); } @Override protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from("direct:a") .to("velocity:org/apache/camel/component/velocity/letter.vm") .to("mock:result"); } }; } 126.12. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.velocity.allow-context-map-all Sets whether the context map should allow access to all details. By default only the message body and headers can be accessed. This option can be enabled for full access to the current Exchange and CamelContext. Doing so impose a potential security risk as this opens access to the full power of CamelContext API. false Boolean camel.component.velocity.allow-template-from-header Whether to allow to use resource template from header or not (default false). Enabling this allows to specify dynamic templates via message header. However this can be seen as a potential security vulnerability if the header is coming from a malicious user, so use this with care. false Boolean camel.component.velocity.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.velocity.enabled Whether to enable auto configuration of the velocity component. This is enabled by default. Boolean camel.component.velocity.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.velocity.velocity-engine To use the VelocityEngine otherwise a new engine is created. 
The option is a org.apache.velocity.app.VelocityEngine type. VelocityEngine
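When running on Red Hat build of Camel Spring Boot, these options can be set in application.properties (or its YAML equivalent); a minimal sketch using only property names from the table above:
camel.component.velocity.enabled=true
camel.component.velocity.allow-template-from-header=true
camel.component.velocity.lazy-start-producer=false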
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-velocity-starter</artifactId> </dependency>", "velocity:templateName[?options]", "velocity:resourceUri", "USDin.setHeader(\"fruit\", \"Apple\")", "VelocityContext velocityContext = new VelocityContext(variableMap); exchange.getIn().setHeader(\"CamelVelocityContext\", velocityContext);", "from(\"activemq:My.Queue\"). to(\"velocity:com/acme/MyResponse.vm\");", "from(\"activemq:My.Queue\"). to(\"velocity:com/acme/MyResponse.vm\"). to(\"activemq:Another.Queue\");", "from(\"activemq:My.Queue\"). to(\"velocity:com/acme/MyResponse.vm?contentCache=true\"). to(\"activemq:Another.Queue\");", "from(\"activemq:My.Queue\"). to(\"velocity:file://myfolder/MyResponse.vm?contentCache=true\"). to(\"activemq:Another.Queue\");", "from(\"direct:in\"). setHeader(\"CamelVelocityResourceUri\").constant(\"path/to/my/template.vm\"). to(\"velocity:dummy?allowTemplateFromHeader=true\"\");", "from(\"direct:in\"). setHeader(\"CamelVelocityTemplate\").constant(\"Hi this is a velocity template that can do templating USD{body}\"). to(\"velocity:dummy?allowTemplateFromHeader=true\"\");", "Dear USD{headers.lastName}, USD{headers.firstName} Thanks for the order of USD{headers.item}. Regards Camel Riders Bookstore USD{body}", "private Exchange createLetter() { Exchange exchange = context.getEndpoint(\"direct:a\").createExchange(); Message msg = exchange.getIn(); msg.setHeader(\"firstName\", \"Claus\"); msg.setHeader(\"lastName\", \"Ibsen\"); msg.setHeader(\"item\", \"Camel in Action\"); msg.setBody(\"PS: Next beer is on me, James\"); return exchange; } @Test public void testVelocityLetter() throws Exception { MockEndpoint mock = getMockEndpoint(\"mock:result\"); mock.expectedMessageCount(1); mock.message(0).body(String.class).contains(\"Thanks for the order of Camel in Action\"); template.send(\"direct:a\", createLetter()); mock.assertIsSatisfied(); } @Override protected RouteBuilder createRouteBuilder() { return new RouteBuilder() { public void configure() { from(\"direct:a\") .to(\"velocity:org/apache/camel/component/velocity/letter.vm\") .to(\"mock:result\"); } }; }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-velocity-component-starter
Chapter 28. Installing into a Disk Image
Chapter 28. Installing into a Disk Image This chapter describes the process of creating custom, bootable images of several different types, and other related topics. The image creation and installation process can be either performed manually in a procedure similar to a normal hard drive installation, or it can be automated using a Kickstart file and the livemedia-creator tool. Note Creating custom images using livemedia-creator is currently supported only on AMD64 and Intel 64 (x86_64) and IBM POWER (big endian) systems. Additionally, Red Hat only supports creating custom images of Red Hat Enterprise Linux 7. If you choose the manual approach, you will be able to perform the installation interactively, using the graphical installation program. The process is similar to installing using Red Hat Enterprise Linux bootable media and the graphical installation program; however, before you begin the installation, you must create one or more empty image files manually. Automated disk image installations using livemedia-creator are somewhat similar to Kickstart installations with network boot. To use this approach, you must prepare a valid Kickstart file, which will be used by livemedia-creator to perform the installation. The disk image file will be created automatically. Both approaches to disk image installations require a separate installation source. In most cases, the best approach is to use an ISO image of the binary Red Hat Enterprise Linux DVD. See Chapter 2, Downloading Red Hat Enterprise Linux for information about obtaining installation ISO images. Important It is not currently possible to use an installation ISO image of Red Hat Enterprise Linux without any additional preparation. The installation source for a disk image installation must be prepared the same way it would be prepared when performing a normal installation. See Section 3.3, "Preparing Installation Sources" for information about preparing installation sources. 28.1. Manual Disk Image Installation A manual installation into a disk image is performed by executing the Anaconda installation program on an existing system and specifying one or more disk image files as installation targets. Additional options can also be used to configure Anaconda further. A list of available options can be obtained by using the anaconda -h command. Warning Image installation using Anaconda is potentially dangerous, because it uses the installation program on an already installed system. While no bugs are known at this moment which could cause any problems, it is possible that this process could render the entire system unusable. Installation into disk images should always be performed on systems or virtual machines specifically reserved for this purpose, and not on systems containing any valuable data. This section provides information about creating empty disk images and using the Anaconda installation program to install Red Hat Enterprise Linux into these images. 28.1.1. Preparing a Disk Image The first step in manual disk image installation is creating one or more image files, which will later be used as installation targets similar to physical storage devices. On Red Hat Enterprise Linux, a disk image file can be created using the following command: Replace size with a value representing the size of the image (such as 10G or 5000M ), and name with the file name of the image to be created. 
For example, to create a disk image file named myimage.raw with the size of 30GB, use the following command: Note The fallocate command allows you to specify the size of the file to be created in different ways, depending on the suffix used. For details about specifying the size, see the fallocate(1) man page. The size of the disk image file you create will limit the maximum capacity of file systems created during the installation. The image must always have a minimum size of 3GB, but in most cases, the space requirements will be larger. The exact size you will need for your installation will vary depending on the software you want to install, the amount of swap space, and the amount of space you will need to be available after the installation. More details about partitioning are available in: Section 8.14.4.4, "Recommended Partitioning Scheme" for 64-bit AMD, Intel, and ARM systems Section 13.15.4.4, "Recommended Partitioning Scheme" for IBM Power Systems servers After you create one or more empty disk image files, continue with Section 28.1.2, "Installing Red Hat Enterprise Linux into a Disk Image" . 28.1.2. Installing Red Hat Enterprise Linux into a Disk Image Important Set Security Enhanced Linux ( SELinux ) to permissive (or disabled) mode before creating custom images with Anaconda . See Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide for information on setting SELinux modes. To start the installation into a disk image file, execute the following command as root : Replace /path/to/image/file with the full path to the image file you created earlier. After executing this command, Anaconda will start on your system. The installation interface will be the same as if you performed the installation normally (booting the system from Red Hat Enterprise Linux media), but the graphical installation will start directly, skipping the boot menu. This means that boot options must be specified as additional arguments to the anaconda command. You can view the full list of supported commands by executing anaconda -h on a command line. One of the most important options is --repo= , which allows you to specify an installation source. This option uses the same syntax as the inst.repo= boot option. See Section 23.1, "Configuring the Installation System at the Boot Menu" for more information. When you use the --image= option, only the disk image file specified will be available as the installation target. No other devices will be visible in the Installation Destination dialog. If you want to use multiple disk images, you must specify the --image= option separately for each image file. For example: The above command will start Anaconda , and in the Installation Destination screen, both image files specified will be available as installation targets. Optionally, you can also assign custom names to the disk image files used in the installation. To assign a name to a disk image file, append : name to the end of the disk image file name. For example, to use a disk image file located in /home/testuser/diskinstall/image1.raw and assign the name myimage to it, execute the following command:
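Putting these pieces together, a hypothetical end-to-end invocation could look like the following (the path is a placeholder and the --repo= URL stands for any prepared installation source, such as a network-served installation tree):
fallocate -l 30G /home/testuser/diskinstall/myimage.raw
anaconda --image=/home/testuser/diskinstall/myimage.raw:myimage --repo=http://192.0.2.1/rhel7-install/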
[ "fallocate -l size name", "fallocate -l 30G myimage.raw", "anaconda --image= /path/to/image/file", "anaconda --image=/home/testuser/diskinstall/image1.raw --image=/home/testuser/diskinstall/image2.raw", "anaconda --image=/home/testuser/diskinstall/image1.raw:myimage" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-disk-image-installation
Chapter 6. Next steps
Chapter 6. Next steps After you have installed AMQ Broker and created a standalone broker with the default configuration settings, you can configure it to meet your messaging requirements, connect messaging client applications to it, and monitor and manage it. Additional resources are available to help you complete these goals. Configuring the broker Use Configuring AMQ Broker to configure the broker to meet your requirements. You can configure: The broker to accept client connections The address space (including point-to-point and publish-subscribe messaging) Message persistence Broker resource consumption (including resource limits, message paging, and large message support) Duplicate message detection Logging Securing the broker Use Configuring AMQ Broker to implement any of the following methods to secure the broker: Guest/anonymous access control Basic user and password access control Certificate-based access control LDAP integration Kerberos integration Setting up clustering and high availability Use Configuring AMQ Broker to add additional brokers to form a broker cluster and increase messaging throughput. You can also configure high availability to increase messaging reliability. Creating messaging client applications Use AMQ Clients Overview to learn about AMQ Clients and how it can help you to create messaging client applications that connect to the broker and produce and consume messages. Monitoring and managing the broker Use Managing AMQ Broker to monitor and manage your broker (or brokers) once they are running.
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/getting_started_with_amq_broker/next-steps-getting-started
Part III. Installing and configuring Red Hat Process Automation Manager in a Red Hat JBoss EAP clustered environment
Part III. Installing and configuring Red Hat Process Automation Manager in a Red Hat JBoss EAP clustered environment As a system engineer, you can create a Red Hat Process Automation Manager clustered environment to provide high availability and load balancing for your development and runtime environments. Prerequisites You have reviewed the information in Planning a Red Hat Process Automation Manager installation .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/assembly-clustering-eap
Chapter 3. Simplify your JBoss EAP 8.0 migration with effective tools
Chapter 3. Simplify your JBoss EAP 8.0 migration with effective tools As a system administrator, you can simplify your migration process to JBoss EAP 8.0 with the help of two essential tools. The Migration Toolkit for Runtimes (MTR) analyzes your applications and provides detailed migration reports, whereas the JBoss Server Migration Tool updates your server configuration to include new features and settings. 3.1. Analyzing your applications before migration You can use Migration Toolkit for Runtimes (MTR) to analyze the code and architecture of your JBoss EAP 6.4 and 7 applications before you migrate them to JBoss EAP 8.0. The MTR rule set for migration to JBoss EAP 8.0 provides reports on XML descriptors, specific application code, and parameters that need to be replaced by an alternative configuration when migrating to JBoss EAP 8.0. MTR is an extensible and customizable rule-based set of tools that helps simplify migration of Java applications. MTR analyzes the APIs, technologies, and architectures used by the applications you plan to migrate, providing detailed migration reports for each application. These reports provide the following information: Detailed explanations of the necessary migration changes Whether the reported change is mandatory or optional Whether the reported change is complex or trivial Links to the code requiring the migration change Hints and links to information about how to make the required changes An estimate of the level of effort for each migration issue found and the total estimated effort to migrate the application Additional resources Migration Toolkit for Runtimes 3.2. Simplify your server configuration migration The JBoss Server Migration Tool is the preferred method for updating your server configuration to include the new features and settings in JBoss EAP 8.0 while keeping your existing configuration. The JBoss Server Migration Tool reads your existing JBoss EAP server configuration files and adds configurations for any new subsystems, updates the existing subsystem configurations with new features, and removes any obsolete subsystem configurations. You can use the JBoss Server Migration Tool to migrate standalone servers and managed domains. 3.2.1. Migrating to JBoss EAP 8.0 The JBoss Server Migration Tool supports migration from all releases of JBoss EAP version 7 to JBoss EAP 8.0. Note If you want to migrate from JBoss EAP 6.4, you must first migrate to the latest Cumulative Patch (CP) version of JBoss EAP 7.4. For more information, see JBoss EAP 7.4 Migration Guide . Subsequently, you can migrate from the JBoss EAP 7.4 CP version to JBoss EAP 8.0. Prerequisites JBoss EAP is not running. Procedure Download the tool from the JBoss EAP download page . Extract the downloaded archive. Navigate to the MIGRATION_TOOL_HOME/bin directory. Execute the jboss-server-migration script. For Red Hat Enterprise Linux: For Microsoft Windows: Note Replace EAP_PREVIOUS_HOME and EAP_NEW_HOME with the actual paths to the previous and new installations of JBoss EAP. Additional resources Using the JBoss Server Migration Tool
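For example, with hypothetical installation paths on Red Hat Enterprise Linux (replace them with your actual directories):
cd MIGRATION_TOOL_HOME/bin
./jboss-server-migration.sh --source /opt/jboss-eap-7.4 --target /opt/jboss-eap-8.0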
[ "unzip <NAME_OF_THE_FILE>", "./jboss-server-migration.sh --source EAP_PREVIOUS_HOME --target EAP_NEW_HOME", "jboss-server-migration.bat --source EAP_PREVIOUS_HOME --target EAP_NEW_HOME" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/migration_guide/simplify-your-jboss-eap-8-migration-with-effective-tools_default
Chapter 4. Updating Red Hat build of OpenJDK 8 on RHEL
Chapter 4. Updating Red Hat build of OpenJDK 8 on RHEL The following sections provide instructions for updating Red Hat build of OpenJDK 8 on RHEL. 4.1. Updating Red Hat build of OpenJDK 8 on RHEL by using yum The installed Red Hat build of OpenJDK packages can be updated using the yum system package manager. Prerequisites You must have root privileges on the system. Procedure Check the current Red Hat build of OpenJDK version: A list of installed Red Hat build of OpenJDK packages displays. Update a specific package. For example: Verify that the update worked by checking the current Red Hat build of OpenJDK versions: Note If the output from the command shows that you have a different major version of Red Hat build of OpenJDK checked out on your system, you can enter the following command in your CLI to switch your system to use Red Hat build of OpenJDK 8: 4.2. Updating Red Hat build of OpenJDK 8 on RHEL by using an archive You can update Red Hat build of OpenJDK using the archive. This is useful if the Red Hat build of OpenJDK administrator does not have root privileges. Prerequisites Know the generic path pointing to your JDK or JRE installation. For example, ~/jdks/java-8 Procedure Remove the existing symbolic link of the generic path to your JDK or JRE. For example: Install the latest version of the JDK or JRE in your installation location. Additional resources For instructions on installing a JRE, see Installing a JRE on RHEL using an archive . For instructions on installing a JDK, see Installing Red Hat build of OpenJDK on RHEL using an archive . Revised on 2024-05-10 09:08:29 UTC
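As a rough end-to-end sketch, the yum-based update procedure combines the commands listed below; the package names and version strings reported will differ on your system:

sudo yum list installed "java*"           # check which OpenJDK packages are installed
sudo yum update java-1.8.0-openjdk        # update the OpenJDK 8 packages
java -version                             # confirm that the expected version is reported
sudo update-alternatives --config 'java'  # switch to OpenJDK 8 if another major version is active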
[ "sudo yum list installed \"java*\"", "Installed Packages java-1.8.0-openjdk.x86_64 1:1.8.0.322.b06-2.el8_5 @rhel-8-for-x86_64-appstream-rpms java-11-openjdk.x86_64 1:11.0.14.0.9-2.el8_5 @rhel-8-for-x86_64-appstream-rpms java-17-openjdk.x86_64 1:17.0.2.0.8-4.el8_5 @rhel-8-for-x86_64-appstream-rpms", "sudo yum update java-1.8.0-openjdk", "java -version openjdk version \"1.8.0_322\" OpenJDK Runtime Environment (build 1.8.0_322-b06) OpenJDK 64-Bit Server VM (build 25.322-b06, mixed mode)", "sudo update-alternatives --config 'java'", "unlink ~/jdks/java-8" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/installing_and_using_red_hat_build_of_openjdk_8_for_rhel/updating-openjdk11-on-rhel8_openjdk
Chapter 17. Storage
Chapter 17. Storage New kernel subsystem: libnvdimm This update adds libnvdimm , a kernel subsystem responsible for the detection, configuration, and management of Non-Volatile Dual Inline Memory Modules (NVDIMMs). As a result, if NVDIMMs are present in the system, they are exposed through the /dev/pmem* device nodes and can be configured using the ndctl utility. (BZ#1269626) Hardware with NVDIMM support At the time of the Red Hat Enterprise Linux 7.3 release, a number of original equipment manufacturers (OEMs) are in the process of adding support for Non-Volatile Dual Inline Memory Module (NVDIMM) hardware. As these products are introduced in the market, Red Hat will work with these OEMs to test these configurations and, if possible, announce support for them on Red Hat Enterprise Linux 7.3. Since this is a new technology, a specific support statement will be issued for each product and supported configuration. This will be done after successful Red Hat testing, and corresponding documented support by the OEM. The currently supported NVDIMM products are: HPE NVDIMM on HPE ProLiant systems. For specific configurations, see Hewlett Packard Enterprise Company support statements. NVDIMM products and configurations that are not on this list are not supported. The Red Hat Enterprise Linux 7.3 Release Notes will be updated as NVDIMM products are added to the list of supported products. (BZ#1389121) New packages: nvml The nvml packages contain the Non-Volatile Memory Library (NVML), a collection of libraries for using memory-mapped persistence, optimized specifically for persistent memory. (BZ#1274541) SCSI now supports multiple hardware queues The nr_hw_queues field is now present in the Scsi_Host structure, which allows drivers to use the field. (BZ#1308703) The exclusive_pref_bit optional argument has been added to the multipath ALUA prioritizer If the exclusive_pref_bit argument is added to the multipath Asymmetric Logical Unit Access (ALUA) prioritizer, and a path has the Target Port Group Support (TPGS) pref bit set, multipath makes a path group using only that path and assigns the highest priority to the path. Users can now either allow the preferred path to be in a path group with other paths that are equally optimized, which is the default option, or in a path group by itself by adding the exclusive_pref_bit argument. (BZ# 1299652 ) multipathd now supports raw format mode in multipathd formatted output commands The multipathd formatted output commands now offer raw format mode, which removes the headers and additional padding between fields. Support for additional format wildcards has been added as well. Raw format mode makes it easier to collect and parse information about multipath devices, particularly for use in scripting. (BZ# 1299651 ) Improved LVM locking infrastructure lvmlockd is a next-generation locking infrastructure for LVM. It allows LVM to safely manage shared storage from multiple hosts, using either the dlm or sanlock lock managers. sanlock allows lvmlockd to coordinate hosts through storage-based locking, without the need for an entire cluster infrastructure. For more information, see the lvmlockd(8) man page. This feature was originally introduced in Red Hat Enterprise Linux 7.2 as a Technology Preview. In Red Hat Enterprise Linux 7.3, lvmlockd is fully supported. (BZ# 1299977 ) Support for caching thinly-provisioned logical volumes with limitations Red Hat Enterprise Linux 7.3 provides the ability to cache thinly provisioned logical volumes.
This brings caching benefits to all the thin logical volumes associated with a particular thin pool. However, when thin pools are set up in this way, it is not currently possible to grow the thin pool without removing the cache layer first. This also means that thin pool auto-grow features are unavailable. Users should take care to monitor the fullness and consumption rate of their thin pools to avoid running out of space. Refer to the lvmthin(7) man page for information on thinly-provisioned logical volumes and the lvmcache(7) man page for information on LVM cache volumes. (BZ# 1371597 ) device-mapper-persistent-data rebased to version 0.6.2 The device-mapper-persistent-data packages have been upgraded to upstream version 0.6.2, which provides a number of bug fixes and enhancements over the previous version. Notably, the thin_ls tool, which can provide information about thin volumes in a pool, is now available. (BZ# 1315452 ) Support for DIF/DIX (T10 PI) on specified hardware SCSI T10 DIF/DIX is fully supported in Red Hat Enterprise Linux 7.3, provided that the hardware vendor has qualified it and provides full support for the particular HBA and storage array configuration. DIF/DIX is not supported on other configurations, it is not supported for use on the boot device, and it is not supported on virtualized guests. At the current time, the following vendors are known to provide this support. FUJITSU supports DIF and DIX on: EMULEX 16G FC HBA: EMULEX LPe16000/LPe16002, 10.2.254.0 BIOS, 10.4.255.23 FW, with: FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3, AF250, AF650 QLOGIC 16G FC HBA: QLOGIC QLE2670/QLE2672, 3.28 BIOS, 8.00.00 FW, with: FUJITSU ETERNUS DX100 S3, DX200 S3, DX500 S3, DX600 S3, DX8100 S3, DX8700 S3, DX8900 S3, DX200F, DX60 S3 Note that T10 DIX requires a database or some other software that provides generation and verification of checksums on disk blocks. No currently supported Linux file systems have this capability. EMC supports DIF on: EMULEX 8G FC HBA: LPe12000-E and LPe12002-E with firmware 2.01a10 or later, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later EMULEX 16G FC HBA: LPe16000B-E and LPe16002B-E with firmware 10.0.803.25 or later, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later QLOGIC 16G FC HBA: QLE2670-E-SP and QLE2672-E-SP, with: EMC VMAX3 Series with Enginuity 5977; EMC Symmetrix VMAX Series with Enginuity 5876.82.57 and later Please refer to the hardware vendor's support information for the latest status. Support for DIF/DIX remains in Technology Preview for other HBAs and storage arrays. (BZ#1379689) iprutils rebased to version 2.4.13 The iprutils packages have been upgraded to upstream version 2.4.13, which provides a number of bug fixes and enhancements over the previous version. Notably, this update adds support for enabling an adapter write cache on 8247-22L and 8247-21L base Serial Attached SCSI (SAS) backplanes to provide significant performance improvements. (BZ#1274367) The multipathd command can now display the multipath data with JSON formatting With this release, multipathd now includes the show maps json command to display the multipath data with JSON formatting. This makes it easier for other programs to parse the multipathd show maps output. (BZ# 1353357 ) Default configuration added for Huawei XSG1 arrays With this release, multipath provides a default configuration for Huawei XSG1 arrays.
(BZ#1333331) Multipath now includes support for Ceph RADOS block devices. RBD devices need special uid handling and their own checker function with the ability to repair devices. With this release, it is now possible to run multipath on top of RADOS block devices. Note, however, that the multipath RBD support should be used only when an RBD image with the exclusive-lock feature enabled is being shared between multiple clients. (BZ# 1348372 ) Support added for PURE FlashArray With this release, multipath has added built-in configuration support for the PURE FlashArray. (BZ# 1300415 ) Default configuration added for the MSA 2040 array With this release, multipath provides a default configuration for the MSA 2040 array. (BZ#1341748) New skip_kpartx configuration option to allow skipping kpartx partition creation The skip_kpartx option has been added to the defaults, devices, and multipaths sections of the multipath.conf file. When this option is set to yes , multipath devices that are configured with skip_kpartx will not have any partition devices created for them. This allows users to create a multipath device without creating partitions, even if the device has a partition table. The default value of this option is no . (BZ# 1311659 ) Multipath's weightedpath prioritizer now supports a wwn keyword The multipath weightedpath prioritizer now supports a wwn keyword. If this is used, the regular expression for matching the device is of the form host_wwnn:host_wwpn:target_wwnn:target_wwpn . These identifiers can either be looked up through sysfs or using the following multipathd show paths format wildcards: %N:%R:%n:%r . The weightedpath prioritizer previously only allowed HBTL and device name regex matching. Neither of these is persistent across reboots, so the weightedpath prioritizer arguments needed to be changed after every boot. This feature provides a way to use the weightedpath prioritizer with persistent device identifiers. (BZ#1297456) New packages: nvme-cli The nvme-cli packages provide the Non-Volatile Memory Express (NVMe) command-line interface to manage and configure NVMe controllers. (BZ#1344730) LVM2 now displays a warning message when autoresize is not configured The thin pool default behavior is not to autoresize the thin pool when the space is going to be exhausted. Exhausting the space can have various negative consequences. When the user is not using autoresize and the thin pool becomes full, a new warning message notifies the user about possible problems so that they can take appropriate actions, such as resizing the thin pool or stopping use of the thin volume. (BZ# 1189221 ) dmstats now supports mapping of files to dmstats regions The --filemap option of the dmstats command now allows the user to easily configure dmstats regions to track I/O operations to a specified file in the file system. Previously, I/O statistics were only available for a whole device, or a region of a device, which limited administrator insight into I/O performance on a per-file basis. Now, the --filemap option enables the user to inspect file I/O performance using the same tools used for any device-mapper device. (BZ# 1286285 ) LVM no longer applies LV policies on external volumes Previously, LVM disruptively applied its own policy for LVM thin logical volumes (LVs) on external volumes as well, which could result in unexpected behavior. With this update, external users of thin pools can use their own management of external thin volumes, and LVM no longer applies LV policies on such volumes.
(BZ# 1329235 ) The thin pool is now always checked for sufficient space when creating a new thin volume Even when the user does not use autoresize with thin pool monitoring, the thin pool is now always checked for sufficient space when creating a new thin volume. New thin volumes now cannot be created in the following situations: The thin pool has reached 100% of the data volume capacity. There is less than 25% of thin pool metadata free space for metadata smaller than 16 MiB. There is less than 4 MiB of free space in metadata. (BZ# 1348336 ) LVM can now set the maximum number of cache pool chunks The new LVM allocation parameter in the allocation section of the lvm.conf file, cache_pool_max_chunks , limits the maximum number of cache pool chunks. When this parameter is undefined or set to 0, the built-in defaults are used. (BZ# 1364244 ) Support for ability to uncouple a cache pool from a logical volume LVM now has the ability to uncouple a cache pool from a logical volume if a device in the cache pool has failed. Previously, this type of failure would require manual intervention and complicated alterations to LVM metadata in order to separate the cache pool from the origin logical volume. To uncouple a logical volume from its cache pool, use the following command: Note the following limitations: The cache logical volume must be inactive (this may require a reboot). A writeback cache requires the --force option due to the possibility of abandoning data lost to failure. (BZ# 1131777 ) LVM can now track and display thin snapshot logical volumes that have been removed You can now configure your system to track thin snapshot logical volumes that have been removed by enabling the record_lvs_history metadata option in the lvm.conf configuration file. This allows you to display a full thin snapshot dependency chain that includes logical volumes that have been removed from the original dependency chain and have become historical logical volumes. The full dependency chain, including historical LVs, can be displayed with the new lv_full_ancestors and lv_full_descendants reporting fields. For information on configuring and displaying historical logical volumes, see Logical Volume Administration . (BZ# 1240549 )
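As a brief sketch of how some of the features above are exercised from the shell, the following commands can be run; the volume group and logical volume names are hypothetical:

multipathd show maps json                    # multipath map data in JSON format
multipathd show paths format "%N:%R:%n:%r"   # the WWN wildcards used by the weightedpath prioritizer
lvconvert --uncache vg00/cachedlv            # uncouple a cache pool from a logical volume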
[ "lvconvert --uncache *vg*/*lv*" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/new_features_storage
Chapter 2. Creating C or C++ Applications
Chapter 2. Creating C or C++ Applications Red Hat offers multiple tools for creating applications using the C and C++ languages. This part of the book lists some of the most common development tasks. 2.1. GCC in RHEL 9 Red Hat Enterprise Linux 9 is distributed with GCC 11 as the standard compiler. The default language standard setting for GCC 11 is C++17. This is equivalent to explicitly using the command-line option -std=gnu++17 . Later language standards, such as C++20 and so on, and library features introduced in these later language standards are still considered experimental. Additional resources Porting to GCC 11 Porting your code to C++17 with GCC 11 2.2. Building Code with GCC Learn about situations where source code must be transformed into executable code. 2.2.1. Relationship between code forms Prerequisites Understanding the concepts of compiling and linking Possible code forms The C and C++ languages have three forms of code: Source code written in the C or C++ language, present as plain text files. The files typically use extensions such as .c , .cc , .cpp , .h , .hpp , .i , .inc . For a complete list of supported extensions and their interpretation, see the gcc manual pages: Object code , created by compiling the source code with a compiler . This is an intermediate form. The object code files use the .o extension. Executable code , created by linking object code with a linker . Linux application executable files do not use any file name extension. Shared object (library) executable files use the .so file name extension. Note Library archive files for static linking also exist. This is a variant of object code that uses the .a file name extension. Static linking is not recommended. See Section 2.3.2, "Static and dynamic linking" . Handling of code forms in GCC Producing executable code from source code is performed in two steps, which require different applications or tools. GCC can be used as an intelligent driver for both compilers and linkers. This allows you to use a single gcc command for any of the required actions (compiling and linking). GCC automatically selects the actions and their sequence: Source files are compiled to object files. Object files and libraries are linked (including the previously compiled sources). It is possible to run GCC so that it performs only compiling, only linking, or both compiling and linking in a single step. This is determined by the types of inputs and requested type of output(s). Because larger projects require a build system which usually runs GCC separately for each action, it is better to always consider compilation and linking as two distinct actions, even if GCC can perform both at once. 2.2.2. Compiling source files to object code To create object code files from source files and not an executable file immediately, GCC must be instructed to create only object code files as its output. This action represents the basic operation of the build process for larger projects. Prerequisites C or C++ source code file(s) GCC installed on the system Procedure Change to the directory containing the source code file(s). Run gcc with the -c option: Object files are created, with their file names reflecting the original source code files: source.c results in source.o . Note With C++ source code, replace the gcc command with g++ for convenient handling of C++ Standard Library dependencies. 2.2.3. Enabling debugging of C and C++ applications with GCC Because debugging information is large, it is not included in executable files by default. 
To enable debugging of your C and C++ applications with it, you must explicitly instruct the compiler to create it. To enable creation of debugging information with GCC when compiling and linking code, use the -g option: Optimizations performed by the compiler and linker can result in executable code which is hard to relate to the original source code: variables may be optimized out, loops unrolled, operations merged into the surrounding ones, and so on. This affects debugging negatively. For improved debugging experience, consider setting the optimization with the -Og option. However, changing the optimization level changes the executable code and may change the actual behaviour including removing some bugs. To also include macro definitions in the debug information, use the -g3 option instead of -g . The -fcompare-debug GCC option tests code compiled by GCC with debug information and without debug information. The test passes if the resulting two binary files are identical. This test ensures that executable code is not affected by any debugging options, which further ensures that there are no hidden bugs in the debug code. Note that using the -fcompare-debug option significantly increases compilation time. See the GCC manual page for details about this option. Additional resources Using the GNU Compiler Collection (GCC) - Options for Debugging Your Program Debugging with GDB - Debugging Information in Separate Files The GCC manual page: 2.2.4. Code optimization with GCC A single program can be transformed into more than one sequence of machine instructions. You can achieve a more optimal result if you allocate more resources to analyzing the code during compilation. With GCC, you can set the optimization level using the -O level option. This option accepts a set of values in place of the level . Level Description 0 Optimize for compilation speed - no code optimization (default). 1 , 2 , 3 Optimize to increase code execution speed (the larger the number, the greater the speed). s Optimize for file size. fast Same as a level 3 setting, plus fast disregards strict standards compliance to allow for additional optimizations. g Optimize for debugging experience. For release builds, use the optimization option -O2 . During development, the -Og option is useful for debugging the program or library in some situations. Because some bugs manifest only with certain optimization levels, test the program or library with the release optimization level. GCC offers a large number of options to enable individual optimizations. For more information, see the following Additional resources. Additional resources Using GNU Compiler Collection - Options That Control Optimization Linux manual page for GCC: 2.2.5. Options for hardening code with GCC When the compiler transforms source code to object code, it can add various checks to prevent commonly exploited situations and increase security. Choosing the right set of compiler options can help produce more secure programs and libraries, without having to change the source code. Release version options The following list of options is the recommended minimum for developers targeting Red Hat Enterprise Linux: For programs, add the -fPIE and -pie Position Independent Executable options. For dynamically linked libraries, the mandatory -fPIC (Position Independent Code) option indirectly increases security. Development options Use the following options to detect security flaws during development. 
Use these options in conjunction with the options for the release version: Additional resources Defensive Coding Guide Memory Error Detection Using GCC - Red Hat Developers Blog post 2.2.6. Linking code to create executable files Linking is the final step when building a C or C++ application. Linking combines all object files and libraries into an executable file. Prerequisites One or more object file(s) GCC must be installed on the system Procedure Change to the directory containing the object code file(s). Run gcc : An executable file named executable-file is created from the supplied object files and libraries. To link additional libraries, add the required options after the list of object files. Note With C++ source code, replace the gcc command with g++ for convenient handling of C++ Standard Library dependencies. 2.2.7. Example: Building a C program with GCC (compiling and linking in one step) This example shows the exact steps to build a simple sample C program. In this example, compiling and linking the code is done in one step. Prerequisites You must understand how to use GCC. Procedure Create a directory hello-c and change to it: Create file hello.c with the following contents: #include <stdio.h> int main() { printf("Hello, World!\n"); return 0; } Compile and link the code with GCC: This compiles the code, creates the object file hello.o , and links the executable file helloworld from the object file. Run the resulting executable file: 2.2.8. Example: Building a C program with GCC (compiling and linking in two steps) This example shows the exact steps to build a simple sample C program. In this example, compiling and linking the code are two separate steps. Prerequisites You must understand how to use GCC. Procedure Create a directory hello-c and change to it: Create file hello.c with the following contents: #include <stdio.h> int main() { printf("Hello, World!\n"); return 0; } Compile the code with GCC: The object file hello.o is created. Link an executable file helloworld from the object file: Run the resulting executable file: 2.2.9. Example: Building a C++ program with GCC (compiling and linking in one step) This example shows the exact steps to build a sample minimal C++ program. In this example, compiling and linking the code is done in one step. Prerequisites You must understand the difference between gcc and g++ . Procedure Create a directory hello-cpp and change to it: Create file hello.cpp with the following contents: #include <iostream> int main() { std::cout << "Hello, World!\n"; return 0; } Compile and link the code with g++ : This compiles the code, creates the object file hello.o , and links the executable file helloworld from the object file. Run the resulting executable file: 2.2.10. Example: Building a C++ program with GCC (compiling and linking in two steps) This example shows the exact steps to build a sample minimal C++ program. In this example, compiling and linking the code are two separate steps. Prerequisites You must understand the difference between gcc and g++ . Procedure Create a directory hello-cpp and change to it: Create file hello.cpp with the following contents: #include <iostream> int main() { std::cout << "Hello, World!\n"; return 0; } Compile the code with g++ : The object file hello.o is created. Link an executable file helloworld from the object file: Run the resulting executable file: 2.3. Using Libraries with GCC Learn about using libraries in code. 2.3.1. 
Library naming conventions A special file name convention is used for libraries: a library known as foo is expected to exist as file lib foo .so or lib foo .a . This convention is automatically understood by the linking input options of GCC, but not by the output options: When linking against the library, the library can be specified only by its name foo with the -l option as -l foo : When creating the library, the full file name lib foo .so or lib foo .a must be specified. 2.3.2. Static and dynamic linking Developers have a choice of using static or dynamic linking when building applications with fully compiled languages. It is important to understand the differences between static and dynamic linking, particularly in the context using the C and C++ languages on Red Hat Enterprise Linux. To summarize, Red Hat discourages the use of static linking in applications for Red Hat Enterprise Linux. Comparison of static and dynamic linking Static linking makes libraries part of the resulting executable file. Dynamic linking keeps these libraries as separate files. Dynamic and static linking can be compared in a number of ways: Resource use Static linking results in larger executable files which contain more code. This additional code coming from libraries cannot be shared across multiple programs on the system, increasing file system usage and memory usage at run time. Multiple processes running the same statically linked program will still share the code. On the other hand, static applications need fewer run-time relocations, leading to reduced startup time, and require less private resident set size (RSS) memory. Generated code for static linking can be more efficient than for dynamic linking due to the overhead introduced by position-independent code (PIC). Security Dynamically linked libraries which provide ABI compatibility can be updated without changing the executable files depending on these libraries. This is especially important for libraries provided by Red Hat as part of Red Hat Enterprise Linux, where Red Hat provides security updates. Static linking against any such libraries is strongly discouraged. Compatibility Static linking appears to provide executable files independent of the versions of libraries provided by the operating system. However, most libraries depend on other libraries. With static linking, this dependency becomes inflexible and as a result, both forward and backward compatibility is lost. Static linking is guaranteed to work only on the system where the executable file was built. Warning Applications linking statically libraries from the GNU C library ( glibc ) still require glibc to be present on the system as a dynamic library. Furthermore, the dynamic library variant of glibc available at the application's run time must be a bitwise identical version to that present while linking the application. As a result, static linking is guaranteed to work only on the system where the executable file was built. Support coverage Most static libraries provided by Red Hat are in the CodeReady Linux Builder channel and not supported by Red Hat. Functionality Some libraries, notably the GNU C Library ( glibc ), offer reduced functionality when linked statically. For example, when statically linked, glibc does not support threads and any form of calls to the dlopen() function in the same program. As a result of the listed disadvantages, static linking should be avoided at all costs, particularly for whole applications and the glibc and libstdc++ libraries. 
Cases for static linking Static linking might be a reasonable choice in some cases, such as: Using a library which is not enabled for dynamic linking. Fully static linking can be required for running code in an empty chroot environment or container. However, static linking using the glibc-static package is not supported by Red Hat. 2.3.3. Link time optimization Link time optimization (LTO) enables the compiler to perform various optimizations across all translation units of your program by using its intermediate representation at link time. As a result, your executable files and libraries are smaller and run faster. Also, you can analyze package source code at compile time more thoroughly by using LTO, which improves various GCC diagnostics for potential coding errors. Known issues Violating the One Definition Rule (ODR) produces a -Wodr warning Violations of the ODR resulting in undefined behavior produce a -Wodr warning. This usually points to a bug in your program. The -Wodr warning is enabled by default. LTO causes increased memory consumption The compiler consumes more memory when it processes the translation units the program consists of. On systems with limited memory, disable LTO or lower the parallelism level when building your program. GCC removes seemingly unused functions GCC may remove functions it considers unused because the compiler is unable to recognize which symbols an asm() statement references. A compilation error may occur as a result. To prevent this, add __attribute__((used)) to the symbols you use in your program. Compiling with the -fPIC option causes errors Because GCC does not parse the contents of asm() statements, compiling your code with the -fPIC command-line option can cause errors. To prevent this, use the -fno-lto option when compiling your translation unit. More information is available at LTO FAQ - Symbol usage from assembly language . Symbol versioning by using the .symver directive is not compatible with LTO Implementing symbol versioning by using the .symver directive in an asm() statement is not compatible with LTO. However, it is possible to implement symbol versioning using the symver attribute. For example: __attribute__ ((_symver_ ("< symbol >@VERS_1"))) void < symbol >_v1 (void) { } Additional resources GCC Manual - Function Attributes GCC Wiki - Link Time Optimization 2.3.4. Using a library with GCC A library is a package of code which can be reused in your program. A C or C++ library consists of two parts: The library code Header files Compiling code that uses a library The header files describe the interface of the library: the functions and variables available in the library. Information from the header files is needed for compiling the code. Typically, header files of a library will be placed in a different directory than your application's code. To tell GCC where the header files are, use the -I option: Replace include_path with the actual path to the header file directory. The -I option can be used multiple times to add multiple directories with header files. When looking for a header file, these directories are searched in the order of their appearance in the -I options. Linking code that uses a library When linking the executable file, both the object code of your application and the binary code of the library must be available. The code for static and dynamic libraries is present in different forms: Static libraries are available as archive files. They contain a group of object files. The archive file has a file name extension .a . 
Dynamic libraries are available as shared objects. They are a form of an executable file. A shared object has a file name extension .so . To tell GCC where the archives or shared object files of a library are, use the -L option: Replace library_path with the actual path to the library directory. The -L option can be used multiple times to add multiple directories. When looking for a library, these directories are searched in the order of their -L options. The order of options matters: GCC cannot link against a library foo unless it knows the directory with this library. Therefore, use the -L options to specify library directories before using the -l options for linking against libraries. Compiling and linking code which uses a library in one step When the situation allows the code to be compiled and linked in one gcc command, use the options for both situations mentioned above at once. Additional resources Using the GNU Compiler Collection (GCC) - Options for Directory Search Using the GNU Compiler Collection (GCC) - Options for Linking 2.3.5. Using a static library with GCC Static libraries are available as archives containing object files. After linking, they become part of the resulting executable file. Note Red Hat discourages use of static linking for security reasons. See Section 2.3.2, "Static and dynamic linking" . Use static linking only when necessary, especially against libraries provided by Red Hat. Prerequisites GCC must be installed on your system. You must understand static and dynamic linking. You have a set of source or object files forming a valid program, requiring some static library foo and no other libraries. The foo library is available as a file libfoo.a , and no file libfoo.so is provided for dynamic linking. Note Most libraries which are part of Red Hat Enterprise Linux are supported for dynamic linking only. The steps below only work for libraries which are not enabled for dynamic linking. Procedure To link a program from source and object files, adding a statically linked library foo , which is to be found as a file libfoo.a : Change to the directory containing your code. Compile the program source files with headers of the foo library: Replace header_path with a path to a directory containing the header files for the foo library. Link the program with the foo library: Replace library_path with a path to a directory containing the file libfoo.a . To run the program later, simply: Warning The -static GCC option related to static linking forbids all dynamic linking. Instead, use the -Wl,-Bstatic and -Wl,-Bdynamic options to control linker behavior more precisely. See Section 2.3.7, "Using both static and dynamic libraries with GCC" . 2.3.6. Using a dynamic library with GCC Dynamic libraries are available as standalone executable files, required at both linking time and run time. They stay independent of your application's executable file. Prerequisites GCC must be installed on the system. A set of source or object files forming a valid program, requiring some dynamic library foo and no other libraries. The foo library must be available as a file libfoo.so . Linking a program against a dynamic library To link a program against a dynamic library foo : When a program is linked against a dynamic library, the resulting program must always load the library at run time. 
There are two options for locating the library: Using a rpath value stored in the executable file itself Using the LD_LIBRARY_PATH variable at run time Using a rpath Value Stored in the Executable File The rpath is a special value saved as a part of an executable file when it is being linked. Later, when the program is loaded from its executable file, the runtime linker will use the rpath value to locate the library files. While linking with GCC , to store the path library_path as rpath : The path library_path must point to a directory containing the file libfoo.so . Important Do not add a space after the comma in the -Wl,-rpath= option. To run the program later: Using the LD_LIBRARY_PATH environment variable If no rpath is found in the program's executable file, the runtime linker will use the LD_LIBRARY_PATH environment variable. The value of this variable must be changed for each program. This value should represent the path where the shared library objects are located. To run the program without rpath set, with libraries present in path library_path : Leaving out the rpath value offers flexibility, but requires setting the LD_LIBRARY_PATH variable every time the program is to run. Placing the Library into the Default Directories The runtime linker configuration specifies a number of directories as a default location of dynamic library files. To use this default behaviour, copy your library to the appropriate directory. A full description of the dynamic linker behavior is out of scope of this document. For more information, see the following resources: Linux manual pages for the dynamic linker: Contents of the /etc/ld.so.conf configuration file: Report of the libraries recognized by the dynamic linker without additional configuration, which includes the directories: 2.3.7. Using both static and dynamic libraries with GCC Sometimes it is required to link some libraries statically and some dynamically. This situation brings some challenges. Prerequisites Understanding static and dynamic linking Introduction gcc recognizes both dynamic and static libraries. When the -l foo option is encountered, gcc will first attempt to locate a shared object (a .so file) containing a dynamically linked version of the foo library, and then look for the archive file ( .a ) containing a static version of the library. Thus, the following situations can result from this search: Only the shared object is found, and gcc links against it dynamically. Only the archive is found, and gcc links against it statically. Both the shared object and archive are found, and by default, gcc selects dynamic linking against the shared object. Neither shared object nor archive is found, and linking fails. Because of these rules, the best way to select the static or dynamic version of a library for linking is having only that version found by gcc . This can be controlled to some extent by using or leaving out directories containing the library versions, when specifying the -L path options. Additionally, because dynamic linking is the default, the only situation where linking must be explicitly specified is when a library with both versions present should be linked statically. There are two possible resolutions: Specifying the static libraries by file path instead of the -l option Using the -Wl option to pass options to the linker Specifying the static libraries by file Usually, gcc is instructed to link against the foo library with the -l foo option. 
However, it is possible to specify the full path to file lib foo .a containing the library instead: From the file extension .a , gcc will understand that this is a library to link with the program. However, specifying the full path to the library file is a less flexible method. Using the -Wl option The gcc option -Wl is a special option for passing options to the underlying linker. Syntax of this option differs from the other gcc options. The -Wl option is followed by a comma-separated list of linker options, while other gcc options require space-separated list of options. The ld linker used by gcc offers the options -Bstatic and -Bdynamic to specify whether libraries following this option should be linked statically or dynamically, respectively. After passing -Bstatic and a library to the linker, the default dynamic linking behaviour must be restored manually for the following libraries to be linked dynamically with the -Bdynamic option. To link a program, link library first statically ( libfirst.a ) and second dynamically ( libsecond.so ): Note gcc can be configured to use linkers other than the default ld . Additional resources Using the GNU Compiler Collection (GCC) - 3.14 Options for Linking Documentation for binutils 2.27 - 2.1 Command Line Options 2.4. Creating libraries with GCC Learn about the steps to creating libraries and the necessary concepts used by the Linux operating system for libraries. 2.4.1. Library naming conventions A special file name convention is used for libraries: a library known as foo is expected to exist as file lib foo .so or lib foo .a . This convention is automatically understood by the linking input options of GCC, but not by the output options: When linking against the library, the library can be specified only by its name foo with the -l option as -l foo : When creating the library, the full file name lib foo .so or lib foo .a must be specified. 2.4.2. The soname mechanism Dynamically loaded libraries (shared objects) use a mechanism called soname to manage multiple compatible versions of a library. Prerequisites You must understand dynamic linking and libraries. You must understand the concept of ABI compatibility. You must understand library naming conventions. You must understand symbolic links. Problem introduction A dynamically loaded library (shared object) exists as an independent executable file. This makes it possible to update the library without updating the applications that depend on it. However, the following problems arise with this concept: Identification of the actual version of the library Need for multiple versions of the same library present Signalling ABI compatibility of each of the multiple versions The soname mechanism To resolve this, Linux uses a mechanism called soname. A foo library version X.Y is ABI-compatible with other versions with the same value of X in a version number. Minor changes preserving compatibility increase the number Y . Major changes that break compatibility increase the number X . The actual foo library version X.Y exists as a file libfoo.so. x . y . Inside the library file, a soname is recorded with value libfoo.so.x to signal the compatibility. When applications are built, the linker looks for the library by searching for the file libfoo.so . A symbolic link with this name must exist, pointing to the actual library file. The linker then reads the soname from the library file and records it into the application executable file. 
Finally, the linker creates the application that declares dependency on the library using the soname, not a name or a file name. When the runtime dynamic linker links an application before running, it reads the soname from application's executable file. This soname is libfoo.so. x . A symbolic link with this name must exist, pointing to the actual library file. This allows loading the library, regardless of the Y component of a version, because the soname does not change. Note The Y component of the version number is not limited to just a single number. Additionally, some libraries encode their version in their name. Reading soname from a file To display the soname of a library file somelibrary : Replace somelibrary with the actual file name of the library you wish to examine. 2.4.3. Creating dynamic libraries with GCC Dynamically linked libraries (shared objects) allow: resource conservation through code reuse increased security by making it easier to update the library code Follow these steps to build and install a dynamic library from source. Prerequisites You must understand the soname mechanism. GCC must be installed on the system. You must have source code for a library. Procedure Change to the directory with library sources. Compile each source file to an object file with the Position independent code option -fPIC : The object files have the same file names as the original source code files, but their extension is .o . Link the shared library from the object files: The used major version number is X and minor version number Y. Copy the libfoo.so.x.y file to an appropriate location, where the system's dynamic linker can find it. On Red Hat Enterprise Linux, the directory for libraries is /usr/lib64 : Note that you need root permissions to manipulate files in this directory. Create the symlink structure for soname mechanism: Additional resources The Linux Documentation Project - Program Library HOWTO - 3. Shared Libraries 2.4.4. Creating static libraries with GCC and ar Creating libraries for static linking is possible through conversion of object files into a special type of archive file. Note Red Hat discourages the use of static linking for security reasons. Use static linking only when necessary, especially against libraries provided by Red Hat. See Section 2.3.2, "Static and dynamic linking" for more details. Prerequisites GCC and binutils must be installed on the system. You must understand static and dynamic linking. Source file(s) with functions to be shared as a library are available. Procedure Create intermediate object files with GCC. Append more source files if required. The resulting object files share the file name but use the .o file name extension. Turn the object files into a static library (archive) using the ar tool from the binutils package. File libfoo.a is created. Use the nm command to inspect the resulting archive: Copy the static library file to the appropriate directory. When linking against the library, GCC will automatically recognize from the .a file name extension that the library is an archive for static linking. Additional resources Linux manual page for ar(1) : 2.5. Managing More Code with Make The GNU make utility, commonly abbreviated make , is a tool for controlling the generation of executables from source files. make automatically determines which parts of a complex program have changed and need to be recompiled. make uses configuration files called Makefiles to control the way programs are built. 2.5.1. 
GNU make and Makefile overview To create a usable form (usually executable files) from the source files of a particular project, perform several necessary steps. Record the actions and their sequence to be able to repeat them later. Red Hat Enterprise Linux contains GNU make , a build system designed for this purpose. Prerequisites Understanding the concepts of compiling and linking GNU make GNU make reads Makefiles which contain the instructions describing the build process. A Makefile contains multiple rules that describe a way to satisfy a certain condition ( target ) with a specific action ( recipe ). Rules can hierarchically depend on another rule. Running make without any options makes it look for a Makefile in the current directory and attempt to reach the default target. The actual Makefile file name can be one of Makefile , makefile , and GNUmakefile . The default target is determined from the Makefile contents. Makefile details Makefiles use a relatively simple syntax for defining variables and rules , which consists of a target and a recipe . The target specifies what is the output if a rule is executed. The lines with recipes must start with the TAB character. Typically, a Makefile contains rules for compiling source files, a rule for linking the resulting object files, and a target that serves as the entry point at the top of the hierarchy. Consider the following Makefile for building a C program which consists of a single file, hello.c . all: hello hello: hello.o gcc hello.o -o hello hello.o: hello.c gcc -c hello.c -o hello.o This example shows that to reach the target all , file hello is required. To get hello , one needs hello.o (linked by gcc ), which in turn is created from hello.c (compiled by gcc ). The target all is the default target because it is the first target that does not start with a period (.). Running make without any arguments is then identical to running make all , when the current directory contains this Makefile . Typical makefile A more typical Makefile uses variables for generalization of the steps and adds a target "clean" - remove everything but the source files. CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE) Adding more source files to such Makefile requires only adding them to the line where the SOURCE variable is defined. Additional resources GNU make: Introduction - 2 An Introduction to Makefiles 2.5.2. Example: Building a C program using a Makefile Build a sample C program using a Makefile by following the steps in this example. Prerequisites You must understand the concepts of Makefiles and make . Procedure Create a directory hellomake and change to this directory: Create a file hello.c with the following contents: #include <stdio.h> int main(int argc, char *argv[]) { printf("Hello, World!\n"); return 0; } Create a file Makefile with the following contents: CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE) Important The Makefile recipe lines must start with the tab character! When copying the text above from the documentation, the cut-and-paste process may paste spaces instead of tabs. If this happens, correct the issue manually. Run make : This creates an executable file hello . 
Run the executable file hello : Run the Makefile target clean to remove the created files: 2.5.3. Documentation resources for make For more information about make , see the resources listed below. Installed documentation Use the man and info tools to view manual pages and information pages installed on your system: Online documentation The GNU Make Manual hosted by the Free Software Foundation
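As a concrete sketch of the dynamic library workflow described above, the following commands build a hypothetical library bar at version 1.0 with the soname libbar.so.1 ; all file names here are illustrative:

gcc -c -fPIC bar.c
gcc -shared -o libbar.so.1.0 -Wl,-soname,libbar.so.1 bar.o
sudo cp libbar.so.1.0 /usr/lib64
sudo ln -s /usr/lib64/libbar.so.1.0 /usr/lib64/libbar.so.1
sudo ln -s /usr/lib64/libbar.so.1 /usr/lib64/libbar.so
objdump -p /usr/lib64/libbar.so.1.0 | grep SONAME   # should report libbar.so.1
gcc main.c -lbar -o app                             # applications can now link against -lbar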
[ "man gcc", "gcc -c source.c another_source.c", "gcc ... -g", "man gcc", "man gcc", "gcc ... -O2 -g -Wall -Wl,-z,now,-z,relro -fstack-protector-strong -fstack-clash-protection -D_FORTIFY_SOURCE=2", "gcc ... -Walloc-zero -Walloca-larger-than -Wextra -Wformat-security -Wvla-larger-than", "gcc ... objfile.o another_object.o ... -o executable-file", "mkdir hello-c cd hello-c", "#include <stdio.h> int main() { printf(\"Hello, World!\\n\"); return 0; }", "gcc hello.c -o helloworld", "./helloworld Hello, World!", "mkdir hello-c cd hello-c", "#include <stdio.h> int main() { printf(\"Hello, World!\\n\"); return 0; }", "gcc -c hello.c", "gcc hello.o -o helloworld", "./helloworld Hello, World!", "mkdir hello-cpp cd hello-cpp", "#include <iostream> int main() { std::cout << \"Hello, World!\\n\"; return 0; }", "g++ hello.cpp -o helloworld", "./helloworld Hello, World!", "mkdir hello-cpp cd hello-cpp", "#include <iostream> int main() { std::cout << \"Hello, World!\\n\"; return 0; }", "g++ -c hello.cpp", "g++ hello.o -o helloworld", "./helloworld Hello, World!", "gcc ... -l foo", "gcc ... -I include_path", "gcc ... -L library_path -l foo", "gcc ... -I header_path -c", "gcc ... -L library_path -l foo", "./program", "gcc ... -L library_path -l foo", "gcc ... -L library_path -l foo -Wl,-rpath= library_path", "./program", "export LD_LIBRARY_PATH= library_path :USDLD_LIBRARY_PATH ./program", "man ld.so", "cat /etc/ld.so.conf", "ldconfig -v", "gcc ... path/to/libfoo.a", "gcc ... -Wl,-Bstatic -l first -Wl,-Bdynamic -l second", "gcc ... -l foo", "objdump -p somelibrary | grep SONAME", "gcc ... -c -fPIC some_file.c", "gcc -shared -o libfoo.so.x.y -Wl,-soname, libfoo.so.x some_file.o", "cp libfoo.so.x.y /usr/lib64", "ln -s libfoo.so.x.y libfoo.so.x ln -s libfoo.so.x libfoo.so", "gcc -c source_file.c", "ar rcs lib foo .a source_file.o", "nm libfoo.a", "gcc ... -l foo", "man ar", "all: hello hello: hello.o gcc hello.o -o hello hello.o: hello.c gcc -c hello.c -o hello.o", "CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE)", "mkdir hellomake cd hellomake", "#include <stdio.h> int main(int argc, char *argv[]) { printf(\"Hello, World!\\n\"); return 0; }", "CC=gcc CFLAGS=-c -Wall SOURCE=hello.c OBJ=USD(SOURCE:.c=.o) EXE=hello all: USD(SOURCE) USD(EXE) USD(EXE): USD(OBJ) USD(CC) USD(OBJ) -o USD@ %.o: %.c USD(CC) USD(CFLAGS) USD< -o USD@ clean: rm -rf USD(OBJ) USD(EXE)", "make gcc -c -Wall hello.c -o hello.o gcc hello.o -o hello", "./hello Hello, World!", "make clean rm -rf hello.o hello", "man make info make" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/developing_c_and_cpp_applications_in_rhel_9/assembly_creating-c-or-cpp-applications_developing-applications
Chapter 7. Known issues
Chapter 7. Known issues There are no known issues for this release.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_web_server/6.0/html/red_hat_jboss_web_server_6.0_service_pack_1_release_notes/known_issues
E.5. GRUB Terminology
E.5. GRUB Terminology One of the most important things to understand before using GRUB is how the program refers to devices, such as hard drives and partitions. This information is particularly important when configuring GRUB to boot multiple operating systems. E.5.1. Device Names When referring to a specific device with GRUB, do so using the following format (note that the parentheses and comma are very important syntactically): ( <type-of-device><bios-device-number> , <partition-number> ) The <type-of-device> specifies the type of device from which GRUB boots. The two most common options are hd for a hard disk or fd for a 3.5 diskette. A lesser used device type is also available called nd for a network disk. Instructions on configuring GRUB to boot over the network are available online at http://www.gnu.org/software/grub/manual/ . The <bios-device-number> is the BIOS device number. The primary IDE hard drive is numbered 0 and a secondary IDE hard drive is numbered 1 . This syntax is roughly equivalent to that used for devices by the kernel. For example, the a in hda for the kernel is analogous to the 0 in hd0 for GRUB, the b in hdb is analogous to the 1 in hd1 , and so on. The <partition-number> specifies the number of a partition on a device. Like the <bios-device-number> , most types of partitions are numbered starting at 0 . However, BSD partitions are specified using letters, with a corresponding to 0 , b corresponding to 1 , and so on. Note The numbering system for devices under GRUB always begins with 0 , not 1 . Failing to make this distinction is one of the most common mistakes made by new users. To give an example, if a system has more than one hard drive, GRUB refers to the first hard drive as (hd0) and the second as (hd1) . Likewise, GRUB refers to the first partition on the first drive as (hd0,0) and the third partition on the second hard drive as (hd1,2) . In general the following rules apply when naming devices and partitions under GRUB: It does not matter if system hard drives are IDE or SCSI, all hard drives begin with the letters hd . The letters fd are used to specify 3.5 diskettes. To specify an entire device without respect to partitions, leave off the comma and the partition number. This is important when telling GRUB to configure the MBR for a particular disk. For example, (hd0) specifies the MBR on the first device and (hd3) specifies the MBR on the fourth device. If a system has multiple drive devices, it is very important to know how the drive boot order is set in the BIOS. This is a simple task if a system has only IDE or SCSI drives, but if there is a mix of devices, it becomes critical that the type of drive with the boot partition be accessed first.
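For example, a minimal menu entry in a legacy GRUB configuration file ( /boot/grub/grub.conf on Red Hat Enterprise Linux 6) applies this naming as shown in the following sketch; the kernel version and root volume are illustrative only:

title Red Hat Enterprise Linux
        root (hd0,0)
        kernel /vmlinuz-<kernel-version> ro root=/dev/mapper/vg_example-lv_root
        initrd /initramfs-<kernel-version>.img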
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-grub-terminology
Chapter 5. Gathering data about your cluster
Chapter 5. Gathering data about your cluster When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. It is recommended to provide: Data gathered using the oc adm must-gather command The unique cluster ID 5.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option. For example: USD oc adm must-gather --run-namespace <namespace> \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11 5.1.1. Gathering data about your cluster for Red Hat Support You can gather debugging information about your cluster by using the oc adm must-gather CLI command. Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) is installed. Procedure Navigate to the directory where you want to store the must-gather data. Note If your cluster is in a disconnected environment, you must take additional steps. If your mirror registry has a trusted CA, you must first add the trusted CA to the cluster. For all clusters in disconnected environments, you must import the default must-gather image as an image stream. USD oc import-image is/must-gather -n openshift Run the oc adm must-gather command: USD oc adm must-gather Important If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. Note Because this command picks a random control plane node by default, the pod might be scheduled to a control plane node that is in the NotReady and SchedulingDisabled state. If this command fails, for example, if you cannot schedule a pod on your cluster, then use the oc adm inspect command to gather information for particular resources. Note Contact Red Hat Support for the recommended resources to gather. Create a compressed file from the must-gather directory that was just created in your working directory. 
For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Make sure to replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the the Customer Support page of the Red Hat Customer Portal. 5.1.2. Gathering data about specific features You can gather debugging information about specific features by using the oc adm must-gather CLI command with the --image or --image-stream argument. The must-gather tool supports multiple images, so you can gather data about more than one feature by running a single command. Table 5.1. Supported must-gather images Image Purpose registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11 Data collection for OpenShift Virtualization. registry.redhat.io/openshift-serverless-1/svls-must-gather-rhel8 Data collection for OpenShift Serverless. registry.redhat.io/openshift-service-mesh/istio-must-gather-rhel8:<installed_version_service_mesh> Data collection for Red Hat OpenShift Service Mesh. registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v<installed_version_migration_toolkit> Data collection for the Migration Toolkit for Containers. registry.redhat.io/odf4/odf-must-gather-rhel9:v<installed_version_ODF> Data collection for Red Hat OpenShift Data Foundation. registry.redhat.io/openshift-logging/cluster-logging-rhel9-operator:v<installed_version_logging> Data collection for logging. quay.io/netobserv/must-gather Data collection for the Network Observability Operator. registry.redhat.io/openshift4/ose-csi-driver-shared-resource-mustgather-rhel8 Data collection for OpenShift Shared Resource CSI Driver. registry.redhat.io/openshift4/ose-local-storage-mustgather-rhel8:v<installed_version_LSO> Data collection for Local Storage Operator. registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel8:v<installed_version_sandboxed_containers> Data collection for OpenShift sandboxed containers. registry.redhat.io/workload-availability/node-healthcheck-must-gather-rhel8:v<installed-version-NHC> Data collection for the Red Hat Workload Availability Operators, including the Self Node Remediation (SNR) Operator, the Fence Agents Remediation (FAR) Operator, the Machine Deletion Remediation (MDR) Operator, the Node Health Check Operator (NHC) Operator, and the Node Maintenance Operator (NMO) Operator. registry.redhat.io/numaresources/numaresources-must-gather-rhel9:v<installed-version-nro> Data collection for the NUMA Resources Operator (NRO). registry.redhat.io/openshift4/ptp-must-gather-rhel8:v<installed-version-ptp> Data collection for the PTP Operator. registry.redhat.io/openshift-gitops-1/must-gather-rhel8:v<installed_version_GitOps> Data collection for Red Hat OpenShift GitOps. registry.redhat.io/openshift4/ose-secrets-store-csi-mustgather-rhel8:v<installed_version_secret_store> Data collection for the Secrets Store CSI Driver Operator. registry.redhat.io/lvms4/lvms-must-gather-rhel9:v<installed_version_LVMS> Data collection for the LVM Operator. registry.redhat.io/compliance/openshift-compliance-must-gather-rhel8:<digest-version> Data collection for the Compliance Operator. registry.redhat.io/rhacm2/acm-must-gather-rhel9:v<ACM_version> Data collection for Red Hat Advanced Cluster Management (RHACM) 2.10 and later. registry.redhat.io/rhacm2/acm-must-gather-rhel8:v<ACM_version> Data collection for RHACM 2.9 and earlier. 
<registry_name:port_number>/rhacm2/acm-must-gather-rhel9:v<ACM_version> Data collection for RHACM 2.10 and later in a disconnected environment. <registry_name:port_number>/rhacm2/acm-must-gather-rhel8:v<ACM_version> Data collection for RHACM 2.9 and earlier in a disconnected environment. Note To determine the latest version for an OpenShift Container Platform component's image, see the Red Hat OpenShift Container Platform Life Cycle Policy web page on the Red Hat Customer Portal. Prerequisites You have access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI ( oc ) is installed. Procedure Navigate to the directory where you want to store the must-gather data. Run the oc adm must-gather command with one or more --image or --image-stream arguments. Note To collect the default must-gather data in addition to specific feature data, add the --image-stream=openshift/must-gather argument. For information on gathering data about the Custom Metrics Autoscaler, see the Additional resources section that follows. For example, the following command gathers both the default cluster data and information specific to OpenShift Virtualization: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11 2 1 The default OpenShift Container Platform must-gather image 2 The must-gather image for OpenShift Virtualization You can use the must-gather tool with additional arguments to gather data that is specifically related to OpenShift Logging and the Red Hat OpenShift Logging Operator in your cluster. For OpenShift Logging, run the following command: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator \ -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') Example 5.1. 
Example must-gather output for OpenShift Logging ├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── .insecure.log │ │ │ │ └── .log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── .insecure.log │ │ │ └── .log │ └── 
route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├── ... Run the oc adm must-gather command with one or more --image or --image-stream arguments. For example, the following command gathers both the default cluster data and information specific to KubeVirt: USD oc adm must-gather \ --image-stream=openshift/must-gather \ 1 --image=quay.io/kubevirt/must-gather 2 1 The default OpenShift Container Platform must-gather image 2 The must-gather image for KubeVirt Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1 1 Make sure to replace must-gather-local.5421342344627712289/ with the actual directory name. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal. 5.2. Additional resources Gathering debugging data for the Custom Metrics Autoscaler. Red Hat OpenShift Container Platform Life Cycle Policy 5.2.1. Gathering network logs You can gather network logs on all nodes in a cluster. Procedure Run the oc adm must-gather command with -- gather_network_logs : USD oc adm must-gather -- gather_network_logs Note By default, the must-gather tool collects the OVN nbdb and sbdb databases from all of the nodes in the cluster. Add the -- gather_network_logs option to include additional logs that contain OVN-Kubernetes transactions for the OVN nbdb database. Create a compressed file from the must-gather directory that was just created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1 1 Replace must-gather-local.472290403699006248 with the actual directory name. Attach the compressed file to your support case on the Customer Support page of the Red Hat Customer Portal. 5.3. Obtaining your cluster ID When providing information to Red Hat Support, it is helpful to provide the unique identifier for your cluster. You can have your cluster ID autofilled by using the OpenShift Container Platform web console. You can also manually obtain your cluster ID by using the web console or the OpenShift CLI ( oc ). Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the web console or the OpenShift CLI ( oc ) installed. Procedure To open a support case and have your cluster ID autofilled using the web console: From the toolbar, navigate to (?) Help and select Share Feedback from the list. Click Open a support case from the Tell us about your experience window. To manually obtain your cluster ID using the web console: Navigate to Home Overview . The value is available in the Cluster ID field of the Details section. To obtain your cluster ID using the OpenShift CLI ( oc ), run the following command: USD oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}' 5.4. About sosreport sosreport is a tool that collects configuration details, system information, and diagnostic data from Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) systems. sosreport provides a standardized way to collect diagnostic information relating to a node, which can then be provided to Red Hat Support for issue diagnosis.
In some support interactions, Red Hat Support may ask you to collect a sosreport archive for a specific OpenShift Container Platform node. For example, it might sometimes be necessary to review system logs or other node-specific data that is not included within the output of oc adm must-gather . 5.5. Generating a sosreport archive for an OpenShift Container Platform cluster node The recommended way to generate a sosreport for an OpenShift Container Platform 4.14 cluster node is through a debug pod. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have SSH access to your hosts. You have installed the OpenShift CLI ( oc ). You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have an existing Red Hat Support case ID. Procedure Obtain a list of cluster nodes: USD oc get nodes Enter into a debug session on the target node. This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node To enter into a debug session on the target node that is tainted with the NoExecute effect, add a toleration to a dummy namespace, and start the debug pod in the dummy namespace: USD oc new-project dummy USD oc patch namespace dummy --type=merge -p '{"metadata": {"annotations": { "scheduler.alpha.kubernetes.io/defaultTolerations": "[{\"operator\": \"Exists\"}]"}}}' USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. Start a toolbox container, which includes the required binaries and plugins to run sosreport : # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins. Collect a sosreport archive. Run the sos report command to collect necessary troubleshooting data on crio and podman : # sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1 1 -k enables you to define sosreport plugin parameters outside of the defaults. Optional: To include information on OVN-Kubernetes networking configurations from a node in your report, run the following command: # sos report --all-logs Press Enter when prompted, to continue. Provide the Red Hat Support case ID. sosreport adds the ID to the archive's file name. The sosreport output provides the archive's location and checksum. The following sample output references support case ID 01234567 : Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e 1 The sosreport archive's file path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . 
Provide the sosreport archive to Red Hat Support for analysis, using one of the following methods. Upload the file to an existing Red Hat support case. Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the oc debug session: USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a sosreport archive from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a sosreport archive from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.6. Querying bootstrap node journal logs If you experience bootstrap-related issues, you can gather bootkube.service journald unit logs and container logs from the bootstrap node. Prerequisites You have SSH access to your bootstrap node. You have the fully qualified domain name of the bootstrap node. Procedure Query bootkube.service journald unit logs from a bootstrap node during OpenShift Container Platform installation. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. Collect logs from the bootstrap node containers using podman on the bootstrap node. Replace <bootstrap_fqdn> with the bootstrap node's fully qualified domain name: USD ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done' 5.7. Querying cluster node journal logs You can gather journald unit logs and other logs within /var/log on individual cluster nodes. Prerequisites You have access to the cluster as a user with the cluster-admin role. Your API service is still functional. You have installed the OpenShift CLI ( oc ). You have SSH access to your hosts. Procedure Query kubelet journald unit logs from OpenShift Container Platform cluster nodes. The following example queries control plane nodes only: USD oc adm node-logs --role=master -u kubelet 1 1 Replace kubelet as appropriate to query other unit logs. Collect logs from specific subdirectories under /var/log/ on cluster nodes. Retrieve a list of logs contained within a /var/log/ subdirectory. The following example lists files in /var/log/openshift-apiserver/ on all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver Inspect a specific log within a /var/log/ subdirectory. 
The following example outputs /var/log/openshift-apiserver/audit.log contents from all control plane nodes: USD oc adm node-logs --role=master --path=openshift-apiserver/audit.log If the API is not functional, review the logs on each node using SSH instead. The following example tails /var/log/openshift-apiserver/audit.log : USD ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log Note OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running oc adm must gather and other oc commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> . 5.8. Network trace methods Collecting network traces, in the form of packet capture records, can assist Red Hat Support with troubleshooting network issues. OpenShift Container Platform supports two ways of performing a network trace. Review the following table and choose the method that meets your needs. Table 5.2. Supported methods of collecting a network trace Method Benefits and capabilities Collecting a host network trace You perform a packet capture for a duration that you specify on one or more nodes at the same time. The packet capture files are transferred from nodes to the client machine when the specified duration is met. You can troubleshoot why a specific action triggers network communication issues. Run the packet capture, perform the action that triggers the issue, and use the logs to diagnose the issue. Collecting a network trace from an OpenShift Container Platform node or container You perform a packet capture on one node or one container. You run the tcpdump command interactively, so you can control the duration of the packet capture. You can start the packet capture manually, trigger the network communication issue, and then stop the packet capture manually. This method uses the cat command and shell redirection to copy the packet capture data from the node or container to the client machine. 5.9. Collecting a host network trace Sometimes, troubleshooting a network-related issue is simplified by tracing network communication and capturing packets on multiple nodes at the same time. You can use a combination of the oc adm must-gather command and the registry.redhat.io/openshift4/network-tools-rhel8 container image to gather packet captures from nodes. Analyzing packet captures can help you troubleshoot network communication issues. The oc adm must-gather command is used to run the tcpdump command in pods on specific nodes. The tcpdump command records the packet captures in the pods. When the tcpdump command exits, the oc adm must-gather command transfers the files with the packet captures from the pods to your client machine. Tip The sample command in the following procedure demonstrates performing a packet capture with the tcpdump command. However, you can run any command in the container image that is specified in the --image argument to gather troubleshooting information from multiple nodes at the same time. 
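As the tip above notes, tcpdump is only one possibility. The following is a minimal sketch, not taken from this document, that reuses the same pattern to collect interface and routing information from all worker nodes at once; it assumes the ip utility is available in the network-tools image, and the file names under /tmp/output are arbitrary examples:

oc adm must-gather \
  --dest-dir /tmp/netinfo \
  --source-dir '/tmp/output/' \
  --image registry.redhat.io/openshift4/network-tools-rhel8:latest \
  --node-selector 'node-role.kubernetes.io/worker' \
  --host-network=true \
  --timeout 30s \
  -- /bin/bash -c 'mkdir -p /tmp/output && ip addr > /tmp/output/ip-addr.txt && ip route > /tmp/output/ip-route.txt'

The files that each node produces are copied back under /tmp/netinfo on the client machine, in the same per-node directory layout shown for the packet captures later in this procedure.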
Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Run a packet capture from the host network on some nodes by running the following command: USD oc adm must-gather \ --dest-dir /tmp/captures \ <.> --source-dir '/tmp/tcpdump/' \ <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \ <.> --node-selector 'node-role.kubernetes.io/worker' \ <.> --host-network=true \ <.> --timeout 30s \ <.> -- \ tcpdump -i any \ <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300 <.> The --dest-dir argument specifies that oc adm must-gather stores the packet captures in directories that are relative to /tmp/captures on the client machine. You can specify any writable directory. <.> When tcpdump is run in the debug pod that oc adm must-gather starts, the --source-dir argument specifies that the packet captures are temporarily stored in the /tmp/tcpdump directory on the pod. <.> The --image argument specifies a container image that includes the tcpdump command. <.> The --node-selector argument and example value specifies to perform the packet captures on the worker nodes. As an alternative, you can specify the --node-name argument instead to run the packet capture on a single node. If you omit both the --node-selector and the --node-name argument, the packet captures are performed on all nodes. <.> The --host-network=true argument is required so that the packet captures are performed on the network interfaces of the node. <.> The --timeout argument and value specify to run the debug pod for 30 seconds. If you do not specify the --timeout argument and a duration, the debug pod runs for 10 minutes. <.> The -i any argument for the tcpdump command specifies to capture packets on all network interfaces. As an alternative, you can specify a network interface name. Perform the action, such as accessing a web application, that triggers the network communication issue while the network trace captures packets. Review the packet capture files that oc adm must-gather transferred from the pods to your client machine: tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca... │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca... │ └── 2022-01-13T19:31:30.pcap ├── ip-... └── timestamp 1 2 The packet captures are stored in directories that identify the hostname, container, and file name. If you did not specify the --node-selector argument, then the directory level for the hostname is not present. 5.10. Collecting a network trace from an OpenShift Container Platform node or container When investigating potential network-related OpenShift Container Platform issues, Red Hat Support might request a network packet trace from a specific OpenShift Container Platform cluster node or from a specific container. The recommended method to capture a network trace in OpenShift Container Platform is through a debug pod. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have an existing Red Hat Support case ID. You have SSH access to your hosts. Procedure Obtain a list of cluster nodes: USD oc get nodes Enter into a debug session on the target node. 
This step instantiates a debug pod called <node_name>-debug : USD oc debug node/my-cluster-node Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Note OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to access nodes using ssh core@<node>.<cluster_name>.<base_domain> instead. From within the chroot environment console, obtain the node's interface names: # ip ad Start a toolbox container, which includes the required binaries and plugins to run sosreport : # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . To avoid tcpdump issues, remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container. Initiate a tcpdump session on the cluster node and redirect output to a capture file. This example uses ens5 as the interface name: USD tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1 1 The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . If a tcpdump capture is required for a specific container on the node, follow these steps. Determine the target container ID. The chroot host command precedes the crictl command in this step because the toolbox container mounts the host's root directory at /host : # chroot /host crictl ps Determine the container's process ID. In this example, the container ID is a7fe32346b120 : # chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}' Initiate a tcpdump session on the container and redirect output to a capture file. This example uses 49628 as the container's process ID and ens5 as the interface name. The nsenter command enters the namespace of a target process and runs a command in its namespace. because the target process in this example is a container's process ID, the tcpdump command is run in the container's namespace from the host: # nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1 1 The tcpdump capture file's path is outside of the chroot environment because the toolbox container mounts the host's root directory at /host . Provide the tcpdump capture file to Red Hat Support for analysis, using one of the following methods. Upload the file to an existing Red Hat support case. Concatenate the sosreport archive by running the oc debug node/<node_name> command and redirect the output to a file. This command assumes you have exited the oc debug session: USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. 
Note OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring a tcpdump capture file from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy a tcpdump capture file from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.11. Providing diagnostic data to Red Hat Support When investigating OpenShift Container Platform issues, Red Hat Support might ask you to upload diagnostic data to a support case. Files can be uploaded to a support case through the Red Hat Customer Portal. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have SSH access to your hosts. You have installed the OpenShift CLI ( oc ). You have a Red Hat standard or premium Subscription. You have a Red Hat Customer Portal account. You have an existing Red Hat Support case ID. Procedure Upload diagnostic data to an existing Red Hat support case through the Red Hat Customer Portal. Concatenate a diagnostic file contained on an OpenShift Container Platform node by using the oc debug node/<node_name> command and redirect the output to a file. The following example copies /host/var/tmp/my-diagnostic-data.tar.gz from a debug container to /var/tmp/my-diagnostic-data.tar.gz : USD oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1 1 The debug container mounts the host's root directory at /host . Reference the absolute path from the debug container's root directory, including /host , when specifying target files for concatenation. Note OpenShift Container Platform 4.14 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Transferring files from a cluster node by using scp is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, oc operations will be impacted. In such situations, it is possible to copy diagnostic files from a node by running scp core@<node>.<cluster_name>.<base_domain>:<file_path> <local_path> . Navigate to an existing support case within the Customer Support page of the Red Hat Customer Portal. Select Attach files and follow the prompts to upload the file. 5.12. About toolbox toolbox is a tool that starts a container on a Red Hat Enterprise Linux CoreOS (RHCOS) system. The tool is primarily used to start a container that includes the required binaries and plugins that are needed to run commands such as sosreport . The primary purpose for a toolbox container is to gather diagnostic information and to provide it to Red Hat Support. However, if additional diagnostic tools are required, you can add RPM packages or run an image that is an alternative to the standard support tools image. Installing packages to a toolbox container By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. This image contains the most frequently used support tools. 
If you need to collect node-specific data that requires a support tool that is not part of the image, you can install additional packages. Prerequisites You have accessed a node with the oc debug node/<node_name> command. Procedure Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Start the toolbox container: # toolbox Install the additional package, such as wget : # dnf install -y <package_name> Starting an alternative image with toolbox By default, running the toolbox command starts a container with the registry.redhat.io/rhel8/support-tools:latest image. You can start an alternative image by creating a .toolboxrc file and specifying the image to run. Prerequisites You have accessed a node with the oc debug node/<node_name> command. Procedure Set /host as the root directory within the debug shell. The debug pod mounts the host's root file system in /host within the pod. By changing the root directory to /host , you can run binaries contained in the host's executable paths: # chroot /host Create a .toolboxrc file in the home directory for the root user ID: # vi ~/.toolboxrc REGISTRY=quay.io 1 IMAGE=fedora/fedora:33-x86_64 2 TOOLBOX_NAME=toolbox-fedora-33 3 1 Optional: Specify an alternative container registry. 2 Specify an alternative image to start. 3 Optional: Specify an alternative name for the toolbox container. Start a toolbox container with the alternative image: # toolbox Note If an existing toolbox pod is already running, the toolbox command outputs 'toolbox-' already exists. Trying to start... . Remove the running toolbox container with podman rm toolbox- and spawn a new toolbox container, to avoid issues with sosreport plugins.
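As a recap of this chapter, a minimal data-gathering pass for a support case might look like the following sketch; the must-gather.local directory suffix differs on every run, so the value shown is a placeholder:

oc adm must-gather
tar cvaf must-gather.tar.gz must-gather.local.<directory_suffix>/
oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}'

Attach the resulting must-gather.tar.gz file to the support case and quote the cluster ID that the last command prints.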
[ "oc adm must-gather --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11", "oc import-image is/must-gather -n openshift", "oc adm must-gather", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v4.14.11 2", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "├── cluster-logging │ ├── clo │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ ├── clusterlogforwarder_cr │ │ ├── cr │ │ ├── csv │ │ ├── deployment │ │ └── logforwarding_cr │ ├── collector │ │ ├── fluentd-2tr64 │ ├── eo │ │ ├── csv │ │ ├── deployment │ │ └── elasticsearch-operator-7dc7d97b9d-jb4r4 │ ├── es │ │ ├── cluster-elasticsearch │ │ │ ├── aliases │ │ │ ├── health │ │ │ ├── indices │ │ │ ├── latest_documents.json │ │ │ ├── nodes │ │ │ ├── nodes_stats.json │ │ │ └── thread_pool │ │ ├── cr │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ └── logs │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ ├── install │ │ ├── co_logs │ │ ├── install_plan │ │ ├── olmo_logs │ │ └── subscription │ └── kibana │ ├── cr │ ├── kibana-9d69668d4-2rkvz ├── cluster-scoped-resources │ └── core │ ├── nodes │ │ ├── ip-10-0-146-180.eu-west-1.compute.internal.yaml │ └── persistentvolumes │ ├── pvc-0a8d65d9-54aa-4c44-9ecc-33d9381e41c1.yaml ├── event-filter.html ├── gather-debug.log └── namespaces ├── openshift-logging │ ├── apps │ │ ├── daemonsets.yaml │ │ ├── deployments.yaml │ │ ├── replicasets.yaml │ │ └── statefulsets.yaml │ ├── batch │ │ ├── cronjobs.yaml │ │ └── jobs.yaml │ ├── core │ │ ├── configmaps.yaml │ │ ├── endpoints.yaml │ │ ├── events │ │ │ ├── elasticsearch-im-app-1596020400-gm6nl.1626341a296c16a1.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-9l9n4.1626341a2af81bbd.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-v98tk.1626341a2d821069.yaml │ │ │ ├── elasticsearch-im-app-1596020400-cc5vc.1626341a3019b238.yaml │ │ │ ├── elasticsearch-im-audit-1596020400-s8d5s.1626341a31f7b315.yaml │ │ │ ├── elasticsearch-im-infra-1596020400-7mgv8.1626341a35ea59ed.yaml │ │ ├── events.yaml │ │ ├── persistentvolumeclaims.yaml │ │ ├── pods.yaml │ │ ├── replicationcontrollers.yaml │ │ ├── secrets.yaml │ │ └── services.yaml │ ├── openshift-logging.yaml │ ├── pods │ │ ├── cluster-logging-operator-74dd5994f-6ttgt │ │ │ ├── cluster-logging-operator │ │ │ │ └── cluster-logging-operator │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ └── cluster-logging-operator-74dd5994f-6ttgt.yaml │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff │ │ │ ├── cluster-logging-operator-registry │ │ │ │ └── cluster-logging-operator-registry │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── cluster-logging-operator-registry-6df49d7d4-mxxff.yaml │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── mutate-csv-and-generate-sqlite-db │ │ │ └── logs 
│ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── elasticsearch-cdm-lp8l38m0-1-794d6dd989-4jxms │ │ ├── elasticsearch-im-app-1596030300-bpgcx │ │ │ ├── elasticsearch-im-app-1596030300-bpgcx.yaml │ │ │ └── indexmanagement │ │ │ └── indexmanagement │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── fluentd-2tr64 │ │ │ ├── fluentd │ │ │ │ └── fluentd │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── fluentd-2tr64.yaml │ │ │ └── fluentd-init │ │ │ └── fluentd-init │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ ├── kibana-9d69668d4-2rkvz │ │ │ ├── kibana │ │ │ │ └── kibana │ │ │ │ └── logs │ │ │ │ ├── current.log │ │ │ │ ├── previous.insecure.log │ │ │ │ └── previous.log │ │ │ ├── kibana-9d69668d4-2rkvz.yaml │ │ │ └── kibana-proxy │ │ │ └── kibana-proxy │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ └── route.openshift.io │ └── routes.yaml └── openshift-operators-redhat ├──", "oc adm must-gather --image-stream=openshift/must-gather \\ 1 --image=quay.io/kubevirt/must-gather 2", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc adm must-gather -- gather_network_logs", "tar cvaf must-gather.tar.gz must-gather.local.472290403699006248 1", "oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{\"\\n\"}'", "oc get nodes", "oc debug node/my-cluster-node", "oc new-project dummy", "oc patch namespace dummy --type=merge -p '{\"metadata\": {\"annotations\": { \"scheduler.alpha.kubernetes.io/defaultTolerations\": \"[{\\\"operator\\\": \\\"Exists\\\"}]\"}}}'", "oc debug node/my-cluster-node", "chroot /host", "toolbox", "sos report -k crio.all=on -k crio.logs=on -k podman.all=on -k podman.logs=on 1", "sos report --all-logs", "Your sosreport has been generated and saved in: /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1 The checksum is: 382ffc167510fd71b4f12a4f40b97a4e", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz' > /tmp/sosreport-my-cluster-node-01234567-2020-05-28-eyjknxt.tar.xz 1", "ssh core@<bootstrap_fqdn> journalctl -b -f -u bootkube.service", "ssh core@<bootstrap_fqdn> 'for pod in USD(sudo podman ps -a -q); do sudo podman logs USDpod; done'", "oc adm node-logs --role=master -u kubelet 1", "oc adm node-logs --role=master --path=openshift-apiserver", "oc adm node-logs --role=master --path=openshift-apiserver/audit.log", "ssh core@<master-node>.<cluster_name>.<base_domain> sudo tail -f /var/log/openshift-apiserver/audit.log", "oc adm must-gather --dest-dir /tmp/captures \\ <.> --source-dir '/tmp/tcpdump/' \\ <.> --image registry.redhat.io/openshift4/network-tools-rhel8:latest \\ <.> --node-selector 'node-role.kubernetes.io/worker' \\ <.> --host-network=true \\ <.> --timeout 30s \\ <.> -- tcpdump -i any \\ <.> -w /tmp/tcpdump/%Y-%m-%dT%H:%M:%S.pcap -W 1 -G 300", "tmp/captures ├── event-filter.html ├── ip-10-0-192-217-ec2-internal 1 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:31.pcap ├── ip-10-0-201-178-ec2-internal 2 │ └── registry-redhat-io-openshift4-network-tools-rhel8-sha256-bca │ └── 2022-01-13T19:31:30.pcap ├── ip- └── timestamp", "oc get nodes", "oc debug node/my-cluster-node", "chroot /host", "ip ad", "toolbox", "tcpdump -nn -s 0 -i ens5 -w /host/var/tmp/my-cluster-node_USD(date 
+%d_%m_%Y-%H_%M_%S-%Z).pcap 1", "chroot /host crictl ps", "chroot /host crictl inspect --output yaml a7fe32346b120 | grep 'pid' | awk '{print USD2}'", "nsenter -n -t 49628 -- tcpdump -nn -i ens5 -w /host/var/tmp/my-cluster-node-my-container_USD(date +%d_%m_%Y-%H_%M_%S-%Z).pcap 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-tcpdump-capture-file.pcap' > /tmp/my-tcpdump-capture-file.pcap 1", "oc debug node/my-cluster-node -- bash -c 'cat /host/var/tmp/my-diagnostic-data.tar.gz' > /var/tmp/my-diagnostic-data.tar.gz 1", "chroot /host", "toolbox", "dnf install -y <package_name>", "chroot /host", "vi ~/.toolboxrc", "REGISTRY=quay.io 1 IMAGE=fedora/fedora:33-x86_64 2 TOOLBOX_NAME=toolbox-fedora-33 3", "toolbox" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/support/gathering-cluster-data
Chapter 19. Profiling CPU usage in real time with perf top
Chapter 19. Profiling CPU usage in real time with perf top You can use the perf top command to measure CPU usage of different functions in real time. Prerequisites You have the perf user space tool installed as described in Installing perf . 19.1. The purpose of perf top The perf top command is used for real time system profiling and functions similarly to the top utility. However, where the top utility generally shows you how much CPU time a given process or thread is using, perf top shows you how much CPU time each specific function uses. In its default state, perf top tells you about functions being used across all CPUs in both the user-space and the kernel-space. To use perf top , you need root access. 19.2. Profiling CPU usage with perf top This procedure activates perf top and profiles CPU usage in real time. Prerequisites You have the perf user space tool installed as described in Installing perf . You have root access. Procedure Start the perf top monitoring interface: The monitoring interface looks similar to the following: In this example, the kernel function do_syscall_64 is using the most CPU time. Additional resources perf-top(1) man page on your system 19.3. Interpretation of perf top output The perf top monitoring interface displays the data in several columns: The "Overhead" column Displays the percent of CPU a given function is using. The "Shared Object" column Displays the name of the program or library which is using the function. The "Symbol" column Displays the function name or symbol. Functions executed in the kernel-space are identified by [k] and functions executed in the user-space are identified by [.] . 19.4. Why perf displays some function names as raw function addresses For kernel functions, perf uses the information from the /proc/kallsyms file to map the samples to their respective function names or symbols. For functions executed in the user space, however, you might see raw function addresses because the binary is stripped. The debuginfo package of the executable must be installed or, if the executable is a locally developed application, the application must be compiled with debugging information turned on (the -g option in GCC) to display the function names or symbols in such a situation. Note It is not necessary to re-run the perf record command after installing the debuginfo associated with an executable. Simply re-run the perf report command. Additional resources Enabling debugging with debugging information 19.5. Enabling debug and source repositories A standard installation of Red Hat Enterprise Linux does not enable the debug and source repositories. These repositories contain information needed to debug the system components and measure their performance. Procedure Enable the source and debug information package channels: The $(uname -i) part is automatically replaced with a matching value for the architecture of your system: Architecture name Value 64-bit Intel and AMD x86_64 64-bit ARM aarch64 IBM POWER ppc64le 64-bit IBM Z s390x 19.6. Getting debuginfo packages for an application or library using GDB Debugging information is required to debug code. For code that is installed from a package, the GNU Debugger (GDB) automatically recognizes missing debug information, resolves the package name and provides concrete advice on how to get the package. Prerequisites The application or library you want to debug must be installed on the system. GDB and the debuginfo-install tool must be installed on the system. For details, see Setting up to debug applications . 
Repositories providing debuginfo and debugsource packages must be configured and enabled on the system. For details, see Enabling debug and source repositories . Procedure Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run. Exit GDB: type q and confirm with Enter . Run the command suggested by GDB to install the required debuginfo packages: The dnf package management tool provides a summary of the changes, asks for confirmation, and after you confirm, downloads and installs all the necessary files. If GDB is not able to suggest the debuginfo package, follow the procedure described in Getting debuginfo packages for an application or library manually . Additional resources How can I download or install debuginfo packages for RHEL systems? (Red Hat Knowledgebase)
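Putting the pieces of this chapter together, a typical workflow on a registered RHEL 8 system might look like the following sketch; coreutils is only a hypothetical example of a package whose symbols show up as raw addresses, and the repository name assumes the subscription-manager setup described above:

perf top
# Suppose the "Symbol" column shows raw addresses for a user-space binary owned by coreutils.
subscription-manager repos --enable rhel-8-for-$(uname -i)-baseos-debug-rpms
dnf debuginfo-install coreutils
perf top
# With the debuginfo installed, the same samples now resolve to function names.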
[ "perf top", "Samples: 8K of event 'cycles', 2000 Hz, Event count (approx.): 4579432780 lost: 0/0 drop: 0/0 Overhead Shared Object Symbol 2.20% [kernel] [k] do_syscall_64 2.17% [kernel] [k] module_get_kallsym 1.49% [kernel] [k] copy_user_enhanced_fast_string 1.37% libpthread-2.29.so [.] pthread_mutex_lock 1.31% [unknown] [.] 0000000000000000 1.07% [kernel] [k] psi_task_change 1.04% [kernel] [k] switch_mm_irqs_off 0.94% [kernel] [k] fget 0.74% [kernel] [k] entry_SYSCALL_64 0.69% [kernel] [k] syscall_return_via_sysret 0.69% libxul.so [.] 0x000000000113f9b0 0.67% [kernel] [k] kallsyms_expand_symbol.constprop.0 0.65% firefox [.] moz_xmalloc 0.65% libpthread-2.29.so [.] __pthread_mutex_unlock_usercnt 0.60% firefox [.] free 0.60% libxul.so [.] 0x000000000241d1cd 0.60% [kernel] [k] do_sys_poll 0.58% [kernel] [k] menu_select 0.56% [kernel] [k] _raw_spin_lock_irqsave 0.55% perf [.] 0x00000000002ae0f3", "subscription-manager repos --enable rhel-8-for-USD(uname -i)-baseos-debug-rpms subscription-manager repos --enable rhel-8-for-USD(uname -i)-baseos-source-rpms subscription-manager repos --enable rhel-8-for-USD(uname -i)-appstream-debug-rpms subscription-manager repos --enable rhel-8-for-USD(uname -i)-appstream-source-rpms", "gdb -q /bin/ls Reading symbols from /bin/ls...Reading symbols from .gnu_debugdata for /usr/bin/ls...(no debugging symbols found)...done. (no debugging symbols found)...done. Missing separate debuginfos, use: dnf debuginfo-install coreutils-8.30-6.el8.x86_64 (gdb)", "(gdb) q", "dnf debuginfo-install coreutils-8.30-6.el8.x86_64" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/profiling-cpu-usage-in-real-time-with-top_monitoring-and-managing-system-status-and-performance
Chapter 3. Managing user accounts in the web console
Chapter 3. Managing user accounts in the web console The web console offers an interface for adding, editing, and removing system user accounts. After reading this section, you will know: Where the existing accounts come from. How to add new accounts. How to set password expiration. How and when to terminate user sessions. 3.1. Prerequisites Being logged in to the web console with an account that has administrator permissions assigned. For details, see Logging in to the RHEL web console . 3.2. System user accounts managed in the web console With the user accounts displayed in the web console, you can: Authenticate users when they access the system. Set their access rights to the system. The web console displays all user accounts located in the system. Therefore, you can see at least one user account just after the first login to the web console. Once you are logged in to the web console, you can: Create new user accounts. Change their parameters. Lock accounts. Terminate user sessions. You can find the account management in the Accounts settings. 3.3. Adding new accounts in the web console The following describes adding system user accounts in the web console and setting administration rights to the accounts. Procedure Log in to the RHEL web console. Click Accounts . Click Create New Account . In the Full Name field, enter the full name of the user. The RHEL web console automatically suggests a user name from the full name and fills it in the User Name field. If you do not want to use the original naming convention consisting of the first letter of the first name and the whole surname, update the suggestion. In the Password/Confirm fields, enter the password and retype it for verification that your password is correct. The color bar placed below the fields shows you the security level of the entered password, which does not allow you to create a user with a weak password. Click Create to save the settings and close the dialog box. Select the newly created account. Select Server Administrator in the Roles item. Now you can see the new account in the Accounts settings and you can use the credentials to connect to the system. 3.4. Enforcing password expiration in the web console By default, user accounts have passwords set to never expire. To enforce password expiration, as administrator, set system passwords to expire after a defined number of days. When the password expires, the next login attempt will prompt for a password change. Procedure Log in to the RHEL web console interface. Click Accounts . Select the user account for which to enforce password expiration. In the user account settings, click Never expire password . In the Password Expiration dialog box, select Require password change every ... days and enter a positive whole number representing the number of days after which the password expires. Click Change . To verify the settings, open the account settings. The web console displays a link with the date of expiration. 3.5. Terminating user sessions in the web console A user creates user sessions when logging into the system. Terminating user sessions means logging the user out of the system. It can be helpful if you need to perform administrative tasks sensitive to configuration changes, for example, system upgrades. In each user account in the RHEL web console, you can terminate all sessions for the account except for the web console session you are currently using. This prevents you from cutting yourself off from the system. Procedure Log in to the RHEL web console. Click Accounts . 
Click the user account for which you want to terminate the session. Click the Terminate Session button. If the Terminate Session button is inactive, the user is not logged in to the system. The RHEL web console terminates the sessions.
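For orientation, the console actions described in this chapter correspond roughly to the following standard command-line tools. This is a hedged sketch rather than an exact equivalence: the user name, full name, and 30-day period are made-up examples, and the mapping of the Server Administrator role to the wheel group is an assumption about the default configuration.

# Create the account and set its password (as the Create New Account dialog does).
useradd -c "Jane Doe" jdoe
passwd jdoe
# Grant administrator rights; the Server Administrator role typically means wheel membership.
usermod -aG wheel jdoe
# Require a password change every 30 days (the Password Expiration dialog).
chage -M 30 jdoe
# Review the expiration settings and end all of the user's sessions.
chage -l jdoe
loginctl terminate-user jdoe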
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/managing_systems_using_the_rhel_7_web_console/managing-user-accounts-in-the-web-console_system-management-using-the-rhel-7-web-console
7.2. Common Replication Scenarios
7.2. Common Replication Scenarios Decide how the updates flow from server to server and how the servers interact when propagating updates. There are four basic scenarios and a few strategies for deciding the method appropriate for the environment. These basic scenarios can be combined to build the replication topology that best suits the network environment. Section 7.2.1, "Single-Supplier Replication" Section 7.2.2, "Multi-Supplier Replication" Section 7.2.3, "Cascading Replication" Section 7.2.4, "Mixed Environments" 7.2.1. Single-Supplier Replication In the most basic replication configuration, a supplier server copies a replica directly to one or more consumer servers. In this configuration, all directory modifications occur on the read-write replica on the supplier server, and the consumer servers contain read-only replicas of the data. The supplier server must perform all modifications to the read-write replicas stored on the consumer servers. This is illustrated below. Figure 7.1. Single-Supplier Replication The supplier server can replicate a read-write replica to several consumer servers. The total number of consumer servers that a single supplier server can manage depends on the speed of the networks and the total number of entries that are modified on a daily basis. However, a supplier server is capable of maintaining several consumer servers. 7.2.2. Multi-Supplier Replication In a multi-supplier replication environment, main copies of the same information can exist on multiple servers. This means that data can be updated simultaneously in different locations. The changes that occur on each server are replicated to the other servers. This means that each server functions as both a supplier and a consumer. Note Red Hat Directory Server supports a maximum of 20 supplier servers in any replication environment, as well as an unlimited number of hub suppliers. The number of consumer servers that hold the read-only replicas is unlimited. When the same data is modified on multiple servers, there is a conflict resolution procedure to determine which change is kept. The Directory Server considers the valid change to be the most recent one. Multiple servers can have main copies of the same data, but, within the scope of a single replication agreement, there is only one supplier server and one consumer. Consequently, to create a multi-supplier environment between two supplier servers that share responsibility for the same data, create more than one replication agreement. Figure 7.2. Simplified Multi-Supplier Replication Configuration In Figure 7.2, "Simplified Multi-Supplier Replication Configuration" , supplier A and supplier B each hold a read-write replica of the same data. Figure 7.3, "Replication Traffic in a Simple Multi-Supplier Environment" illustrates the replication traffic with two suppliers (read-write replicas in the illustration), and two consumers (read-only replicas in the illustration). The consumers can be updated by both suppliers. The supplier servers ensure that the changes do not collide. Figure 7.3. Replication Traffic in a Simple Multi-Supplier Environment Replication in Directory Server can support as many as 20 suppliers, which all share responsibility for the same data. Using that many suppliers requires creating a range of replication agreements. (Also remember that in multi-supplier replication, each of the suppliers can be configured in different topologies - meaning there can be 20 different directory trees and even schema differences.
There are many variables that have a direct impact on the topology selection.) In multi-supplier replication, the suppliers can send updates to all other suppliers or to some subset of other suppliers. Sending updates to all other suppliers means that changes are propagated faster and the overall scenario has much better failure tolerance. However, it also increases the complexity of configuring suppliers and introduces high network demand and high server demand. Sending updates to a subset of suppliers is much simpler to configure and reduces the network and server loads, but there is a risk that data could be lost if there were multiple server failures. Figure 7.4, "Multi-Supplier Replication Configuration A" illustrates a fully connected mesh topology where four supplier servers feed data to the other three supplier servers (which also function as consumers). A total of twelve replication agreements exist between the four supplier servers. Figure 7.4. Multi-Supplier Replication Configuration A Figure 7.5, "Multi-Supplier Replication Configuration B" illustrates a topology where each supplier server feeds data to two other supplier servers (which also function as consumers). Only eight replication agreements exist between the four supplier servers, compared to the twelve agreements shown for the topology in Figure 7.4, "Multi-Supplier Replication Configuration A" . This topology is beneficial where the possibility of two or more servers failing at the same time is negligible. Those two examples are simplified multi-supplier scenarios. Since Red Hat Directory Server can have as many as 20 suppliers and an unlimited number of hub suppliers in a single multi-supplier environment, the replication topology can become much more complex. For example, Figure 7.4, "Multi-Supplier Replication Configuration A" has 12 replication agreements (four suppliers with three agreements each). If there are 20 suppliers, then there are 380 replication agreements (20 servers with 19 agreements each). When planning multi-supplier replication, consider: How many suppliers there will be What their geographic locations are The path the suppliers will use to update servers in other locations The topologies, directory trees, and schemas of the different suppliers The network quality The server load and performance The update interval required for directory data 7.2.3. Cascading Replication In a cascading replication scenario, a hub supplier receives updates from a supplier server and replays those updates on consumer servers. The hub supplier is a hybrid; it holds a read-only replica, like a typical consumer server, and it also maintains a changelog like a typical supplier server. Hub suppliers forward supplier data as they receive it from the original suppliers. Similarly, when a hub supplier receives an update request from a directory client, it refers the client to the supplier server. Cascading replication is useful if some of the network connections between various locations in the organization are better than others. For example, Example Corp. keeps the main copy of its directory data in Minneapolis, and the consumer servers in New York and Chicago. The network connection between Minneapolis and New York is very good, but the connection between Minneapolis and Chicago is poor. Since the network between New York and Chicago is fair, Example administrators use cascading replication to move directory data from Minneapolis to New York to Chicago. Figure 7.6.
Cascading Replication Scenario Figure 7.7, "Replication Traffic and Changelogs in Cascading Replication" illustrates the same scenario from a different perspective, which shows how the replicas are configured on each server (read-write or read-only), and which servers maintain a changelog. Figure 7.7. Replication Traffic and Changelogs in Cascading Replication 7.2.4. Mixed Environments Any of the replication scenarios can be combined to suit the needs of the network and directory environment. One common combination is to use a multi-supplier configuration with a cascading configuration. Figure 7.8. Combined Multi-Supplier and Cascading Replication
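The replication agreement counts quoted for the multi-supplier topologies above (12 agreements for four fully meshed suppliers, 380 for 20) follow directly from the mesh arithmetic: in a fully connected topology each supplier holds one agreement per other supplier, so N suppliers need N x (N - 1) agreements. A minimal sketch of that calculation in shell:
# Replication agreements required for a fully connected mesh of N suppliers
for n in 2 4 20; do
  echo "suppliers=$n agreements=$(( n * (n - 1) ))"
done
# Prints 2, 12, and 380, matching the figures discussed above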
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_the_Replication_Process-Common_Replication_Scenarios
5.4.16.8.2. The warn RAID Fault Policy
5.4.16.8.2. The warn RAID Fault Policy In the following example, the raid_fault_policy field has been set to warn in the lvm.conf file. The RAID logical volume is laid out as follows. If the /dev/sdh device fails, the system log will display error messages. In this case, however, LVM will not automatically attempt to repair the RAID device by replacing one of the images. Instead, if the device has failed you can replace the device with the --repair argument of the lvconvert command, as shown below. Note that even though the failed device has been replaced, the display still indicates that LVM could not find the failed device. This is because, although the failed device has been removed from the RAID logical volume, the failed device has not yet been removed from the volume group. To remove the failed device from the volume group, you can execute vgreduce --removemissing VG . If the device failure is a transient failure or you are able to repair the device that failed, as of Red Hat Enterprise Linux release 6.5 you can initiate recovery of the failed device with the --refresh option of the lvchange command. Previously it was necessary to deactivate and then activate the logical volume. The following command refreshes a logical volume.
[ "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdh1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sdh1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvconvert --repair my_vg/my_lv /dev/sdh1: read failed after 0 of 2048 at 250994294784: Input/output error /dev/sdh1: read failed after 0 of 2048 at 250994376704: Input/output error /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error /dev/sdh1: read failed after 0 of 2048 at 4096: Input/output error Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF. Attempt to replace failed RAID images (requires full device resync)? [y/n]: y lvs -a -o name,copy_percent,devices my_vg Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF. LV Copy% Devices my_lv 64.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvchange --refresh my_vg/my_lv" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/raid-warn-faultpolicy
1.4. Image Builder system requirements
1.4. Image Builder system requirements The lorax tool underlying Image Builder performs a number of potentially insecure and unsafe actions while creating the system images. For this reason, use a virtual machine to run Image Builder. The environment where Image Builder runs must meet the requirements listed in the following table. Table 1.2. Image Builder system requirements Parameter Minimal Required Value System type A dedicated virtual machine Processor 2 cores Memory 4 GiB Disk space 20 GiB Access privileges Administrator level (root) Network Connectivity to Internet Note Creating images on a virtual machine directly installed on UEFI systems is not supported.
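A quick way to confirm that a candidate virtual machine meets the minimums in Table 1.2 is to inspect it from a shell. The commands below are a hedged sketch; the filesystem you check for free space should be the one that will hold the compose artifacts, which depends on your storage layout.
nproc                                                 # processor: expect at least 2 cores
free -g | awk '/^Mem:/ {print $2 " GiB total RAM"}'   # memory: expect at least 4 GiB
df -h /                                               # disk: expect at least 20 GiB free on the relevant filesystem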
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/sect-documentation-image_builder-test_chapter-test_section_4
Chapter 2. Red Hat Ansible Automation Platform 2.3
Chapter 2. Red Hat Ansible Automation Platform 2.3 This release includes a number of enhancements, additions, and fixes that have been implemented in the Red Hat Ansible Automation Platform. 2.1. Ansible Automation Platform 2.3 Red Hat Ansible Automation Platform simplifies the development and operation of automation workloads for managing enterprise application infrastructure lifecycles. It works across multiple IT domains including operations, networking, security, and development, as well as across diverse hybrid environments. Simple to adopt, use, and understand, Red Hat Ansible Automation Platform provides the tools needed to rapidly implement enterprise-wide automation, no matter where you are in your automation journey. 2.1.1. Enhancements Fixed a race condition where the UI would not properly populate upon launch. Fixed an issue where puppet managed files are not handled properly by the set-up script. Fixed an issue where self-signed certs were being recreated on every run of setup.sh . Added an option for execution environment images to be pulled from Hub only. Fixed an issue where the bundled installer was failing when DNF was trying to fetch gpg keys remotely. Upgraded pulp_installer to 3.20.5+ . Implemented sidecar documentation to allow easy documentation of filter and test plugins, as well as documentation for non-python modules without requiring a .py file for documentation. Migrated display for stdout and stderr from the display class to proxy over the queue for dispatch in the main process to improve reliability of displaying information to the terminal. Moved handler processing into the configured strategy, so that handlers operate within the configured strategy, instead of using a non-configurable linear like execution of handlers. Updated internal FieldAttribute classes to act as Python data descriptors to reduce code complexity and use of metaclasses. Fixed an issue when ansible-runner was not properly removed from hybrid nodes when upgrading to Ansible Automation Platform 2.2. 2.1.2. Technology preview features Some features in this release are currently classified as Technology Preview. Technology Preview features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. Note that Red Hat does not recommend using technology preview features for production, and Red Hat SLAs do not support technology preview functions. The following are the technology preview features: Added the ability to use external execution nodes when running Ansible Automation Platform as a managed service in Azure. Added the ability to use external execution nodes when running the Ansible Automation Platform Operator in Openshift. Other noteworthy developer tooling updates include the following: Added new pre-flight checks to ansible-core CLI start up to enforce assumptions made about handling display and text encoding. Added official support for Python 3.11 to ansible-core CLIs and target node execution. Dropped Python 3.8 support for ansible-core CLIs and controller side code. Added lint profile support for content pipelines. Additional resources For the most recent list of technology preview features, see Ansible Automation Platform - Preview Features . For more information about support for technology preview features, see Red Hat Technology Preview Features Support Scope . For information regarding execution node enhancements on Openshift deployments, see Managing Capacity With Instances . 2.2. 
Automation Hub Automation Hub allows you to discover and utilize new certified automation content from Red Hat Ansible and Certified Partners. On Ansible Automation Hub, you can discover and manage Ansible Collections, which is supported automation content developed by both partners and Red Hat for use cases such as cloud automation, network automation, security automation, and more. 2.2.1. Enhancements Adopted the new pulp RBAC system. Added a configurable automatic logout time. Set a minimum password length for internal users. Added the capability to configure LDAP with private automation hub. Added visibility for execution environments created by ansible-builder in the automation hub UI. Fixed an error when navigating to a non-existent group URL. Fixed an issue where roles could not be created through the UI. Fixed an issue with Hub installation during the collect static content task. Fixed an issue when a 500 error would populate when listing roles on a group. Fixed an issue when imports contained more than 100 namespaces. Fixed an issue where filters were not working correctly when searching for execution environments. Fixed an issue where certified content would display incorrectly in private automation hub when synced. Fixed an issue where group_admin users could not view groups. Fixed an issue where pressing the enter key would reload a form instead of submitting. Fixed an issue with broken links on community collection dependencies. Fixed an issue with roles not showing up on a group access page. Fixed some issues with how roles were displayed on the groups page. Updated so that only admins can change the superuser status on users. Updated so that the screen no longer hangs when attempting to edit a group with unknown permissions. Updated the installer to use a custom repo that automation hub will add to show validated content. Updated the pulp_ansible package to 0.15.x. Updated the pulp_container package to 2.14. Upgraded pulpcore to 3.21.x. Fixed problem in which the released date for collections in private automation hub was the same as the released date for that collection and its versions in the console.redhat.com automation hub. Deprecated the pulp_firewalld_zone parameter, replacing it with the automationhub_firewalld_zone parameter. 2.3. Automation Controller Automation controller replaces Ansible Tower. Automation controller introduces a distributed, modular architecture with a decoupled control and execution plane. The name change reflects these enhancements and the overall position within the Ansible Automation Platform suite. Automation controller provides a standardized way to define, operate and delegate automation across the enterprise. It also introduces new, exciting technologies and an enhanced architecture that enables automation teams to scale and deliver automation rapidly to meet ever-growing business demand. 2.3.1. Enhancements Fixed the Ansible Galaxy Credential to no longer be automatically created or added to organizations after removing it manually. Fixed an issue where warnings were being unnecessarily displayed. Included updates and enhancements to task manager for scaling jobs, mesh, and cluster size to improve performance. Included reaper and periodic task improvements for scaling the mesh and jobs, which improve performance. Fixed an issue with webhook notifications not triggering for some job template runs. Fixed a race condition where the UI would not properly populate upon launch. 
Added UI support for filtering single select survey question answers when configuring a job. Fixed an issue where execution environments were failing to be pushed locally during installation. Fixed an issue where inventory could not be selected in workflows even if the user has admin permissions on the workflow. Introduced a content signing utility through the Command Line Interface called ansible-sign that provides options for the user to sign and verify whether the project is signed. Added project or playbook signature verification functionality to controller, enabling users to supply a GPG key and add a content signing credential to a project. This automatically enables content signing for said project. See Automation Controller Release Notes for 4.x for a full list of new features and enhancements. 2.4. Automation Platform Operator Ansible Automation Platform Operator provides cloud-native, push-button deployment of new Ansible Automation Platform instances in your OpenShift environment. 2.4.1. Enhancements Fixed an issue where the pulp resource manager was not removed on upgrade from Automation Platform Operator 2.1 to Automation Platform Operator 2.2. 2.5. Ansible Automation Platform Documentation The documentation set for Red Hat Ansible Automation Platform 2.3 has been refactored to improve the experience for our customers and the Ansible community. These changes will make it easier for you to install, migrate, backup, recover and implement new features. 2.5.1. Enhancements The Red Hat Ansible Automation Platform Installation Guide has been restructured into three separate documents to include the following: Red Hat Ansible Automation Platform Planning Guide Use this guide to understand requirements, options, and recommendations for installing Ansible Automation Platform. Red Hat Ansible Automation Platform Installation Guide Use this guide to learn how to install Ansible Automation Platform based on supported installation scenarios. Red Hat Ansible Automation Platform Operations Guide Use this guide for guidance on post installation activities for the Ansible Automation Platform. The Red Hat Ansible Automation Platform Operator Installation Guide has been renamed to Deploying the Red Hat Ansible Automation Platform operator on OpenShift Container Platform . The document has also been updated to include the following: Migration procedures, so you can migrate your existing Ansible Automation Platform deployment to Ansible Automation Platform Operator. Upgrade procedures so you can upgrade to the latest available version of the Ansible Automation Platform Operator. The Red Hat Ansible Automation Platform Operator Backup and Recovery Guide has been added to the library to help you backup and recover installations of the Red Hat Ansible Automation Platform operator on OpenShift Container Platform. The Ansible Builder Guide has been renamed to Creating and Consuming Execution Environments to better reflect the information provided in the guide.
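To illustrate the ansible-sign content signing utility mentioned in the automation controller enhancements above, the following is a minimal sketch of signing and then verifying a project checkout. The subcommand names reflect the upstream ansible-sign CLI; confirm the exact options against ansible-sign --help for your installed version, and note that verification requires the signer's public key to be available in a GPG keyring.
# Sign the project in the current directory with your default GPG key
ansible-sign project gpg-sign .
# Verify the signature before running the project content
ansible-sign project gpg-verify .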
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_release_notes/anchor-aap_2.3-release
Chapter 23. Viewing and adding comments to a task
Chapter 23. Viewing and adding comments to a task You can add comments to a task and view its existing comments in Business Central. Procedure In Business Central, go to Menu Track Task Inbox . On the Task Inbox page, click the task to open it. On the task page, click the Work tab or the Comments tab. In the Comment field, enter the task-related comment and click the Add Comment icon. All task-related comments are displayed in tabular form in both the Work and Comments tabs. Note To select or clear the Show task comments at work tab check box, go to the Business Central home page, click the Settings icon and select the Process Administration option. Only users with the admin role have access to enable or disable this feature.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_process_services_in_red_hat_process_automation_manager/interacting-with-processes-viewing-adding-comments-proc
Chapter 4. Network considerations
Chapter 4. Network considerations Review the strategies for redirecting your application network traffic after migration. 4.1. DNS considerations The DNS domain of the target cluster is different from the domain of the source cluster. By default, applications get FQDNs of the target cluster after migration. To preserve the source DNS domain of migrated applications, select one of the two options described below. 4.1.1. Isolating the DNS domain of the target cluster from the clients You can allow the clients' requests sent to the DNS domain of the source cluster to reach the DNS domain of the target cluster without exposing the target cluster to the clients. Procedure Place an exterior network component, such as an application load balancer or a reverse proxy, between the clients and the target cluster. Update the application FQDN on the source cluster in the DNS server to return the IP address of the exterior network component. Configure the network component to send requests received for the application in the source domain to the load balancer in the target cluster domain. Create a wildcard DNS record for the *.apps.source.example.com domain that points to the IP address of the load balancer of the source cluster. Create a DNS record for each application that points to the IP address of the exterior network component in front of the target cluster. A specific DNS record has higher priority than a wildcard record, so no conflict arises when the application FQDN is resolved. Note The exterior network component must terminate all secure TLS connections. If the connections pass through to the target cluster load balancer, the FQDN of the target application is exposed to the client and certificate errors occur. The applications must not return links referencing the target cluster domain to the clients. Otherwise, parts of the application might not load or work properly. 4.1.2. Setting up the target cluster to accept the source DNS domain You can set up the target cluster to accept requests for a migrated application in the DNS domain of the source cluster. Procedure For both non-secure HTTP access and secure HTTPS access, perform the following steps: Create a route in the target cluster's project that is configured to accept requests addressed to the application's FQDN in the source cluster: USD oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> \ -n <app1-namespace> With this new route in place, the server accepts any request for that FQDN and sends it to the corresponding application pods. In addition, when you migrate the application, another route is created in the target cluster domain. Requests reach the migrated application using either of these hostnames. Create a DNS record with your DNS provider that points the application's FQDN in the source cluster to the IP address of the default load balancer of the target cluster. This redirects traffic away from your source cluster to your target cluster. The FQDN of the application resolves to the load balancer of the target cluster. The default Ingress Controller router accepts requests for that FQDN because a route for that hostname is exposed. For secure HTTPS access, perform the following additional step: Replace the x509 certificate of the default Ingress Controller created during the installation process with a custom certificate. Configure this certificate to include the wildcard DNS domains for both the source and target clusters in the subjectAltName field.
The new certificate is valid for securing connections made using either DNS domain. Additional resources See Replacing the default ingress certificate for more information. 4.2. Network traffic redirection strategies After a successful migration, you must redirect network traffic of your stateless applications from the source cluster to the target cluster. The strategies for redirecting network traffic are based on the following assumptions: The application pods are running on both the source and target clusters. Each application has a route that contains the source cluster hostname. The route with the source cluster hostname contains a CA certificate. For HTTPS, the target router CA certificate contains a Subject Alternative Name for the wildcard DNS record of the source cluster. Consider the following strategies and select the one that meets your objectives. Redirecting all network traffic for all applications at the same time Change the wildcard DNS record of the source cluster to point to the target cluster router's virtual IP address (VIP). This strategy is suitable for simple applications or small migrations. Redirecting network traffic for individual applications Create a DNS record for each application with the source cluster hostname pointing to the target cluster router's VIP. This DNS record takes precedence over the source cluster wildcard DNS record. Redirecting network traffic gradually for individual applications Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route a percentage of the traffic to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Gradually increase the percentage of traffic that you route to the target cluster router's VIP until all the network traffic is redirected. User-based redirection of traffic for individual applications Using this strategy, you can filter TCP/IP headers of user requests to redirect network traffic for predefined groups of users. This allows you to test the redirection process on specific populations of users before redirecting the entire network traffic. Create a proxy that can direct traffic to both the source cluster router's VIP and the target cluster router's VIP, for each application. Create a DNS record for each application with the source cluster hostname pointing to the proxy. Configure the proxy entry for the application to route traffic matching a given header pattern, such as test customers , to the target cluster router's VIP and the rest of the traffic to the source cluster router's VIP. Redirect traffic to the target cluster router's VIP in stages until all the traffic is on the target cluster router's VIP.
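Whichever redirection strategy you choose, it is worth confirming that the source-cluster hostname actually reaches the target cluster before moving real traffic. The following is a hedged sketch using the example hostname from the route above; the load balancer IP address is a placeholder that you would substitute for your environment.
# Check what the source-cluster FQDN currently resolves to
dig +short app1.apps.source.example.com
# Exercise the target router under the source hostname, pinning resolution to the
# target load balancer address in case the DNS change has not propagated yet
curl -I --resolve app1.apps.source.example.com:443:<target-lb-ip> \
    https://app1.apps.source.example.com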
[ "oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/migrating_from_version_3_to_4/planning-considerations-3-4
Chapter 3. Installing
Chapter 3. Installing 3.1. Preparing your cluster for OpenShift Virtualization Review this section before you install OpenShift Virtualization to ensure that your cluster meets the requirements. 3.1.1. Supported platforms You can use the following platforms with OpenShift Virtualization: Amazon Web Services bare metal instances. 3.1.1.1. OpenShift Virtualization on Red Hat OpenShift Service on AWS You can run OpenShift Virtualization on a Red Hat OpenShift Service on AWS (ROSA) Classic cluster. Before you set up your cluster, review the following summary of supported features and limitations: Installing You can install the cluster by using installer-provisioned infrastructure, ensuring that you specify bare-metal instance types for the worker nodes. For example, you can use the c5n.metal type value for a machine based on x86_64 architecture. For more information, see the Red Hat OpenShift Service on AWS documentation about installing on AWS. Accessing virtual machines (VMs) There is no change to how you access VMs by using the virtctl CLI tool or the Red Hat OpenShift Service on AWS web console. You can expose VMs by using a NodePort or LoadBalancer service. The load balancer approach is preferable because Red Hat OpenShift Service on AWS automatically creates the load balancer in AWS and manages its lifecycle. A security group is also created for the load balancer, and you can use annotations to attach existing security groups. When you remove the service, Red Hat OpenShift Service on AWS removes the load balancer and its associated resources. Networking If your application requires a flat layer 2 network or control over the IP pool, consider using OVN-Kubernetes secondary overlay networks. Storage You can use any storage solution that is certified by the storage vendor to work with the underlying platform. Important AWS bare-metal and ROSA clusters might have different supported storage solutions. Ensure that you confirm support with your storage vendor. Using Amazon Elastic File System (EFS) or Amazon Elastic Block Store (EBS) with OpenShift Virtualization might cause performance and functionality limitations as shown in the following table: Table 3.1. EFS and EBS performance and functionality limitations Feature EBS volume EFS volume Shared storage solutions gp2 gp3 io2 VM live migration Not available Not available Available Available Available Fast VM creation by using cloning Available Not available Available VM backup and restore by using snapshots Available Not available Available Consider using CSI storage, which supports ReadWriteMany (RWX), cloning, and snapshots to enable live migration, fast VM creation, and VM snapshots capabilities. Additional resources Connecting a virtual machine to an OVN-Kubernetes secondary network Exposing a virtual machine by using a service 3.1.2. Hardware and operating system requirements Review the following hardware and operating system requirements for OpenShift Virtualization. 3.1.2.1. CPU requirements Supported by Red Hat Enterprise Linux (RHEL) 9. See Red Hat Ecosystem Catalog for supported CPUs. Note If your worker nodes have different CPUs, live migration failures might occur because different CPUs have different capabilities. You can mitigate this issue by ensuring that your worker nodes have CPUs with the appropriate capacity and by configuring node affinity rules for your virtual machines. See Configuring a required node affinity rule for details. Support for AMD and Intel 64-bit architectures (x86-64-v2). 
Support for Intel 64 or AMD64 CPU extensions. Intel VT or AMD-V hardware virtualization extensions enabled. NX (no execute) flag enabled. 3.1.2.2. Operating system requirements Red Hat Enterprise Linux CoreOS (RHCOS) installed on worker nodes. 3.1.2.3. Storage requirements Supported by Red Hat OpenShift Service on AWS. If the storage provisioner supports snapshots, you must associate a VolumeSnapshotClass object with the default storage class. 3.1.2.3.1. About volume and access modes for virtual machine disks If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. For best results, use the ReadWriteMany (RWX) access mode and the Block volume mode. This is important for the following reasons: ReadWriteMany (RWX) access mode is required for live migration. The Block volume mode performs significantly better than the Filesystem volume mode. This is because the Filesystem volume mode uses more storage layers, including a file system layer and a disk image file. These layers are not necessary for VM disk storage. Important You cannot live migrate virtual machines with the following configurations: Storage volume with ReadWriteOnce (RWO) access mode Passthrough features such as GPUs Set the evictionStrategy field to None for these virtual machines. The None strategy powers down VMs during node reboots. 3.1.3. Live migration requirements Shared storage with ReadWriteMany (RWX) access mode. Sufficient RAM and network bandwidth. Note You must ensure that there is enough memory request capacity in the cluster to support node drains that result in live migrations. You can determine the approximate required spare memory by using the following calculation: The default number of migrations that can run in parallel in the cluster is 5. If the virtual machine uses a host model CPU, the nodes must support the virtual machine's host model CPU. A dedicated Multus network for live migration is highly recommended. A dedicated network minimizes the effects of network saturation on tenant workloads during migration. 3.1.4. Physical resource overhead requirements OpenShift Virtualization is an add-on to Red Hat OpenShift Service on AWS and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the Red Hat OpenShift Service on AWS requirements. Oversubscribing the physical resources in a cluster can affect performance. Important The numbers noted in this documentation are based on Red Hat's test methodology and setup. These numbers can vary based on your own individual setup and environments. Memory overhead Calculate the memory overhead values for OpenShift Virtualization by using the equations below. Cluster memory overhead Additionally, OpenShift Virtualization environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes. Virtual machine memory overhead 1 Required for the processes that run in the virt-launcher pod. 2 Number of virtual CPUs requested by the virtual machine. 3 Number of virtual graphics cards requested by the virtual machine. 4 Additional memory overhead: If your environment includes a Single Root I/O Virtualization (SR-IOV) network device or a Graphics Processing Unit (GPU), allocate 1 GiB additional memory overhead for each device. 
If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB. If Trusted Platform Module (TPM) is enabled, add 53 MiB. CPU overhead Calculate the cluster processor overhead requirements for OpenShift Virtualization by using the equation below. The CPU overhead per virtual machine depends on your individual setup. Cluster CPU overhead OpenShift Virtualization increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes. Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for OpenShift Virtualization management workloads in addition to the CPUs required for virtual machine workloads. Virtual machine CPU overhead If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires. Storage overhead Use the guidelines below to estimate storage overhead requirements for your OpenShift Virtualization environment. Cluster storage overhead 10 GiB is the estimated on-disk storage impact for each node in the cluster when you install OpenShift Virtualization. Virtual machine storage overhead Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. OpenShift Virtualization does not currently allocate any additional ephemeral storage for the running container itself. Example As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores. Additional resources Glossary of common terms for Red Hat OpenShift Service on AWS storage 3.2. Installing OpenShift Virtualization Install OpenShift Virtualization to add virtualization functionality to your Red Hat OpenShift Service on AWS cluster. 3.2.1. Installing the OpenShift Virtualization Operator Install the OpenShift Virtualization Operator by using the Red Hat OpenShift Service on AWS web console or the command line. 3.2.1.1. Installing the OpenShift Virtualization Operator by using the web console You can deploy the OpenShift Virtualization Operator by using the Red Hat OpenShift Service on AWS web console. Prerequisites Install Red Hat OpenShift Service on AWS 4 on your cluster. Log in to the Red Hat OpenShift Service on AWS web console as a user with cluster-admin permissions. Create a machine pool based on a bare metal compute node instance type. For more information, see "Creating a machine pool" in the Additional resources of this section. Procedure From the Administrator perspective, click Operators OperatorHub . In the Filter by keyword field, type Virtualization . Select the OpenShift Virtualization Operator tile with the Red Hat source label. Read the information about the Operator and click Install . On the Install Operator page: Select stable from the list of available Update Channel options. 
This ensures that you install the version of OpenShift Virtualization that is compatible with your Red Hat OpenShift Service on AWS version. For Installed Namespace , ensure that the Operator recommended namespace option is selected. This installs the Operator in the mandatory openshift-cnv namespace, which is automatically created if it does not exist. Warning Attempting to install the OpenShift Virtualization Operator in a namespace other than openshift-cnv causes the installation to fail. For Approval Strategy , it is highly recommended that you select Automatic , which is the default value, so that OpenShift Virtualization automatically updates when a new version is available in the stable update channel. While it is possible to select the Manual approval strategy, this is inadvisable because of the high risk that it presents to the supportability and functionality of your cluster. Only select Manual if you fully understand these risks and cannot use Automatic . Warning Because OpenShift Virtualization is only supported when used with the corresponding Red Hat OpenShift Service on AWS version, missing OpenShift Virtualization updates can cause your cluster to become unsupported. Click Install to make the Operator available to the openshift-cnv namespace. When the Operator installs successfully, click Create HyperConverged . Optional: Configure Infra and Workloads node placement options for OpenShift Virtualization components. Click Create to launch OpenShift Virtualization. Verification Navigate to the Workloads Pods page and monitor the OpenShift Virtualization pods until they are all Running . After all the pods display the Running state, you can use OpenShift Virtualization. Additional resources Creating a machine pool 3.2.1.2. Installing the OpenShift Virtualization Operator by using the command line Subscribe to the OpenShift Virtualization catalog and install the OpenShift Virtualization Operator by applying manifests to your cluster. 3.2.1.2.1. Subscribing to the OpenShift Virtualization catalog by using the CLI Before you install OpenShift Virtualization, you must subscribe to the OpenShift Virtualization catalog. Subscribing gives the openshift-cnv namespace access to the OpenShift Virtualization Operators. To subscribe, configure Namespace , OperatorGroup , and Subscription objects by applying a single manifest to your cluster. Prerequisites Install Red Hat OpenShift Service on AWS 4 on your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create the required Namespace , OperatorGroup , and Subscription objects for OpenShift Virtualization by running the following command: USD oc apply -f <file name>.yaml Note You can configure certificate rotation parameters in the YAML file. 3.2.1.2.2. Deploying the OpenShift Virtualization Operator by using the CLI You can deploy the OpenShift Virtualization Operator by using the oc CLI. Prerequisites Subscribe to the OpenShift Virtualization catalog in the openshift-cnv namespace. Log in as a user with cluster-admin privileges. Create a machine pool based on a bare metal compute node instance type. 
Procedure Create a YAML file that contains the following manifest: apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec: Deploy the OpenShift Virtualization Operator by running the following command: USD oc apply -f <file_name>.yaml Verification Ensure that OpenShift Virtualization deployed successfully by watching the PHASE of the cluster service version (CSV) in the openshift-cnv namespace. Run the following command: USD watch oc get csv -n openshift-cnv The following output displays if deployment was successful: Example output NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.18.0 OpenShift Virtualization 4.18.0 Succeeded Additional resources Creating a machine pool 3.2.2. steps The hostpath provisioner is a local storage provisioner designed for OpenShift Virtualization. If you want to configure local storage for virtual machines, you must enable the hostpath provisioner first. 3.3. Uninstalling OpenShift Virtualization You uninstall OpenShift Virtualization by using the web console or the command line interface (CLI) to delete the OpenShift Virtualization workloads, the Operator, and its resources. 3.3.1. Uninstalling OpenShift Virtualization by using the web console You uninstall OpenShift Virtualization by using the web console to perform the following tasks: Delete the HyperConverged CR . Delete the OpenShift Virtualization Operator . Delete the openshift-cnv namespace . Delete the OpenShift Virtualization custom resource definitions (CRDs) . Important You must first delete all virtual machines , and virtual machine instances . You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster. 3.3.1.1. Deleting the HyperConverged custom resource To uninstall OpenShift Virtualization, you first delete the HyperConverged custom resource (CR). Prerequisites You have access to an Red Hat OpenShift Service on AWS cluster using an account with cluster-admin permissions. Procedure Navigate to the Operators Installed Operators page. Select the OpenShift Virtualization Operator. Click the OpenShift Virtualization Deployment tab. Click the Options menu beside kubevirt-hyperconverged and select Delete HyperConverged . Click Delete in the confirmation window. 3.3.1.2. Deleting Operators from a cluster using the web console Cluster administrators can delete installed Operators from a selected namespace by using the web console. Prerequisites You have access to an Red Hat OpenShift Service on AWS cluster web console using an account with dedicated-admin permissions. Procedure Navigate to the Operators Installed Operators page. Scroll or enter a keyword into the Filter by name field to find the Operator that you want to remove. Then, click on it. On the right side of the Operator Details page, select Uninstall Operator from the Actions list. An Uninstall Operator? dialog box is displayed. Select Uninstall to remove the Operator, Operator deployments, and pods. Following this action, the Operator stops running and no longer receives updates. Note This action does not remove resources managed by the Operator, including custom resource definitions (CRDs) and custom resources (CRs). Dashboards and navigation items enabled by the web console and off-cluster resources that continue to run might need manual clean up. To remove these after uninstalling the Operator, you might need to manually delete the Operator CRDs. 3.3.1.3. 
Deleting a namespace using the web console You can delete a namespace by using the Red Hat OpenShift Service on AWS web console. Prerequisites You have access to an Red Hat OpenShift Service on AWS cluster using an account with cluster-admin permissions. Procedure Navigate to Administration Namespaces . Locate the namespace that you want to delete in the list of namespaces. On the far right side of the namespace listing, select Delete Namespace from the Options menu . When the Delete Namespace pane opens, enter the name of the namespace that you want to delete in the field. Click Delete . 3.3.1.4. Deleting OpenShift Virtualization custom resource definitions You can delete the OpenShift Virtualization custom resource definitions (CRDs) by using the web console. Prerequisites You have access to an Red Hat OpenShift Service on AWS cluster using an account with cluster-admin permissions. Procedure Navigate to Administration CustomResourceDefinitions . Select the Label filter and enter operators.coreos.com/kubevirt-hyperconverged.openshift-cnv in the Search field to display the OpenShift Virtualization CRDs. Click the Options menu beside each CRD and select Delete CustomResourceDefinition . 3.3.2. Uninstalling OpenShift Virtualization by using the CLI You can uninstall OpenShift Virtualization by using the OpenShift CLI ( oc ). Prerequisites You have access to an Red Hat OpenShift Service on AWS cluster using an account with cluster-admin permissions. You have installed the OpenShift CLI ( oc ). You have deleted all virtual machines and virtual machine instances. You cannot uninstall OpenShift Virtualization while its workloads remain on the cluster. Procedure Delete the HyperConverged custom resource: USD oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv Delete the OpenShift Virtualization Operator subscription: USD oc delete subscription kubevirt-hyperconverged -n openshift-cnv Delete the OpenShift Virtualization ClusterServiceVersion resource: USD oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Delete the OpenShift Virtualization namespace: USD oc delete namespace openshift-cnv List the OpenShift Virtualization custom resource definitions (CRDs) by running the oc delete crd command with the dry-run option: USD oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Example output Delete the CRDs by running the oc delete crd command without the dry-run option: USD oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv Additional resources Deleting virtual machines Deleting virtual machine instances
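The CLI subscription step earlier in this chapter applies a single manifest that creates the Namespace, OperatorGroup, and Subscription objects but does not reproduce its contents. The following is a sketch of one plausible shape for that manifest, applied through a shell heredoc; the OperatorGroup and Subscription names, the catalog source, and the channel are assumptions that you should confirm against the documentation for your OpenShift Virtualization version.
oc apply -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: stable
EOF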
[ "Product of (Maximum number of nodes that can drain in parallel) and (Highest total VM memory request allocations across nodes)", "Memory overhead per infrastructure node ~ 150 MiB", "Memory overhead per worker node ~ 360 MiB", "Memory overhead per virtual machine ~ (1.002 x requested memory) + 218 MiB \\ 1 + 8 MiB x (number of vCPUs) \\ 2 + 16 MiB x (number of graphics devices) \\ 3 + (additional memory overhead) 4", "CPU overhead for infrastructure nodes ~ 4 cores", "CPU overhead for worker nodes ~ 2 cores + CPU overhead per virtual machine", "Aggregated storage overhead per node ~ 10 GiB", "oc apply -f <file name>.yaml", "apiVersion: hco.kubevirt.io/v1beta1 kind: HyperConverged metadata: name: kubevirt-hyperconverged namespace: openshift-cnv spec:", "oc apply -f <file_name>.yaml", "watch oc get csv -n openshift-cnv", "NAME DISPLAY VERSION REPLACES PHASE kubevirt-hyperconverged-operator.v4.18.0 OpenShift Virtualization 4.18.0 Succeeded", "oc delete HyperConverged kubevirt-hyperconverged -n openshift-cnv", "oc delete subscription kubevirt-hyperconverged -n openshift-cnv", "oc delete csv -n openshift-cnv -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "oc delete namespace openshift-cnv", "oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv", "customresourcedefinition.apiextensions.k8s.io \"cdis.cdi.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hostpathprovisioners.hostpathprovisioner.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"hyperconvergeds.hco.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"kubevirts.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"networkaddonsconfigs.networkaddonsoperator.network.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"ssps.ssp.kubevirt.io\" deleted (dry run) customresourcedefinition.apiextensions.k8s.io \"tektontasks.tektontasks.kubevirt.io\" deleted (dry run)", "oc delete crd -l operators.coreos.com/kubevirt-hyperconverged.openshift-cnv" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/virtualization/installing
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/getting_started_with_red_hat_build_of_quarkus/making-open-source-more-inclusive
Chapter 6. Known issues
Chapter 6. Known issues See Known Issues for JBoss EAP 7.4 to view the list of known issues for this release. 6.1. Changed behaviors for JBoss EAP 7.4 Setting OPENSHIFT_DNS_PING_SERVICE_NAME to an empty value results in a boot error Do not set OPENSHIFT_DNS_PING_SERVICE_NAME to an empty value. A boot error occurs and clustering is disabled. Unpredictable web session expiration Previously, JBoss EAP mistakenly recalculated some web session timeouts, and these sessions did not expire as specified in the web application configuration file (such as web.xml , jboss-web.xml or jboss-all.xml ). JBoss EAP no longer performs this mistaken calculation, so web sessions now expire as specified in the application configuration. Memory leaks in distributed JSF applications when caching managed beans in a WebInjectionContainer JBoss EAP cluster membership changes, such as starting or stopping a server, can cause the events that correspond to a given session to resume on a different cluster member than the one that last handled those events. Specifically, org.jboss.as.web.common.WebInjectionContainer caches references to all managed beans and their references so that it can call ManagedReference.release , which causes a memory leak. This issue affects distributed Jakarta Server Faces (JSF) applications that use the JBoss EAP high-availability (HA) server configuration. References to session-scoped beans can persist after the associated HTTP session expires, if a different cluster member handles that expiration. As a workaround, change the distributable-web subsystem as in the following example: java.lang.NullPointerException error when using ibm-java-1.8 and Bouncy Castle If you are directly or indirectly using the Bouncy Castle provider with IBM JDK 1.8 on JBoss EAP, you might get the following error: To work around this issue, modify your JBoss EAP module.xml structure similarly to that of the WFLY-14688 diff, which you can access in the Additional resources section. Additional resources For more information about working around this issue, see WFLY-14688 diff . For more information about Bouncy Castle cryptography APIs, see bouncycastle.org . Revised on 2024-02-08 08:04:53 UTC
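The distributable-web workaround quoted in the commands for this section can be applied with the JBoss EAP management CLI against a running server. The following is a hedged sketch, assuming a default standalone server on localhost, where EAP_HOME stands for your installation directory.
# Apply the affinity change against a running server, then reload the configuration
$EAP_HOME/bin/jboss-cli.sh --connect \
    --command="/subsystem=distributable-web/infinispan-session-management=default/affinity=local:add"
$EAP_HOME/bin/jboss-cli.sh --connect --command=":reload"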
[ "/subsystem=distributable-web/infinispan-session-management=default/affinity=local:add", "Caused by: java.lang.NullPointerException at org.bouncycastle.jcajce.provider.asymmetric.rsa.BCRSAPrivateKey.getAlgorithm(BCRSAPrivateKey.java:79) at com.ibm.crypto.provider.bf.supportsParameter(Unknown Source) at javax.crypto.Cipher.a(Unknown Source) at javax.crypto.Cipher.init(Unknown Source) at javax.crypto.Cipher.init(Unknown Source) at org.bouncycastle.operator.jcajce.JceAsymmetricKeyUnwrapper.generateUnwrappedKey(JceAsymmetricKeyUnwrapper.java:109) at org.bouncycastle.cms.jcajce.JceKeyTransRecipient.extractSecretKey(JceKeyTransRecipient.java:208) at org.bouncycastle.cms.jcajce.JceKeyTransEnvelopedRecipient.getRecipientOperator(JceKeyTransEnvelopedRecipient.java:26) at org.bouncycastle.cms.KeyTransRecipientInformation.getRecipientOperator(KeyTransRecipientInformation.java:48) at org.bouncycastle.cms.RecipientInformation.getContentStream(RecipientInformation.java:169) at org.bouncycastle.cms.RecipientInformation.getContent(RecipientInformation.java:150) at org.jboss.resteasy.security.smime.EnvelopedInputImpl.getEntity(EnvelopedInputImpl.java:168) ... 76 more" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/7.4.0_release_notes/known-issues_default
Chapter 2. Logging 6.1
Chapter 2. Logging 6.1 2.1. Support Only the configuration options described in this documentation are supported for logging. Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged . An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed . Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. Logging is not: A high scale log collection system Security Information and Event Monitoring (SIEM) compliant A "bring your own" (BYO) log collector configuration Historical or long term log retention or storage A guaranteed log sink Secure storage - audit logs are not stored by default 2.1.1. Supported API custom resource definitions The following table describes the supported Logging APIs. Table 2.1. Logging API support states CustomResourceDefinition (CRD) ApiVersion Support state LokiStack lokistack.loki.grafana.com/v1 Supported from 5.5 RulerConfig rulerconfig.loki.grafana/v1 Supported from 5.7 AlertingRule alertingrule.loki.grafana/v1 Supported from 5.7 RecordingRule recordingrule.loki.grafana/v1 Supported from 5.7 LogFileMetricExporter LogFileMetricExporter.logging.openshift.io/v1alpha1 Supported from 5.8 ClusterLogForwarder clusterlogforwarder.observability.openshift.io/v1 Supported from 6.0 2.1.2. Unsupported configurations You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: The collector configuration file The collector daemonset Explicitly unsupported cases include: Configuring the logging collector using environment variables . You cannot use environment variables to modify the log collector. Configuring how the log collector normalizes logs . You cannot modify default log normalization. 2.1.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. 
If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 2.1.4. Support exception for the Logging UI Plugin Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA. 2.1.5. Collecting logging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. For prompt support, supply diagnostic information for both OpenShift Container Platform and logging. 2.1.5.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. 
For your logging, must-gather collects the following information: Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level Cluster-level resources, including nodes, roles, and role bindings at the cluster level OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. 2.1.5.2. Collecting logging data You can use the oc adm must-gather CLI command to collect information about logging. Procedure To collect logging information with must-gather : Navigate to the directory where you want to store the must-gather information. Run the oc adm must-gather command against the logging image: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408 . Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: USD tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 Attach the compressed file to your support case on the Red Hat Customer Portal . 2.2. Logging 6.1 2.2.1. Logging 6.1.3 Release Notes This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.3 . 2.2.1.1. Bug Fixes Before this update, when using the new 1x.pico size with the Loki Operator, the PodDisruptionBudget created for the Ingester pod allowed Kubernetes to evict two of the three Ingester pods. With this update, the Operator now creates a PodDisruptionBudget that allows eviction of only a single Ingester pod. ( LOG-6693 ) Before this update, the Operator did not support templating of syslog facility and severity level , which was consistent with the rest of the API. Instead, the Operator relied upon the 5.x API, which is no longer supported. With this update, the Operator supports templating by adding the required validation to the API and rejecting resources that do not match the required format. ( LOG-6788 ) Before this update, empty OTEL tuning configuration caused a validation error. With this update, the validation rules allow empty OTEL tuning configurations. ( LOG-6532 ) 2.2.1.2. CVEs CVE-2020-11023 CVE-2024-9287 CVE-2024-12797 2.2.2. Logging 6.1.2 Release Notes This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.2 . 2.2.2.1. New Features and Enhancements This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. ( LOG-6579 ) 2.2.2.2. Bug Fixes Before this update, the collector alerting rules contained summary and message fields. With this update, the collector alerting rules contain summary and description fields. ( LOG-6126 ) Before this update, the collector metrics dashboard could get removed after an Operator upgrade due to a race condition during the transition from the old to the new pod deployment. 
With this update, labels are added to the dashboard ConfigMap to identify the upgraded deployment as the current owner so that it will not be removed. ( LOG-6280 ) Before this update, when you included infrastructure namespaces in application inputs, their log_type would be set to application . With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure . ( LOG-6373 ) Before this update, the Cluster Logging Operator used a cached client to fetch the SecurityContextConstraint cluster resource, which could result in an error when the cache is invalid. With this update, the Operator now always retrieves data from the API server instead of using a cache. ( LOG-6418 ) Before this update, the logging must-gather did not collect resources such as UIPlugin , ClusterLogForwarder , LogFileMetricExporter , and LokiStack . With this update, the must-gather now collects all of these resources and places them in their respective namespace directory instead of the cluster-logging directory. ( LOG-6422 ) Before this update, the Vector startup script attempted to delete buffer lock files during startup. With this update, the Vector startup script no longer attempts to delete buffer lock files during startup. ( LOG-6506 ) Before this update, the API documentation incorrectly claimed that lokiStack outputs would default the target namespace, which could prevent the collector from writing to that output. With this update, this claim has been removed from the API documentation and the Cluster Logging Operator now validates that a target namespace is present. ( LOG-6573 ) Before this update, the Cluster Logging Operator could deploy the collector with output configurations that were not referenced by any inputs. With this update, a validation check for the ClusterLogForwarder resource prevents the Operator from deploying the collector. ( LOG-6585 ) 2.2.2.3. CVEs CVE-2019-12900 2.2.3. Logging 6.1.1 Release Notes This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.1 . 2.2.3.1. New Features and Enhancements With this update, the Loki Operator supports configuring the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in OpenShift Container Platform 4.17 or later. ( LOG-6420 ) 2.2.3.2. Bug Fixes Before this update, the collector was discarding longer audit log messages with the following error message: Internal log [Found line that exceeds max_line_bytes; discarding.] . With this update, the discarding of longer audit messages is avoided by increasing the audit configuration thresholds: The maximum line size, max_line_bytes , is 3145728 bytes. The maximum number of bytes read during a read cycle, max_read_bytes , is 262144 bytes. ( LOG-6379 ) Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. ( LOG-6383 ) Before this update, pipeline validation might have entered an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. ( LOG-6405 ) Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. 
( LOG-6407 ) Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. ( LOG-6449 ) Before this update, the ValidLokistackOTLPOutputs condition appeared in the status of the ClusterLogForwarder custom resource even when the output type is not LokiStack . With this update, the ValidLokistackOTLPOutputs condition is removed, and the validation messages for the existing output conditions are corrected. ( LOG-6469 ) Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. ( LOG-6484 ) Before this update, the must-gather script of the Red Hat OpenShift Logging Operator might have failed to gather the LokiStack data. With this update, the must-gather script is fixed, and the LokiStack data is gathered reliably. ( LOG-6498 ) Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. ( LOG-6533 ) 2.2.3.3. CVEs CVE-2019-12900 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-10963 CVE-2024-50602 2.2.4. Logging 6.1.0 Release Notes This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.0 . 2.2.4.1. New Features and Enhancements 2.2.4.1.1. Log Collection This enhancement adds the source iostream to the attributes sent from collected container logs. The value is set to either stdout or stderr based on how the collector received it. ( LOG-5292 ) With this update, the default memory limit for the collector increases from 1024 Mi to 2048 Mi. Users should adjust resource limits based on their cluster's specific needs and specifications. ( LOG-6072 ) With this update, users can now set the syslog output delivery mode of the ClusterLogForwarder CR to either AtLeastOnce or AtMostOnce. ( LOG-6355 ) 2.2.4.1.2. Log Storage With this update, the new 1x.pico LokiStack size supports clusters with fewer workloads and lower log volumes (up to 50GB/day). ( LOG-5939 ) 2.2.4.2. Technology Preview Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, OpenTelemetry logs can now be forwarded using the OTel (OpenTelemetry) data model to a Red Hat Managed LokiStack instance. To enable this feature, add the observability.openshift.io/tech-preview-otlp-output: "enabled" annotation to your ClusterLogForwarder configuration. For additional configuration information, see OTLP Forwarding . With this update, a dataModel field has been added to the lokiStack output specification. Set the dataModel to Otel to configure log forwarding using the OpenTelemetry data format. 
The default is set to Viaq . For information about data mapping see OTLP Specification . 2.2.4.3. Bug Fixes None. 2.2.4.4. CVEs CVE-2024-6119 CVE-2024-6232 2.3. Logging 6.1 The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. 2.3.1. Inputs and outputs Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: application receiver infrastructure audit You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. 2.3.2. Receiver input type The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog . The ReceiverSpec field defines the configuration for a receiver input. 2.3.3. Pipelines and filters Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. 2.3.4. Operator behavior The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource: When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. When set to Unmanaged , the Operator does not take any action, allowing you to manually manage the logging components. 2.3.5. Validation Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. 2.3.6. Quick start OpenShift Logging supports two data models: ViaQ (General Availability) OpenTelemetry (Technology Preview) You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder . ViaQ is the default data model when forwarding logs to LokiStack. Note In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry. 2.3.6.1. Quick start with ViaQ To use the default ViaQ data model, follow these steps: Prerequisites Cluster administrator permissions Procedure Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. Create a LokiStack custom resource (CR) in the openshift-logging namespace: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging Note Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. 
For more information, see Secrets and TLS Configuration. Create a service account for the collector: USD oc create sa collector -n openshift-logging Allow the collector's service account to write data to the LokiStack CR: USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector Note The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. Allow the collector's service account to collect logs: USD oc project openshift-logging USD oc adm policy add-cluster-role-to-user collect-application-logs -z collector USD oc adm policy add-cluster-role-to-user collect-audit-logs -z collector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector Note The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. Create a UIPlugin CR to enable the Log section in the Observe tab: apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki Create a ClusterLogForwarder CR to configure log forwarding: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack Note The dataModel field is optional and left unset ( dataModel: "" ) by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes. Verification Verify that logs are visible in the Log section of the Observe tab in the OpenShift web console. 2.3.6.2. Quick start with OpenTelemetry Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps: Prerequisites Cluster administrator permissions Procedure Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. 
Create a LokiStack custom resource (CR) in the openshift-logging namespace: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging Note Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration". Create a service account for the collector: USD oc create sa collector -n openshift-logging Allow the collector's service account to write data to the LokiStack CR: USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector Note The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. Allow the collector's service account to collect logs: USD oc project openshift-logging USD oc adm policy add-cluster-role-to-user collect-application-logs -z collector USD oc adm policy add-cluster-role-to-user collect-audit-logs -z collector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector Note The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. Create a UIPlugin CR to enable the Log section in the Observe tab: apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki Create a ClusterLogForwarder CR to configure log forwarding: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: "enabled" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp 1 Use the annotation to enable the Otel data model, which is a Technology Preview feature. 2 Define the output type as lokiStack . 3 Specifies the OpenTelemetry data model. Note You cannot use lokiStack.labelKeys when dataModel is Otel . To achieve similar functionality when dataModel is Otel , refer to "Configuring LokiStack for OTLP data ingestion". Verification Verify that OTLP is functioning correctly by going to Observe OpenShift Logging LokiStack Writes in the OpenShift web console, and checking Distributor - Structured Metadata . 2.4. Configuring log forwarding The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. 
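As a minimal sketch of how those pieces fit together, the following ClusterLogForwarder forwards application logs to a generic HTTP receiver. The resource name, the service account, and the endpoint URL are placeholders rather than values taken from this document, and the http output fields shown are an assumption based on the output types listed later in this section; the full set of key functions is summarized next.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: example-forwarder            # hypothetical name
  namespace: openshift-logging
spec:
  serviceAccount:
    name: collector                  # must be bound to the collect-* cluster roles described later in this section
  outputs:
  - name: external-http              # destination definition
    type: http
    http:
      url: https://log-receiver.example.com   # placeholder endpoint
  pipelines:
  - name: app-to-http                # connects the built-in application input to the output
    inputRefs:
    - application
    outputRefs:
    - external-http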
Key Functions of the ClusterLogForwarder Selects log messages using inputs Forwards logs to external destinations using outputs Filters, transforms, and drops log messages using filters Defines log forwarding pipelines connecting inputs, filters and outputs 2.4.1. Setting up log collection This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder . This was not required in releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. The Red Hat OpenShift Logging Operator provides collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. Setup log collection by binding the required cluster roles to your service account. 2.4.1.1. Legacy service accounts To use the existing legacy service account logcollector , create the following ClusterRoleBinding : USD oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector Additionally, create the following ClusterRoleBinding if collecting audit logs: USD oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector 2.4.1.2. Creating service accounts Prerequisites The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. You have administrator permissions. Procedure Create a service account for the collector. If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. Bind the appropriate cluster roles to the service account: Example binding command USD oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name> 2.4.1.2.1. Cluster Role Binding for your Service Account The role_binding.yaml file binds the ClusterLogging operator's ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8 1 roleRef: References the ClusterRole to which the binding applies. 2 apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. 3 kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. 4 name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. 5 subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. 6 kind: Specifies that the subject is a ServiceAccount. 7 Name: The name of the ServiceAccount being granted the permissions. 8 namespace: Indicates the namespace where the ServiceAccount is located. 2.4.1.2.2. Writing application logs The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. 
apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Specifies the permissions granted by this ClusterRole. 2 apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. 3 loki.grafana.com: The API group for managing Loki-related resources. 4 resources: The resource type that the ClusterRole grants permission to interact with. 5 application: Refers to the application resources within the Loki logging system. 6 resourceNames: Specifies the names of resources that this role can manage. 7 logs: Refers to the log resources that can be created. 8 verbs: The actions allowed on the resources. 9 create: Grants permission to create new logs in the Loki system. 2.4.1.2.3. Writing audit logs The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Defines the permissions granted by this ClusterRole. 2 apiGroups: Specifies the API group loki.grafana.com. 3 loki.grafana.com: The API group responsible for Loki logging resources. 4 resources: Refers to the resource type this role manages, in this case, audit. 5 audit: Specifies that the role manages audit logs within Loki. 6 resourceNames: Defines the specific resources that the role can access. 7 logs: Refers to the logs that can be managed under this role. 8 verbs: The actions allowed on the resources. 9 create: Grants permission to create new audit logs. 2.4.1.2.4. Writing infrastructure logs The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. Sample YAML apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Specifies the API group for Loki-related resources. 3 loki.grafana.com: The API group managing the Loki logging system. 4 resources: Defines the resource type that this role can interact with. 5 infrastructure: Refers to infrastructure-related resources that this role manages. 6 resourceNames: Specifies the names of resources this role can manage. 7 logs: Refers to the log resources related to infrastructure. 8 verbs: The actions permitted by this role. 9 create: Grants permission to create infrastructure logs in the Loki system. 2.4.1.2.5. ClusterLogForwarder editor role The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13 1 rules: Specifies the permissions this ClusterRole grants. 
2 apiGroups: Refers to the OpenShift-specific API group. 3 observability.openshift.io: The API group for managing observability resources, like logging. 4 resources: Specifies the resources this role can manage. 5 clusterlogforwarders: Refers to the log forwarding resources in OpenShift. 6 verbs: Specifies the actions allowed on the ClusterLogForwarders. 7 create: Grants permission to create new ClusterLogForwarders. 8 delete: Grants permission to delete existing ClusterLogForwarders. 9 get: Grants permission to retrieve information about specific ClusterLogForwarders. 10 list: Allows listing all ClusterLogForwarders. 11 patch: Grants permission to partially modify ClusterLogForwarders. 12 update: Grants permission to update existing ClusterLogForwarders. 13 watch: Grants permission to monitor changes to ClusterLogForwarders. 2.4.2. Modifying log level in collector To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace , debug , info , warn , error , or off . Example log level annotation apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug # ... 2.4.3. Managing the Operator The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: Managed (default) The operator will drive the logging resources to match the desired state in the CLF spec. Unmanaged The operator will not take any action related to the logging components. This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged . 2.4.4. Structure of the ClusterLogForwarder The CLF has a spec section that contains the following key components: Inputs Select log messages to be forwarded. Built-in input types application , infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. Outputs Define destinations to forward logs to. Each output has a unique name and type-specific configuration. Pipelines Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. Filters Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. 2.4.4.1. Inputs Inputs are configured in an array under spec.inputs . There are three built-in input types: application Selects logs from all application containers, excluding those in infrastructure namespaces. infrastructure Selects logs from nodes and from infrastructure components running in the default , kube , and openshift namespaces, and in namespaces containing the kube- or openshift- prefix. audit Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. 2.4.4.2. Outputs Outputs are configured in an array under spec.outputs . Each output must have a unique name and a type. Supported types are: azureMonitor Forwards logs to Azure Monitor. cloudwatch Forwards logs to AWS CloudWatch. googleCloudLogging Forwards logs to Google Cloud Logging. http Forwards logs to a generic HTTP endpoint. kafka Forwards logs to a Kafka broker. loki Forwards logs to a Loki logging backend.
lokistack Forwards logs to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy otlp Forwards logs using the OpenTelemetry Protocol. splunk Forwards logs to Splunk. syslog Forwards logs to an external syslog server. Each output type has its own configuration fields. 2.4.5. Configuring OTLP output Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding. Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Procedure Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: "enabled" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp 1 Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature. 2 This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent. Note The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework. 2.4.5.1. Pipelines Pipelines are configured in an array under spec.pipelines . Each pipeline must have a unique name and consists of: inputRefs Names of inputs whose logs should be forwarded to this pipeline. outputRefs Names of outputs to send logs to. filterRefs (optional) Names of filters to apply. The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. 2.4.5.2. Filters Filters are configured in an array under spec.filters . They can match incoming log messages based on the value of structured fields and modify or drop them. Administrators can configure the following types of filters: 2.4.5.3. Enabling multi-line exception detection Enables multi-line error detection of container logs. Warning Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. 
Example java exception java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10) To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters . Example ClusterLogForwarder CR apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name> 2.4.5.3.1. Details When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message's content is replaced with the concatenated content of all the message fields in the sequence. The collector supports the following languages: Java JS Ruby Python Golang PHP Dart 2.4.5.4. Configuring content filters to drop unwanted log records When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. Procedure Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels."foo-bar/baz" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: "my-pod" 6 pipelines: - name: <pipeline_name> 7 filterRefs: ["<filter_name>"] # ... 1 Specifies the type of filter. The drop filter drops log records that match the filter configuration. 2 Specifies configuration options for applying the drop filter. 3 Specifies the configuration for tests that are used to evaluate whether a log record is dropped. If all the conditions specified for a test are true, the test passes and the log record is dropped. When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. 4 Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. 5 Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 6 Specifies a regular expression. If log records do not match this regular expression, they are dropped. 
You can set either the matches or notMatches condition for a single field path, but not both. 7 Specifies the pipeline that the drop filter is applied to. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml Additional examples The following additional example shows how you can configure the drop filter to only keep higher priority log records: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: "(?i)critical|error" - field: .level matches: "info|warning" # ... In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: "^open" - test: - field: .log_type matches: "application" - field: .kubernetes.pod_name notMatches: "my-pod" # ... 2.4.5.5. Overview of API audit filter OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: None : The event is dropped. Metadata : Audit metadata is included, request and response bodies are removed. Request : Audit metadata and the request body are included, the response body is removed. RequestResponse : All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy , while providing the following additional functions: Wildcards Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication . Resource \*/status matches Pod/status or Deployment/status . Default Rules Events that do not match any rule in the policy are filtered as follows: Read-only system events such as get , list , and watch are dropped. Service account write events that occur within the same namespace as the service account are dropped. All other events are forwarded, subject to any configured rate limits. To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. Omit Response Codes A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429] . If the value is an empty list, [] , then no status codes are omitted. 
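As a minimal sketch of these controls, the following kubeAPIAudit filter omits events for 404 and 429 responses and ends with a level-only rule to override the default rules described above. The filter and pipeline names are hypothetical, the output reference is a placeholder, and the omitResponseCodes spelling is assumed to follow the camel case used by the other filter fields; a fuller policy appears in the example later in this section.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  filters:
  - name: my-audit-policy                # hypothetical filter name
    type: kubeAPIAudit
    kubeAPIAudit:
      omitResponseCodes: [404, 429]      # assumed field name; events with these HTTP response codes are not created
      rules:
      - level: Request
        resources:
        - group: ""                      # core API group
          resources: ["configmaps"]
      - level: Metadata                  # level-only catch-all rule; overrides the default rules
  pipelines:
  - name: audit-pipeline
    inputRefs: [audit]
    filterRefs: [my-audit-policy]
    outputRefs: [<output_name>]          # placeholder for an output defined elsewhere in the CR
# ...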
The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. Note You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. Example audit policy apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - "RequestReceived" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: "" resources: ["pods"] # Log "pods/log", "pods/status" at Metadata level - level: Metadata resources: - group: "" resources: ["pods/log", "pods/status"] # Don't log requests to a configmap called "controller-leader" - level: None resources: - group: "" resources: ["configmaps"] resourceNames: ["controller-leader"] # Don't log watch requests by the "system:kube-proxy" on endpoints or services - level: None users: ["system:kube-proxy"] verbs: ["watch"] resources: - group: "" # core API group resources: ["endpoints", "services"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: ["system:authenticated"] nonResourceURLs: - "/api*" # Wildcard matching. - "/version" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: "" # core API group resources: ["configmaps"] # This rule only applies to resources in the "kube-system" namespace. # The empty string "" can be used to select non-namespaced resources. namespaces: ["kube-system"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: "" # core API group resources: ["secrets", "configmaps"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: "" # core API group - group: "extensions" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata 1 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 2 The name of your audit policy. 2.4.5.6. Filtering application logs at input by including the label expressions or a matching label key and values You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. Procedure Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. 
The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: ["prod", "qa"] 3 - key: zone operator: NotIn values: ["east", "west"] matchLabels: 4 app: one name: app1 type: application # ... 1 Specifies the label key to match. 2 Specifies the operator. Valid values include: In , NotIn , Exists , and DoesNotExist . 3 Specifies an array of string values. If the operator value is either Exists or DoesNotExist , the value array must be empty. 4 Specifies an exact key or value mapping. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.4.5.7. Configuring content filters to prune log records When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. Procedure Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: Important If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: ["<filter_name>"] # ... 1 Specify the type of filter. The prune filter prunes log records by configured fields. 2 Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . 3 Optional: Any fields that are specified in this array are removed from the log record. 4 Optional: Any fields that are not specified in this array are removed from the log record. 5 Specify the pipeline that the prune filter is applied to. Note The filters exempts the log_type , .log_source , and .message fields. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.4.6. Filtering the audit and infrastructure log inputs by source You can define the list of audit and infrastructure sources to collect the logs by using the input selector. Procedure Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... 
spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn # ... 1 Specifies the list of infrastructure sources to collect. The valid sources include: node : Journal log from the node container : Logs from the workloads deployed in the namespaces 2 Specifies the list of audit sources to collect. The valid sources include: kubeAPI : Logs from the Kubernetes API servers openshiftAPI : Logs from the OpenShift API servers auditd : Logs from a node auditd service ovn : Logs from an open virtual network service Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.4.7. Filtering application logs at input by including or excluding the namespace or container name You can include or exclude the application logs based on the namespace and container name by using the input selector. Procedure Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: "my-project" 1 container: "my-container" 2 excludes: - container: "other-container*" 3 namespace: "other-namespace" 4 type: application # ... 1 Specifies that the logs are only collected from these namespaces. 2 Specifies that the logs are only collected from these containers. 3 Specifies the pattern of namespaces to ignore when collecting the logs. 4 Specifies the set of containers to ignore when collecting the logs. Note The excludes field takes precedence over the includes field. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.5. Storing logs with LokiStack You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. For long-term storage or queries over a long time period, users should look to log stores external to their cluster. 2.5.1. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is number of instances and <size> specifies performance capabilities. The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. Important It is not possible to change the number 1x for the deployment size. Table 2.2. 
Loki sizing 1x.demo 1x.pico [6.1+ only] 1x.extra-small 1x.small 1x.medium Data transfer Demo use only 50GB/day 100GB/day 500GB/day 2TB/day Queries per second (QPS) Demo use only 1-25 QPS at 200ms 1-25 QPS at 200ms 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 2 2 2 Total CPU requests None 7 vCPUs 14 vCPUs 34 vCPUs 54 vCPUs Total CPU requests if using the ruler None 8 vCPUs 16 vCPUs 42 vCPUs 70 vCPUs Total memory requests None 17Gi 31Gi 67Gi 139Gi Total memory requests if using the ruler None 18Gi 35Gi 83Gi 171Gi Total disk requests 40Gi 590Gi 430Gi 430Gi 590Gi Total disk requests if using the ruler 80Gi 910Gi 750Gi 750Gi 910Gi 2.5.2. Prerequisites You have installed the Loki Operator by using the CLI or web console. You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder . The serviceAccount is assigned collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles. 2.5.3. Core Setup and Configuration Role-based access controls, basic monitoring, and pod placement to deploy Loki. 2.5.4. Authorizing LokiStack rules RBAC permissions Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. The following cluster roles for alerting and recording rules are available for LokiStack: Rule name Description alertingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources within the loki.grafana.com/v1 API group. alertingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. alertingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete AlertingRule resources. alertingrules.loki.grafana.com-v1-view Users with this role can read AlertingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. recordingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources within the loki.grafana.com/v1 API group. recordingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. recordingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete RecordingRule resources. recordingrules.loki.grafana.com-v1-view Users with this role can read RecordingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. 2.5.4.1. Examples To apply cluster roles for a user, you must bind an existing cluster role to a specific username. 
Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. The following example command gives the specified user create, read, update, and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: Example cluster role binding command for alerting rule CRUD permissions in a specific namespace USD oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username> The following command gives the specified user administrator permissions for alerting rules in all namespaces: Example cluster role binding command for administrator permissions USD oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>
2.5.5. Creating a log-based alerting rule with Loki The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid LogQL expr , it is an invalid alerting rule. If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. If none of the above applies, an alerting rule is considered valid.
Table 2.3. AlertingRule definitions
Tenant type | Valid namespaces for AlertingRule CRs
application | <your_application_namespace>
audit | openshift-logging
infrastructure | openshift-* , kube-* , default
Procedure Create an AlertingRule custom resource (CR): Example infrastructure AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "infrastructure" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) / sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 AlertingRule CRs for infrastructure tenants are only supported in the openshift-* , kube-* , or default namespaces. 4 The value for kubernetes_namespace_name: must match the value for metadata.namespace . 5 The value of this mandatory field must be critical , warning , or info . 6 This field is mandatory. 7 This field is mandatory.
Example application AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "application" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 Value for kubernetes_namespace_name: must match the value for metadata.namespace . 4 The value of this mandatory field must be critical , warning , or info . 5 The value of this mandatory field is a summary of the rule. 6 The value of this mandatory field is a detailed description of the rule. Apply the AlertingRule CR: USD oc apply -f <filename>.yaml 2.5.6. Configuring Loki to tolerate memberlist creation failure In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' Example LokiStack to include podIP apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... hashRing: type: memberlist memberlist: instanceAddrType: podIP # ... 2.5.7. Enabling stream-based retention with Loki You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Note Schema v13 is recommended. Procedure Create a LokiStack CR: Enable stream-based retention globally as shown in the following example: Example global stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 
3 Contains the LogQL query used to define the log stream. Enable stream-based retention on a per-tenant basis as shown in the following example: Example per-tenant stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml Note This is not for managing the retention for stored logs. Global retention periods for stored logs, up to a supported maximum of 30 days, are configured with your object storage.
2.5.8. Loki pod placement You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. Example LokiStack with node selectors apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: "" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: "" gateway: nodeSelector: node-role.kubernetes.io/infra: "" indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" ingester: nodeSelector: node-role.kubernetes.io/infra: "" querier: nodeSelector: node-role.kubernetes.io/infra: "" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" ruler: nodeSelector: node-role.kubernetes.io/infra: "" # ... 1 Specifies the component pod type that applies to the node selector. 2 Specifies the pods that are moved to nodes containing the defined label. Example LokiStack CR with node selectors and tolerations apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ...
template: compactor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved # ... To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: USD oc explain lokistack.spec.template Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec. ... For more detailed information, you can add a specific field: USD oc explain lokistack.spec.template.compactor Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it. ... 2.5.9. Enhanced Reliability and Performance Configurations to ensure Loki's reliability and efficiency in production. 2.5.10. Enabling authentication to cloud-based log stores using short-lived tokens Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. Procedure Use one of the following options to enable authentication: If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. If you use the OpenShift CLI ( oc ) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. 
This authentication strategy is only supported for the storage providers indicated. Example Azure sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region> Example AWS sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN> 2.5.11. Configuring Loki to tolerate node failure The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor , distributor , gateway , indexGateway , ingester , querier , queryFrontend , and ruler components. You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: Example user settings for the ingester component apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: ingester: podAntiAffinity: # ... requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname # ... 1 The stanza to define a required rule. 2 The key-value pair (label) that must be matched to apply the rule. 2.5.12. LokiStack behavior during cluster restarts When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. 2.5.13. Advanced Deployment and Scalability Specialized configurations for high availability, scalability, and error handling. 2.5.14. Zone aware data replication The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small , 1x.small , or 1x.medium , the replication.factor field is automatically set to 2. 
To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. Example LokiStack CR with zone replication enabled apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4 1 Deprecated field, values entered are overwritten by replication.factor . 2 This value is automatically set when deployment size is selected at setup. 3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. 4 Defines zones in the form of a topology key that corresponds to a node label. 2.5.15. Recovering Loki pods from failed zones In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. Loki pods are part of a StatefulSet , and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. Warning The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. Prerequisites Verify your LokiStack CR has a replication factor greater than 1. Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. Procedure List the pods in Pending status by running the following command: USD oc get pods --field-selector status.phase==Pending -n openshift-logging Example oc get pods output NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m 1 These pods are in Pending status because their corresponding PVCs are in the failed zone. 
List the PVCs in Pending status by running the following command: USD oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r Example oc get pvc output storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1 Delete the PVC(s) for a pod by running the following command: USD oc delete pvc <pvc_name> -n openshift-logging Delete the pod(s) by running the following command: USD oc delete pod <pod_name> -n openshift-logging Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. 2.5.15.1. Troubleshooting PVC in a terminating state The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection . Removing the finalizers should allow the PVCs to delete successfully. Remove the finalizer for each PVC by running the command below, then retry deletion. USD oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging 2.5.16. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true The error is also visible on the receiving end. 
For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention.
2.6. OTLP data ingestion in Loki With Logging 6.1, you can ingest logs through an API endpoint that uses the OpenTelemetry Protocol (OTLP). As OTLP is a standardized format not specifically designed for Loki, OTLP requires an additional Loki configuration to map the OpenTelemetry data format to the Loki data model. OTLP lacks concepts such as stream labels or structured metadata . Instead, OTLP provides metadata about log entries as attributes , grouped into the following three categories: Resource Scope Log You can set metadata for multiple entries simultaneously or individually as needed.
2.6.1. Configuring LokiStack for OTLP data ingestion Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps: Prerequisites Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion. Procedure Set the schema version: When creating a new LokiStack CR, set version: v13 in the storage schema configuration. Note For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation). Configure the storage schema as follows: Example storage schema configuration # ... spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25 Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata.
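When updating an existing deployment, the earlier schema entries stay in place and the v13 entry is appended with a future effectiveDate. The following sketch assumes a pre-existing v12 entry; the versions and dates shown are illustrative only, not values taken from this procedure.
Example of appending a v13 schema entry to an existing configuration (sketch)
# ...
spec:
  storage:
    schemas:
    - version: v12                  # assumed pre-existing schema entry
      effectiveDate: "2024-03-10"   # assumed original date
    - version: v13                  # new entry; takes effect once the date passes
      effectiveDate: "2024-10-25"
# ...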
2.6.2. Attribute mapping When you set the Loki Operator to the openshift-logging mode, the Loki Operator automatically applies a default set of attribute mappings. These mappings align specific OTLP attributes with stream labels and structured metadata of Loki. For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases: Using a custom collector: If your setup includes a custom collector that generates additional attributes, consider customizing the mapping to ensure these attributes are retained in Loki. Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process. Important Attributes that are not mapped to either stream labels or structured metadata are not stored in Loki.
2.6.2.1. Custom attribute mapping for OpenShift When using the Loki Operator in openshift-logging mode, attribute mapping follows OpenShift default values, but you can configure custom mappings to adjust the default values. In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration. Note A major difference between the Loki Operator and Loki lies in inheritance handling. Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode. Within LokiStack , attribute mapping configuration is managed through the limits setting. See the following example LokiStack configuration: # ... spec: limits: global: otlp: {} 1 tenants: application: otlp: {} 2 1 Defines global OTLP attribute configuration. 2 OTLP attribute configuration for the application tenant within openshift-logging mode. Note Both global and per-tenant OTLP configurations can map attributes to stream labels or structured metadata. At least one stream label is required to save a log entry to Loki storage, so ensure this configuration meets that requirement. Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects: spec: limits: global: otlp: streamLabels: resourceAttributes: - name: "k8s.namespace.name" - name: "k8s.pod.name" - name: "k8s.container.name" Structured metadata, in contrast, can be generated from resource, scope, or log-level attributes: # ... spec: limits: global: otlp: streamLabels: # ... structuredMetadata: resourceAttributes: - name: "process.command_line" - name: "k8s\\.pod\\.labels\\..+" regex: true scopeAttributes: - name: "service.name" logAttributes: - name: "http.route" Tip Use regular expressions by setting regex: true for attribute names when mapping similar attributes in Loki. Important Avoid using regular expressions for stream labels, as this can increase data volume.
2.6.2.2. Customizing OpenShift defaults In openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled recommended , might be disabled if performance is impacted. When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or structured metadata, use custom configuration. Custom configurations can merge with default configurations, as shown in the sketch that follows.
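For example, a per-tenant configuration might add one extra stream label and one extra piece of structured metadata for the application tenant on top of the OpenShift defaults. The attribute names below are assumptions chosen for illustration; substitute the attributes that your own collector actually emits.
Example merging custom application-tenant mappings with the defaults (sketch)
# ...
spec:
  limits:
    tenants:
      application:
        otlp:
          streamLabels:
            resourceAttributes:
            - name: "app.team.name"            # hypothetical custom resource attribute used as a stream label
          structuredMetadata:
            logAttributes:
            - name: "http.request.method"      # hypothetical log attribute stored as structured metadata
# ...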
2.6.2.3. Removing recommended attributes To reduce default attributes in openshift-logging mode, disable recommended attributes: # ... spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1 1 Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes . Note This option is beneficial if the default attributes cause performance or storage issues. This setting might negatively impact query performance, as it removes default stream labels. You should pair this option with a custom attribute configuration to retain attributes essential for queries.
2.6.3. Additional resources Loki labels Structured metadata OpenTelemetry attribute
2.7. OpenTelemetry data model This document outlines the protocol and semantic conventions for Red Hat OpenShift Logging's OpenTelemetry support with Logging 6.1. Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
2.7.1. Forwarding and ingestion protocol Red Hat OpenShift Logging collects and forwards logs to OpenTelemetry endpoints using the OTLP Specification . OTLP encodes, transports, and delivers telemetry data. You can also deploy Loki storage, which provides an OTLP endpoint to ingest log streams. This document defines the semantic conventions for the logs collected from various OpenShift cluster sources.
2.7.2. Semantic conventions The log collector in this solution gathers the following log streams: Container logs Cluster node journal logs Cluster node auditd logs Kubernetes and OpenShift API server logs OpenShift Virtual Network (OVN) logs You can forward these streams according to the semantic conventions defined by OpenTelemetry semantic attributes. The semantic conventions in OpenTelemetry define a resource as an immutable representation of the entity producing telemetry, identified by attributes. For example, a process running in a container includes attributes such as container_name , cluster_id , pod_name , namespace , and possibly deployment or app_name . These attributes are grouped under the resource object, which helps reduce repetition and optimizes log transmission as telemetry data. In addition to resource attributes, logs might also contain scope attributes specific to instrumentation libraries and log attributes specific to each log entry. These attributes provide greater detail about each log entry and enhance filtering capabilities when querying logs in storage. The following sections define the attributes that are generally forwarded.
2.7.2.1. Log entry structure All log streams include the following log data fields: The Applicable Sources column indicates which log sources each field applies to: all : This field is present in all logs. container : This field is present in Kubernetes container logs, both application and infrastructure. audit : This field is present in Kubernetes, OpenShift API, and OVN logs. auditd : This field is present in node auditd logs. journal : This field is present in node journal logs.
Name Applicable Sources Comment body all observedTimeUnixNano all timeUnixNano all severityText container, journal attributes all (Optional) Present when forwarding stream specific attributes 2.7.2.2. Attributes Log entries include a set of resource, scope, and log attributes based on their source, as described in the following table. The Location column specifies the type of attribute: resource : Indicates a resource attribute scope : Indicates a scope attribute log : Indicates a log attribute The Storage column indicates whether the attribute is stored in a LokiStack using the default openshift-logging mode and specifies where the attribute is stored: stream label : Enables efficient filtering and querying based on specific labels. Can be labeled as required if the Loki Operator enforces this attribute in the configuration. structured metadata : Allows for detailed filtering and storage of key-value pairs. Enables users to use direct labels for streamlined queries without requiring JSON parsing. With OTLP, users can filter queries directly by labels rather than using JSON parsing, improving the speed and efficiency of queries. Name Location Applicable Sources Storage (LokiStack) Comment log_source resource all required stream label (DEPRECATED) Compatibility attribute, contains same information as openshift.log.source log_type resource all required stream label (DEPRECATED) Compatibility attribute, contains same information as openshift.log.type kubernetes.container_name resource container stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.container.name kubernetes.host resource all stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.node.name kubernetes.namespace_name resource container required stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.namespace.name kubernetes.pod_name resource container stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.pod.name openshift.cluster_id resource all (DEPRECATED) Compatibility attribute, contains same information as openshift.cluster.uid level log container, journal (DEPRECATED) Compatibility attribute, contains same information as severityText openshift.cluster.uid resource all required stream label openshift.log.source resource all required stream label openshift.log.type resource all required stream label openshift.labels.* resource all structured metadata k8s.node.name resource all stream label k8s.namespace.name resource container required stream label k8s.container.name resource container stream label k8s.pod.labels.* resource container structured metadata k8s.pod.name resource container stream label k8s.pod.uid resource container structured metadata k8s.cronjob.name resource container stream label Conditionally forwarded based on creator of pod k8s.daemonset.name resource container stream label Conditionally forwarded based on creator of pod k8s.deployment.name resource container stream label Conditionally forwarded based on creator of pod k8s.job.name resource container stream label Conditionally forwarded based on creator of pod k8s.replicaset.name resource container structured metadata Conditionally forwarded based on creator of pod k8s.statefulset.name resource container stream label Conditionally forwarded based on creator of pod log.iostream log container structured metadata k8s.audit.event.level log audit structured metadata k8s.audit.event.stage log audit structured metadata k8s.audit.event.user_agent 
log audit structured metadata k8s.audit.event.request.uri log audit structured metadata k8s.audit.event.response.code log audit structured metadata k8s.audit.event.annotation.* log audit structured metadata k8s.audit.event.object_ref.resource log audit structured metadata k8s.audit.event.object_ref.name log audit structured metadata k8s.audit.event.object_ref.namespace log audit structured metadata k8s.audit.event.object_ref.api_group log audit structured metadata k8s.audit.event.object_ref.api_version log audit structured metadata k8s.user.username log audit structured metadata k8s.user.groups log audit structured metadata process.executable.name resource journal structured metadata process.executable.path resource journal structured metadata process.command_line resource journal structured metadata process.pid resource journal structured metadata service.name resource journal stream label systemd.t.* log journal structured metadata systemd.u.* log journal structured metadata Note Attributes marked as Compatibility attribute support minimal backward compatibility with the ViaQ data model. These attributes are deprecated and function as a compatibility layer to ensure continued UI functionality. These attributes will remain supported until the Logging UI fully supports the OpenTelemetry counterparts in future releases. Loki changes the attribute names when persisting them to storage. The names will be lowercased, and all characters in the set: ( . , / , - ) will be replaced by underscores ( _ ). For example, k8s.namespace.name will become k8s_namespace_name . 2.7.3. Additional resources Semantic Conventions Logs Data Model General Logs Attributes 2.8. Visualization for logging Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator , which requires Operator installation. Important Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.
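For reference, the Logging UI Plugin described above is declared with a UIPlugin custom resource that points the Cluster Observability Operator at your LokiStack instance. The following sketch assumes the LokiStack from the storage examples in this chapter is named logging-loki; consult the Cluster Observability Operator documentation for the authoritative resource definition.
Example UIPlugin CR for the Logging UI Plugin (sketch)
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    lokiStack:
      name: logging-loki    # assumed LokiStack name from the earlier storage examples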
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging", "oc create sa collector -n openshift-logging", "oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector", "oc project openshift-logging", "oc adm policy add-cluster-role-to-user collect-application-logs -z collector", "oc adm policy add-cluster-role-to-user collect-audit-logs -z collector", "oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector", "apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging", "oc create sa collector -n openshift-logging", "oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector", "oc project openshift-logging", "oc adm policy add-cluster-role-to-user collect-application-logs -z collector", "oc adm policy add-cluster-role-to-user collect-audit-logs -z collector", "oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector", "apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp", "oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector", "oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector", "oc adm policy add-cluster-role-to-user collect-audit-logs 
system:serviceaccount:openshift-logging:logcollector", "oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp", "java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)", "apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]", "oc apply -f <filename>.yaml", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important 
type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. 
- level: Metadata", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application", "oc apply -f <filename>.yaml", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]", "oc apply -f <filename>.yaml", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn", "oc apply -f <filename>.yaml", "apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application", "oc apply -f <filename>.yaml", "oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>", "oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>", "apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7", "apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6", "oc apply -f <filename>.yaml", "oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: 
'{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging", "oc apply -f <filename>.yaml", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved", "oc explain lokistack.spec.template", "KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the 
resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.", "oc explain lokistack.spec.template.compactor", "KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4", "oc get pods --field-selector status.phase==Pending -n openshift-logging", "NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m", "oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r", "storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1", "oc delete pvc <pvc_name> -n openshift-logging", "oc delete pod <pod_name> -n openshift-logging", "oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging", "\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}", "429 Too Many Requests Ingestion rate limit exceeded", "2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. 
error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true", "level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2", "spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25", "spec: limits: global: otlp: {} 1 tenants: application: otlp: {} 2", "spec: limits: global: otlp: streamLabels: resourceAttributes: - name: \"k8s.namespace.name\" - name: \"k8s.pod.name\" - name: \"k8s.container.name\"", "spec: limits: global: otlp: streamLabels: structuredMetadata: resourceAttributes: - name: \"process.command_line\" - name: \"k8s\\\\.pod\\\\.labels\\\\..+\" regex: true scopeAttributes: - name: \"service.name\" logAttributes: - name: \"http.route\"", "spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1" ]
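The retention, zone, and ingestion-limit examples above are all plain custom resource edits, so one hedged way to apply the ingestion-limit change without editing the full manifest is a merge patch; the LokiStack name logging-loki and namespace openshift-logging come from the examples, while the limit values below are only illustrative:

oc patch lokistack logging-loki -n openshift-logging --type merge \
  -p '{"spec":{"limits":{"global":{"ingestion":{"ingestionBurstSize":16,"ingestionRate":8}}}}}'

# Confirm that the change landed on the custom resource
oc get lokistack logging-loki -n openshift-logging -o jsonpath='{.spec.limits.global.ingestion}'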
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/logging/logging-6-1
Chapter 54. EntityUserOperatorSpec schema reference
Chapter 54. EntityUserOperatorSpec schema reference Used in: EntityOperatorSpec Full list of EntityUserOperatorSpec schema properties Configures the User Operator. 54.1. Logging The User Operator has a configurable logger: rootLogger.level The User Operator uses the Apache log4j2 logger implementation. Use the logging property in the entityOperator.userOperator field of the Kafka resource to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the rootLogger.level . You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 logging: type: inline loggers: rootLogger.level: INFO logger.uop.name: io.strimzi.operator.user 1 logger.uop.level: DEBUG 2 logger.abstractcache.name: io.strimzi.operator.user.operator.cache.AbstractCache 3 logger.abstractcache.level: TRACE 4 logger.jetty.level: DEBUG 5 # ... 1 Creates a logger for the user package. 2 Sets the logging level for the user package. 3 Creates a logger for the AbstractCache class. 4 Sets the logging level for the AbstractCache class. 5 Changes the logging level for the default jetty logger. The jetty logger is part of the logging configuration provided with Streams for Apache Kafka. By default, it is set to INFO . Note When investigating an issue with the operator, it's usually sufficient to change the rootLogger to DEBUG to get more detailed logs. However, keep in mind that setting the log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: user-operator-log4j2.properties # ... Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 54.2. EntityUserOperatorSpec schema properties Property Property type Description watchedNamespace string The namespace the User Operator should watch. image string The image to use for the User Operator. 
reconciliationIntervalSeconds integer The reconciliationIntervalSeconds property has been deprecated, and should now be configured using .spec.entityOperator.userOperator.reconciliationIntervalMs . Interval between periodic reconciliations in seconds. Ignored if reconciliationIntervalMs is set. reconciliationIntervalMs integer Interval between periodic reconciliations in milliseconds. zookeeperSessionTimeoutSeconds integer The zookeeperSessionTimeoutSeconds property has been deprecated. This property has been deprecated because ZooKeeper is not used anymore by the User Operator. Timeout for the ZooKeeper session. secretPrefix string The prefix that will be added to the KafkaUser name to be used as the Secret name. livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. resources ResourceRequirements CPU and memory resources to reserve. logging InlineLogging , ExternalLogging Logging configuration. jvmOptions JvmOptions JVM Options for pods.
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 logging: type: inline loggers: rootLogger.level: INFO logger.uop.name: io.strimzi.operator.user 1 logger.uop.level: DEBUG 2 logger.abstractcache.name: io.strimzi.operator.user.operator.cache.AbstractCache 3 logger.abstractcache.level: TRACE 4 logger.jetty.level: DEBUG 5 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalMs: 60000 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: user-operator-log4j2.properties #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-entityuseroperatorspec-reference
Chapter 11. VolumeSnapshot [snapshot.storage.k8s.io/v1]
Chapter 11. VolumeSnapshot [snapshot.storage.k8s.io/v1] Description VolumeSnapshot is a user's request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot. Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. status object status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. 11.1.1. .spec Description spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. Type object Required source Property Type Description source object source specifies where a snapshot will be created from. This field is immutable after creation. Required. volumeSnapshotClassName string VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field. 11.1.2. .spec.source Description source specifies where a snapshot will be created from. This field is immutable after creation. Required. Type object Property Type Description persistentVolumeClaimName string persistentVolumeClaimName specifies the name of the PersistentVolumeClaim object representing the volume from which a snapshot should be created. This PVC is assumed to be in the same namespace as the VolumeSnapshot object. This field should be set if the snapshot does not exists, and needs to be created. This field is immutable. volumeSnapshotContentName string volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. 11.1.3. 
.status Description status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. Type object Property Type Description boundVolumeSnapshotContentName string boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. creationTime string creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. error object error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. readyToUse boolean readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer-or-string restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. volumeGroupSnapshotName string VolumeGroupSnapshotName is the name of the VolumeGroupSnapshot of which this VolumeSnapshot is a part of. 11.1.4. .status.error Description error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. 
The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 11.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshots GET : list objects of kind VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots DELETE : delete collection of VolumeSnapshot GET : list objects of kind VolumeSnapshot POST : create a VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name} DELETE : delete a VolumeSnapshot GET : read the specified VolumeSnapshot PATCH : partially update the specified VolumeSnapshot PUT : replace the specified VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name}/status GET : read status of the specified VolumeSnapshot PATCH : partially update status of the specified VolumeSnapshot PUT : replace status of the specified VolumeSnapshot 11.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshots HTTP method GET Description list objects of kind VolumeSnapshot Table 11.1. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotList schema 401 - Unauthorized Empty 11.2.2. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots HTTP method DELETE Description delete collection of VolumeSnapshot Table 11.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshot Table 11.3. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshot Table 11.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.5. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.6. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 202 - Accepted VolumeSnapshot schema 401 - Unauthorized Empty 11.2.3. 
/apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name} Table 11.7. Global path parameters Parameter Type Description name string name of the VolumeSnapshot HTTP method DELETE Description delete a VolumeSnapshot Table 11.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 11.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshot Table 11.10. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshot Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.12. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshot Table 11.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.14. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.15. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 401 - Unauthorized Empty 11.2.4. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name}/status Table 11.16. Global path parameters Parameter Type Description name string name of the VolumeSnapshot HTTP method GET Description read status of the specified VolumeSnapshot Table 11.17. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshot Table 11.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.19. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshot Table 11.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.21. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.22. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 401 - Unauthorized Empty
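As a minimal illustration of the spec fields described above, the following sketch requests a dynamically created snapshot of an existing PVC; the names my-snapshot, my-pvc, csi-snapclass, and the default namespace are placeholders, not values defined by this API reference:

cat <<EOF | oc apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-snapshot
  namespace: default
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: my-pvc
EOF

# Wait for status.readyToUse to report true before consuming the snapshot
oc get volumesnapshot my-snapshot -n default -o jsonpath='{.status.readyToUse}'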
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/storage_apis/volumesnapshot-snapshot-storage-k8s-io-v1
Disconnected installation mirroring
Disconnected installation mirroring OpenShift Container Platform 4.13 Mirroring the installation container images Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/disconnected_installation_mirroring/index
Working with DNS in Identity Management
Working with DNS in Identity Management Red Hat Enterprise Linux 8 Managing the IdM-integrated DNS service Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/working_with_dns_in_identity_management/index
6.14.3. Configuring a Two-Node Cluster
6.14.3. Configuring a Two-Node Cluster If you are configuring a two-node cluster, you can execute the following command to allow a single node to maintain quorum (for example, if the other node fails): Note that this command resets all other properties that you can set with the --setcman option to their default values, as described in Section 6.1.5, "Commands that Overwrite Settings" . When you use the ccs --setcman command to add, remove, or modify the two_node option, you must restart the cluster for this change to take effect. For information on starting and stopping a cluster with the ccs command, see Section 7.2, "Starting and Stopping a Cluster" . A restart sketch follows the command listing below.
[ "ccs -h host --setcman two_node=1 expected_votes=1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-twonodeclust-ccs-ca
Chapter 7. Configuring SCAP contents
Chapter 7. Configuring SCAP contents You can upload SCAP data streams and tailoring files to define compliance policies. 7.1. Loading the default SCAP contents By loading the default SCAP contents on Satellite Server, you ensure that the data streams from the SCAP Security Guide (SSG) are loaded and assigned to all organizations and locations. SSG is provided by the operating system of Satellite Server and installed in /usr/share/xml/scap/ssg/content/ . Note that the available data streams depend on the operating system version on which Satellite runs. You can only use this SCAP content to scan hosts that have the same minor RHEL version as your Satellite Server. For more information, see Section 7.2, "Getting supported SCAP contents for RHEL" . Prerequisites Your user account has a role assigned that has the create_scap_contents permission. Procedure Use the following Hammer command on Satellite Server: 7.2. Getting supported SCAP contents for RHEL You can get the latest SCAP Security Guide (SSG) for Red Hat Enterprise Linux on the Red Hat Customer Portal. You have to get a version of SSG that is designated for the minor RHEL version of your hosts. Procedure Access the SCAP Security Guide in the package browser . From the Version menu, select the latest SSG version for the minor version of RHEL that your hosts are running. For example, for RHEL 8.6, select a version named *.el8_6 . Download the package RPM. Extract the data-stream file ( *-ds.xml ) from the RPM. For example: Upload the data stream to Satellite. For more information, see Section 7.3, "Uploading additional SCAP content" . Additional resources Supported versions of the SCAP Security Guide in RHEL in the Red Hat Knowledgebase SCAP Security Guide profiles supported in RHEL 9 in Red Hat Enterprise Linux 9 Security hardening SCAP Security Guide profiles supported in RHEL 8 in Red Hat Enterprise Linux 8 Security hardening SCAP Security Guide profiles supported in RHEL 7 in the Red Hat Enterprise Linux 7 Security Guide 7.3. Uploading additional SCAP content You can upload additional SCAP content into Satellite Server, either content created by yourself or obtained elsewhere. Note that Red Hat only provides support for SCAP content obtained from Red Hat. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Your user account has a role assigned that has the create_scap_contents permission. You have acquired a SCAP data-stream file. Procedure In the Satellite web UI, navigate to Hosts > Compliance > SCAP contents . Click Upload New SCAP Content . Enter a title in the Title text box, such as My SCAP Content . In Scap File , click Choose file , navigate to the location containing a SCAP data-stream file and click Open . On the Locations tab, select locations. On the Organizations tab, select organizations. Click Submit . If the SCAP content file is loaded successfully, a message similar to Successfully created My SCAP Content is displayed. CLI procedure Place the SCAP data-stream file to a directory on your Satellite Server, such as /usr/share/xml/scap/my_content/ . Run the following Hammer command on Satellite Server: Verification List the available SCAP contents . The list of SCAP contents includes the new title. 7.4. Tailoring XCCDF profiles You can customize existing XCCDF profiles using tailoring files without editing the original SCAP content. A single tailoring file can contain customizations of multiple XCCDF profiles. You can create a tailoring file using the SCAP Workbench tool. 
For more information on using the SCAP Workbench tool, see Customizing SCAP Security Guide for your use case . Then you can assign a tailoring file to a compliance policy to customize an XCCDF profile in the policy. 7.5. Uploading a tailoring file After uploading a tailoring file, you can apply it in a compliance policy to customize an XCCDF profile. Prerequisites Your user account has a role assigned that has the create_tailoring_files permission. Procedure In the Satellite web UI, navigate to Hosts > Compliance > Tailoring Files and click New Tailoring File . Enter a name in the Name text box. Click Choose File , navigate to the location containing the tailoring file and select Open . Click Submit to upload the chosen tailoring file.
[ "hammer scap-content bulk-upload --type default", "rpm2cpio scap-security-guide-0.1.69-3.el8_6.noarch.rpm | cpio -iv --to-stdout ./usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml > ssg-rhel-8.6-ds.xml", "hammer scap-content bulk-upload --type directory --directory /usr/share/xml/scap/my_content/ --location \" My_Location \" --organization \" My_Organization \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_security_compliance/Configuring_SCAP_Contents_security-compliance
40.6. Understanding /dev/oprofile/
40.6. Understanding /dev/oprofile/ The /dev/oprofile/ directory contains the file system for OProfile. Use the cat command to display the values of the virtual files in this file system. For example, the following command displays the type of processor OProfile detected: A directory exists in /dev/oprofile/ for each counter. For example, if there are 2 counters, the directories /dev/oprofile/0/ and /dev/oprofile/1/ exist. Each directory for a counter contains the following files: count - The interval between samples. enabled - If 0, the counter is off and no samples are collected for it; if 1, the counter is on and samples are being collected for it. event - The event to monitor. kernel - If 0, samples are not collected for this counter event when the processor is in kernel-space; if 1, samples are collected even if the processor is in kernel-space. unit_mask - Defines which unit masks are enabled for the counter. user - If 0, samples are not collected for the counter event when the processor is in user-space; if 1, samples are collected even if the processor is in user-space. The values of these files can be retrieved with the cat command. For example:
[ "cat /dev/oprofile/cpu_type", "cat /dev/oprofile/0/count" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/OProfile-Understanding_devoprofile
Chapter 21. level
Chapter 21. level The logging level from various sources, including rsyslog(severitytext property) , a Python logging module, and others. The following values come from syslog.h , and are preceded by their numeric equivalents : 0 = emerg , system is unusable. 1 = alert , action must be taken immediately. 2 = crit , critical conditions. 3 = err , error conditions. 4 = warn , warning conditions. 5 = notice , normal but significant condition. 6 = info , informational. 7 = debug , debug-level messages. The two following values are not part of syslog.h but are widely used: 8 = trace , trace-level messages, which are more verbose than debug messages. 9 = unknown , when the logging system gets a value it doesn't recognize. Map the log levels or priorities of other logging systems to their nearest match in the preceding list. For example, from python logging , you can match CRITICAL with crit , ERROR with err , and so on. Data type keyword Example value info
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/level
Chapter 1. Managing Ansible Content Collection synclists in automation hub
Chapter 1. Managing Ansible Content Collection synclists in automation hub You can use Ansible automation hub to distribute the relevant Red Hat Certified collections content to your users by creating synclists. 1.1. About Red Hat Ansible Certified Content Collections synclists A synclist is a curated group of Red Hat Certified collections that is assembled by your organization administrator that synchronizes with your local Ansible automation hub. You can use synclists to manage only the content that you want and exclude unnecessary collections. You can design and manage your synclist from the content available as part of Red Hat content on console.redhat.com Each synclist has its own unique repository URL that you can use to designate as a remote source for content in automation hub and is securely accessed using an API token. Note Initially, Ansible validated content is only available through the private automation hub installer. Organization Administrators can preload all Ansible validated content collections at install time and put them into a staging state where they can be reviewed to avoid uploading unnecessary collections. 1.2. Creating a synclist of Red Hat Ansible Certified Content Collections You can create a synclist of curated Red Hat Ansible Certified Content in Ansible automation hub on console.redhat.com. Your synclist repository is located under Automation Hub Repo Management , which is updated whenever you choose to manage content within Ansible Certified Content Collections. All Ansible Certified Content Collections are included by default in your initial organization synclist. Prerequisites You have a valid Ansible Automation Platform subscription. You have Organization Administrator permissions for console.redhat.com. The following domain names are part of either the firewall or the proxy's allowlist for successful connection and download of collections from automation hub or Galaxy server: galaxy.ansible.com cloud.redhat.com console.redhat.com sso.redhat.com Ansible automation hub resources are stored in Amazon Simple Storage and the following domain name is in the allow list: automation-hub-prd.s3.us-east-2.amazonaws.com ansible-galaxy.s3.amazonaws.com SSL inspection is disabled either when using self signed certificates or for the Red Hat domains. Procedure Log in to console.redhat.com . Navigate to Automation Hub Collections . Use the toggle switch on each collection to determine whether to exclude it from your synclist. When you finish managing collections for your synclist, navigate to Automation Hub Repo Management to initiate the remote repository synchronization to your private automation hub. Optional: If your remote repository is already configured, you can manually synchronize Red Hat Ansible Certified Content Collections to your private automation hub to update the collections content that you made available to local users.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/managing_red_hat_certified_and_ansible_galaxy_collections_in_automation_hub/assembly-synclists
Chapter 3. Benchmarking Data Grid on OpenShift
Chapter 3. Benchmarking Data Grid on OpenShift For Data Grid clusters running on OpenShift, Red Hat recommends using Hyperfoil to measure performance. Hyperfoil is a benchmarking framework that provides accurate performance results for distributed services. 3.1. Benchmarking Data Grid After you set up and configure your deployment, start benchmarking your Data Grid cluster to analyze and measure performance. Benchmarking shows you where limits exist so you can adjust your environment and tune your Data Grid configuration to get the best performance, which means achieving the lowest latency and highest throughput possible. It is worth noting that optimal performance is a continual process, not an ultimate goal. When your benchmark tests show that your Data Grid deployment has reached a desired level of performance, you cannot expect those results to be fixed or always valid. 3.2. Installing Hyperfoil Set up Hyperfoil on Red Hat OpenShift by creating an operator subscription and downloading the Hyperfoil distribution that includes the command line interface (CLI). Procedure Create a Hyperfoil Operator subscription through the OperatorHub in the OpenShift Web Console. Note Hyperfoil Operator is available as a Community Operator. Red Hat does not certify the Hyperfoil Operator and does not provide support for it in combination with Data Grid. When you install the Hyperfoil Operator you are prompted to acknowledge a warning about the community version before you can continue. Download the latest Hyperfoil version from the Hyperfoil release page . Additional resources hyperfoil.io Installing Hyperfoil on OpenShift 3.3. Creating a Hyperfoil Controller Instantiate a Hyperfoil Controller on Red Hat OpenShift so you can upload and run benchmark tests with the Hyperfoil Command Line Interface (CLI). Prerequisites Create a Hyperfoil Operator subscription. Procedure Define hyperfoil-controller.yaml . USD cat > hyperfoil-controller.yaml<<EOF apiVersion: hyperfoil.io/v1alpha2 kind: Hyperfoil metadata: name: hyperfoil spec: version: latest EOF Apply the Hyperfoil Controller. USD oc apply -f hyperfoil-controller.yaml Retrieve the route that connects you to the Hyperfoil CLI. USD oc get routes NAME HOST/PORT hyperfoil hyperfoil-benchmark.apps.example.net 3.4. Running Hyperfoil benchmarks Run benchmark tests with Hyperfoil to collect performance data for Data Grid clusters. Prerequisites Create a Hyperfoil Operator subscription. Instantiate a Hyperfoil Controller on Red Hat OpenShift. Procedure Create a benchmark test. USD cat > hyperfoil-benchmark.yaml<<EOF name: hotrod-benchmark hotrod: # Replace <USERNAME>:<PASSWORD> with your Data Grid credentials. # Replace <SERVICE_HOSTNAME>:<PORT> with the host name and port for Data Grid. - uri: hotrod://<USERNAME>:<PASSWORD>@<SERVICE_HOSTNAME>:<PORT> caches: # Replace <CACHE-NAME> with the name of your Data Grid cache. - <CACHE-NAME> agents: agent-1: agent-2: agent-3: agent-4: agent-5: phases: - rampupPut: increasingRate: duration: 10s initialUsersPerSec: 100 targetUsersPerSec: 200 maxSessions: 300 scenario: &put - putData: - randomInt: cacheKey <- 1 .. 40000 - randomUUID: cacheValue - hotrodRequest: # Replace <CACHE-NAME> with the name of your Data Grid cache. put: <CACHE-NAME> key: key-USD{cacheKey} value: value-USD{cacheValue} - rampupGet: increasingRate: duration: 10s initialUsersPerSec: 100 targetUsersPerSec: 200 maxSessions: 300 scenario: &get - getData: - randomInt: cacheKey <- 1 .. 
40000 - hotrodRequest: # Replace <CACHE-NAME> with the name of your Data Grid cache. get: <CACHE-NAME> key: key-USD{cacheKey} - doPut: constantRate: startAfter: rampupPut duration: 5m usersPerSec: 10000 maxSessions: 11000 scenario: *put - doGet: constantRate: startAfter: rampupGet duration: 5m usersPerSec: 40000 maxSessions: 41000 scenario: *get EOF Open the route in any browser to access the Hyperfoil CLI. Upload the benchmark test. Run the upload command. [hyperfoil]USD upload Click Select benchmark file and then navigate to the benchmark test on your file system and upload it. Run the benchmark test. [hyperfoil]USD run hotrod-benchmark Get results of the benchmark test. [hyperfoil]USD stats 3.5. Hyperfoil benchmark results Hyperfoil prints results of the benchmarking run in table format with the stats command. [hyperfoil]USD stats Total stats from run <run_id> PHASE METRIC THROUGHPUT REQUESTS MEAN p50 p90 p99 p99.9 p99.99 TIMEOUTS ERRORS BLOCKED Table 3.1. Column descriptions Column Description Value PHASE For each run, Hyperfoil makes GET requests and PUT requests to the Data Grid cluster in two phases. Either doGet or doPut METRIC During both phases of the run, Hyperfoil collects metrics for each GET and PUT request. Either getData or putData THROUGHPUT Captures the total number of requests per second. Number REQUESTS Captures the total number of operations during each phase of the run. Number MEAN Captures the average time for GET or PUT operations to complete. Time in milliseconds ( ms ) p50 Records the amount of time that it takes for 50 percent of requests to complete. Time in milliseconds ( ms ) p90 Records the amount of time that it takes for 90 percent of requests to complete. Time in milliseconds ( ms ) p99 Records the amount of time that it takes for 99 percent of requests to complete. Time in milliseconds ( ms ) p99.9 Records the amount of time that it takes for 99.9 percent of requests to complete. Time in milliseconds ( ms ) p99.99 Records the amount of time that it takes for 99.99 percent of requests to complete. Time in milliseconds ( ms ) TIMEOUTS Captures the total number of timeouts that occurred for operations during each phase of the run. Number ERRORS Captures the total number of errors that occurred during each phase of the run. Number BLOCKED Captures the total number of operations that were blocked or could not complete. Number
[ "cat > hyperfoil-controller.yaml<<EOF apiVersion: hyperfoil.io/v1alpha2 kind: Hyperfoil metadata: name: hyperfoil spec: version: latest EOF", "oc apply -f hyperfoil-controller.yaml", "oc get routes NAME HOST/PORT hyperfoil hyperfoil-benchmark.apps.example.net", "cat > hyperfoil-benchmark.yaml<<EOF name: hotrod-benchmark hotrod: # Replace <USERNAME>:<PASSWORD> with your Data Grid credentials. # Replace <SERVICE_HOSTNAME>:<PORT> with the host name and port for Data Grid. - uri: hotrod://<USERNAME>:<PASSWORD>@<SERVICE_HOSTNAME>:<PORT> caches: # Replace <CACHE-NAME> with the name of your Data Grid cache. - <CACHE-NAME> agents: agent-1: agent-2: agent-3: agent-4: agent-5: phases: - rampupPut: increasingRate: duration: 10s initialUsersPerSec: 100 targetUsersPerSec: 200 maxSessions: 300 scenario: &put - putData: - randomInt: cacheKey <- 1 .. 40000 - randomUUID: cacheValue - hotrodRequest: # Replace <CACHE-NAME> with the name of your Data Grid cache. put: <CACHE-NAME> key: key-USD{cacheKey} value: value-USD{cacheValue} - rampupGet: increasingRate: duration: 10s initialUsersPerSec: 100 targetUsersPerSec: 200 maxSessions: 300 scenario: &get - getData: - randomInt: cacheKey <- 1 .. 40000 - hotrodRequest: # Replace <CACHE-NAME> with the name of your Data Grid cache. get: <CACHE-NAME> key: key-USD{cacheKey} - doPut: constantRate: startAfter: rampupPut duration: 5m usersPerSec: 10000 maxSessions: 11000 scenario: *put - doGet: constantRate: startAfter: rampupGet duration: 5m usersPerSec: 40000 maxSessions: 41000 scenario: *get EOF", "[hyperfoil]USD upload", "[hyperfoil]USD run hotrod-benchmark", "[hyperfoil]USD stats", "[hyperfoil]USD stats Total stats from run <run_id> PHASE METRIC THROUGHPUT REQUESTS MEAN p50 p90 p99 p99.9 p99.99 TIMEOUTS ERRORS BLOCKED" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_performance_and_sizing_guide/benchmarking-datagrid
Appendix A. Revision History
Appendix A. Revision History Revision History Revision 9-2 Wed Mar 15 2017 Mirek Jahoda Version for 6.9 GA publication. Revision 9-1 Tue Jan 3 2017 Mirek Jahoda Version for 6.9 Beta publication. Revision 7-4 Wed May 3 2016 Robert Kratky Release of the SELinux Guide for Red Hat Enterprise Linux 6.8 GA. Revision 7-1 Thu Jul 9 2015 Barbora Ancincova Release of the SELinux Guide for Red Hat Enterprise Linux 6.7 GA. Revision 6-0 Fri Oct 10 2014 Barbora Ancincova Release of the SELinux Guide for Red Hat Enterprise Linux 6.6 GA. Revision 5-0 Fri Sep 12 2014 Barbora Ancincova Release of the SELinux Guide for Red Hat Enterprise Linux 6.5 GA. Revision 4-0 Fri Feb 22 2013 Tomas Capek Release of the SELinux Guide for Red Hat Enterprise Linux 6.4 GA. Revision 3-0 Wed Jun 20 2012 Martin Prpic Release of the SELinux Guide for Red Hat Enterprise Linux 6.3 GA. Revision 2-0 Tue Dec 6 2011 Martin Prpic Release of the SELinux Guide for Red Hat Enterprise Linux 6.2 GA. Revision 1.9-0 Wed Mar 3 2010 Scott Radvan Revision for Red Hat Enterprise Linux 6
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/appe-security-enhanced_linux-revision_history
E.2.26. /proc/swaps
E.2.26. /proc/swaps This file reports swap space and its utilization. For a system with only one swap partition, the output of /proc/swaps may look similar to the following: While some of this information can be found in other files in the /proc/ directory, /proc/swaps provides a snapshot of every swap file name, the type of swap space, the total size, and the amount of space in use (in kilobytes). The priority column is useful when multiple swap files are in use. The higher the priority, the more likely the swap area is to be used.
[ "Filename Type Size Used Priority /dev/mapper/VolGroup00-LogVol01 partition 524280 0 -1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-swaps
Chapter 5. Validating your OpenStack cloud with the Integration Test Suite (tempest)
Chapter 5. Validating your OpenStack cloud with the Integration Test Suite (tempest) You can run Integration Test Suite validations in many ways with the tempest run command. You can also combine multiple options in a single tempest run command. 5.1. Prerequisites An OpenStack environment that contains the Integration Test Suite packages. An Integration Test Suite configuration that corresponds to your OpenStack environment. For more information, see Creating a workspace . 5.2. Listing available tests Use the --list-tests option to list all available tests. Procedure Enter the tempest run command with either the --list-tests or -l options to get a list of available tempest tests: 5.3. Running smoke tests Smoke testing is a type of preliminary testing which covers only the most important functionality. Although these tests are not comprehensive, running smoke tests can save time if they do identify a problem. Procedure Enter the tempest run command with the --smoke option: 5.4. Passing tests by using allowlist files An allowlist file is a file that contains regular expressions to select tests that you want to include. If you use one or more regular expressions, specify each expression on a separate line. Procedure Enter the tempest run command with either the --whitelist-file or -w options to use an allowlist file: 5.5. Skipping tests by using blocklist files A blocklist file is a file that contains regular expressions to select tests that you want to exclude. If you use one or more regular expressions, specify each expression on a separate line. Procedure Enter the tempest run command with either the --blacklist-file or -b options to use a blocklist file: 5.6. Running tests in parallel or in series You can run tests in parallel, or in series. You can also define the number of workers that you want to use when you run parallel tests. By default, the Integration Test Suite uses one worker for each CPU available. Choose to run the tests serially or in parallel: Run the tests serially: Run the tests in parallel (default): Use the --concurrency or -c option to specify the number of workers to use when you run tests in parallel: 5.7. Running specific tests Run specific tests with the --regex option. The regular expression must be Python regular expression: Procedure Enter the following command: For example, use the following example command to run all tests that have names that begin with tempest.scenario : 5.8. Deleting Integration Test Suite objects Enter the tempest cleanup command to delete all Integration Test Suite (tempest) resources. This command also deletes projects, but the command does not delete the administrator account: Procedure Delete the tempest resources:
[ "tempest run -l", "tempest run --smoke", "tempest run -w <whitelist_file>", "tempest run -b <blacklist_file>", "tempest run --serial", "tempest run --parallel", "tempest run --concurrency <workers>", "tempest run --regex <regex>", "tempest run --regex ^tempest.scenario", "tempest cleanup --delete-tempest-conf-objects" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/openstack_integration_test_suite_guide/assembly_validating-your-openstack-cloud-with-the-integration-test-suite-tempest_tempest
8.4.6. Configuration Overview: Remote Node
8.4.6. Configuration Overview: Remote Node This section provides a high-level summary overview of the steps to perform to configure a Pacemaker remote node and to integrate that node into an existing Pacemaker cluster environment. On the node that you will be configuring as a remote node, allow cluster-related services through the local firewall. Note If you are using iptables directly, or some other firewall solution besides firewalld , simply open the following ports, which can be used by various clustering components: TCP ports 2224, 3121, and 21064, and UDP port 5405. Install the pacemaker_remote daemon on the remote node. All nodes (both cluster nodes and remote nodes) must have the same authentication key installed for the communication to work correctly. If you already have a key on an existing node, use that key and copy it to the remote node. Otherwise, create a new key on the remote node. Run the following set of commands on the remote node to create a directory for the authentication key with secure permissions. The following command shows one method to create an encryption key on the remote node. Start and enable the pacemaker_remote daemon on the remote node. On the cluster node, create a location for the shared authentication key with the same path as the authentication key on the remote node and copy the key into that directory. In this example, the key is copied from the remote node where the key was created. Run the following command from a cluster node to create a remote resource. In this case the remote node is remote1 . After creating the remote resource, you can treat the remote node just as you would treat any other node in the cluster. For example, you can create a resource and place a resource constraint on the resource to run on the remote node as in the following commands, which are run from a cluster node. Warning Never involve a remote node connection resource in a resource group, colocation constraint, or order constraint. Configure fencing resources for the remote node. Remote nodes are fenced the same way as cluster nodes. Configure fencing resources for use with remote nodes the same as you would with cluster nodes. Note, however, that remote nodes can never initiate a fencing action. Only cluster nodes are capable of actually executing a fencing operation against another node.
[ "firewall-cmd --permanent --add-service=high-availability success firewall-cmd --reload success", "yum install -y pacemaker-remote resource-agents pcs", "mkdir -p --mode=0750 /etc/pacemaker chgrp haclient /etc/pacemaker", "dd if=/dev/urandom of=/etc/pacemaker/authkey bs=4096 count=1", "systemctl enable pacemaker_remote.service systemctl start pacemaker_remote.service", "mkdir -p --mode=0750 /etc/pacemaker chgrp haclient /etc/pacemaker scp remote1:/etc/pacemaker/authkey /etc/pacemaker/authkey", "pcs resource create remote1 ocf:pacemaker:remote", "pcs resource create webserver apache configfile=/etc/httpd/conf/httpd.conf op monitor interval=30s pcs constraint location webserver prefers remote1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/remotenode_config
Chapter 7. Scheduling Windows container workloads
Chapter 7. Scheduling Windows container workloads You can schedule Windows workloads to Windows compute nodes. Note The WMCO is not supported in clusters that use a cluster-wide proxy because the WMCO is not able to route traffic through the proxy connection for the workloads. Prerequisites You installed the Windows Machine Config Operator (WMCO) using Operator Lifecycle Manager (OLM). You are using a Windows container as the OS image. You have created a Windows compute machine set. 7.1. Windows pod placement Before deploying your Windows workloads to the cluster, you must configure your Windows node scheduling so pods are assigned correctly. Since you have a machine hosting your Windows node, it is managed the same as a Linux-based node. Likewise, scheduling a Windows pod to the appropriate Windows node is completed similarly, using mechanisms like taints, tolerations, and node selectors. With multiple operating systems, and the ability to run multiple Windows OS variants in the same cluster, you must map your Windows pods to a base Windows OS variant by using a RuntimeClass object. For example, if you have multiple Windows nodes running on different Windows Server container versions, the cluster could schedule your Windows pods to an incompatible Windows OS variant. You must have RuntimeClass objects configured for each Windows OS variant on your cluster. Using a RuntimeClass object is also recommended if you have only one Windows OS variant available in your cluster. For more information, see Microsoft's documentation on Host and container version compatibility . Also, it is recommended that you set the spec.os.name.windows parameter in your workload pods. The Windows Machine Config Operator (WMCO) uses this field to authoritatively identify the pod operating system for validation and is used to enforce Windows-specific pod security context constraints (SCCs). Currently, this parameter has no effect on pod scheduling. For more information about this parameter, see the Kubernetes Pods documentation . Important The container base image must be the same Windows OS version and build number that is running on the node where the conainer is to be scheduled. Also, if you upgrade the Windows nodes from one version to another, for example going from 20H2 to 2022, you must upgrade your container base image to match the new version. For more information, see Windows container version compatibility . Additional resources Controlling pod placement using the scheduler Controlling pod placement using node taints Placing pods on specific nodes using node selectors 7.2. Creating a RuntimeClass object to encapsulate scheduling mechanisms Using a RuntimeClass object simplifies the use of scheduling mechanisms like taints and tolerations; you deploy a runtime class that encapsulates your taints and tolerations and then apply it to your pods to schedule them to the appropriate node. Creating a runtime class is also necessary in clusters that support multiple operating system variants. Procedure Create a RuntimeClass object YAML file. 
For example, runtime-class.yaml : apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: windows2019 1 handler: 'runhcs-wcow-process' scheduling: nodeSelector: 2 kubernetes.io/os: 'windows' kubernetes.io/arch: 'amd64' node.kubernetes.io/windows-build: '10.0.17763' tolerations: 3 - effect: NoSchedule key: os operator: Equal value: "windows" - effect: NoSchedule key: os operator: Equal value: "Windows" 1 Specify the RuntimeClass object name, which is defined in the pods you want to be managed by this runtime class. 2 Specify labels that must be present on nodes that support this runtime class. Pods using this runtime class can only be scheduled to a node matched by this selector. The node selector of the runtime class is merged with the existing node selector of the pod. Any conflicts prevent the pod from being scheduled to the node. For Windows 2019, specify the node.kubernetes.io/windows-build: '10.0.17763' label. For Windows 2022, specify the node.kubernetes.io/windows-build: '10.0.20348' label. 3 Specify tolerations to append to pods, excluding duplicates, running with this runtime class during admission. This combines the set of nodes tolerated by the pod and the runtime class. Create the RuntimeClass object: USD oc create -f <file-name>.yaml For example: USD oc create -f runtime-class.yaml Apply the RuntimeClass object to your pod to ensure it is scheduled to the appropriate operating system variant: apiVersion: v1 kind: Pod metadata: name: my-windows-pod spec: runtimeClassName: windows2019 1 # ... 1 Specify the runtime class to manage the scheduling of your pod. 7.3. Sample Windows container workload deployment You can deploy Windows container workloads to your cluster once you have a Windows compute node available. Note This sample deployment is provided for reference only. Example Service object apiVersion: v1 kind: Service metadata: name: win-webserver labels: app: win-webserver spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: win-webserver type: LoadBalancer Example Deployment object apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver name: win-webserver spec: selector: matchLabels: app: win-webserver replicas: 1 template: metadata: labels: app: win-webserver name: win-webserver spec: containers: - name: windowswebserver image: mcr.microsoft.com/windows/servercore:ltsc2019 1 imagePullPolicy: IfNotPresent command: - powershell.exe 2 - -command - USDlistener = New-Object System.Net.HttpListener; USDlistener.Prefixes.Add('http://*:80/'); USDlistener.Start();Write-Host('Listening at http://*:80/'); while (USDlistener.IsListening) { USDcontext = USDlistener.GetContext(); USDresponse = USDcontext.Response; USDcontent='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; USDbuffer = [System.Text.Encoding]::UTF8.GetBytes(USDcontent); USDresponse.ContentLength64 = USDbuffer.Length; USDresponse.OutputStream.Write(USDbuffer, 0, USDbuffer.Length); USDresponse.Close(); }; securityContext: runAsNonRoot: false windowsOptions: runAsUserName: "ContainerAdministrator" os: name: "windows" runtimeClassName: windows2019 3 1 Specify the container image to use: mcr.microsoft.com/powershell:<tag> or mcr.microsoft.com/windows/servercore:<tag> . The container image must match the Windows version running on the node. For Windows 2019, use the ltsc2019 tag. For Windows 2022, use the ltsc2022 tag. 2 Specify the commands to execute on the container. 
For the mcr.microsoft.com/powershell:<tag> container image, you must define the command as pwsh.exe . For the mcr.microsoft.com/windows/servercore:<tag> container image, you must define the command as powershell.exe . 3 Specify the runtime class you created for the Windows operating system variant on your cluster. 7.4. Scaling a compute machine set manually To add or remove an instance of a machine in a compute machine set, you can manually scale the compute machine set. This guidance is relevant to fully automated, installer-provisioned infrastructure installations. Customized, user-provisioned infrastructure installations do not have compute machine sets. Prerequisites Install an OpenShift Container Platform cluster and the oc command line. Log in to oc as a user with cluster-admin permission. Procedure View the compute machine sets that are in the cluster by running the following command: USD oc get machinesets -n openshift-machine-api The compute machine sets are listed in the form of <clusterid>-worker-<aws-region-az> . View the compute machines that are in the cluster by running the following command: USD oc get machine -n openshift-machine-api Set the annotation on the compute machine that you want to delete by running the following command: USD oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine="true" Scale the compute machine set by running one of the following commands: USD oc scale --replicas=2 machineset <machineset> -n openshift-machine-api Or: USD oc edit machineset <machineset> -n openshift-machine-api Tip You can alternatively apply the following YAML to scale the compute machine set: apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2 You can scale the compute machine set up or down. It takes several minutes for the new machines to be available. Important By default, the machine controller tries to drain the node that is backed by the machine until it succeeds. In some situations, such as with a misconfigured pod disruption budget, the drain operation might not be able to succeed. If the drain operation fails, the machine controller cannot proceed removing the machine. You can skip draining the node by annotating machine.openshift.io/exclude-node-draining in a specific machine. Verification Verify the deletion of the intended machine by running the following command: USD oc get machines
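To tie the scheduling pieces in this chapter together, the following minimal pod sketch combines the runtime class created earlier with the recommended spec.os.name setting. This is an illustrative example only: the pod name, image tag, and sleep command are assumptions, and the container image tag must match the Windows Server version actually running on your nodes.

apiVersion: v1
kind: Pod
metadata:
  name: win-test-pod
spec:
  os:
    name: windows
  runtimeClassName: windows2019
  containers:
  - name: pause-shell
    image: mcr.microsoft.com/windows/servercore:ltsc2019
    command: ["powershell.exe", "-command", "Start-Sleep -Seconds 3600"]

Create the pod with oc create -f <file-name>.yaml and verify with oc get pod -o wide that it is scheduled to a Windows node.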
[ "apiVersion: node.k8s.io/v1 kind: RuntimeClass metadata: name: windows2019 1 handler: 'runhcs-wcow-process' scheduling: nodeSelector: 2 kubernetes.io/os: 'windows' kubernetes.io/arch: 'amd64' node.kubernetes.io/windows-build: '10.0.17763' tolerations: 3 - effect: NoSchedule key: os operator: Equal value: \"windows\" - effect: NoSchedule key: os operator: Equal value: \"Windows\"", "oc create -f <file-name>.yaml", "oc create -f runtime-class.yaml", "apiVersion: v1 kind: Pod metadata: name: my-windows-pod spec: runtimeClassName: windows2019 1", "apiVersion: v1 kind: Service metadata: name: win-webserver labels: app: win-webserver spec: ports: # the port that this service should serve on - port: 80 targetPort: 80 selector: app: win-webserver type: LoadBalancer", "apiVersion: apps/v1 kind: Deployment metadata: labels: app: win-webserver name: win-webserver spec: selector: matchLabels: app: win-webserver replicas: 1 template: metadata: labels: app: win-webserver name: win-webserver spec: containers: - name: windowswebserver image: mcr.microsoft.com/windows/servercore:ltsc2019 1 imagePullPolicy: IfNotPresent command: - powershell.exe 2 - -command - USDlistener = New-Object System.Net.HttpListener; USDlistener.Prefixes.Add('http://*:80/'); USDlistener.Start();Write-Host('Listening at http://*:80/'); while (USDlistener.IsListening) { USDcontext = USDlistener.GetContext(); USDresponse = USDcontext.Response; USDcontent='<html><body><H1>Red Hat OpenShift + Windows Container Workloads</H1></body></html>'; USDbuffer = [System.Text.Encoding]::UTF8.GetBytes(USDcontent); USDresponse.ContentLength64 = USDbuffer.Length; USDresponse.OutputStream.Write(USDbuffer, 0, USDbuffer.Length); USDresponse.Close(); }; securityContext: runAsNonRoot: false windowsOptions: runAsUserName: \"ContainerAdministrator\" os: name: \"windows\" runtimeClassName: windows2019 3", "oc get machinesets -n openshift-machine-api", "oc get machine -n openshift-machine-api", "oc annotate machine/<machine_name> -n openshift-machine-api machine.openshift.io/delete-machine=\"true\"", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 2", "oc get machines" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/windows_container_support_for_openshift/scheduling-windows-workloads
Chapter 1. Overview
Chapter 1. Overview 1.1. Major changes in RHEL 9.2 Installer and image creation Key highlights for image builder: Image builder on-prem now offers a new and improved way to create blueprints and images in the image builder web console. Creating customized files and directories in the /etc directory is now supported. The RHEL for Edge Simplified Installer image type is now available in the image builder web console. For more information, see New features - Installer and image creation . RHEL for Edge Key highlights for RHEL for Edge: Specifying a user in a blueprint for simplified-installer images is now supported. The Ignition provisioning utility is now supported in RHEL for Edge Simplified images. Simplified Installer images can now be composed without the FDO customization section in the blueprint. For more information, see New features - RHEL for Edge . Security Key security-related highlights: The OpenSSL secure communications library was rebased to version 3.0.7. SELinux user-space packages were updated to version 3.5. Keylime was rebased to version 6.5.2 OpenSCAP was rebased to version 1.3.7. SCAP Security Guide was rebased to version 0.1.66. A new rule for idle session termination was added to the SCAP Security Guide. Clevis now accepts external tokens. Rsyslog TLS-encrypted logging now supports multiple CA files. Rsyslog privileges are limited to minimize security exposure. The fapolicyd framework now provides filtering of the RPM database. See New features - Security for more information. Dynamic programming languages, web and database servers Later versions of the following Application Streams are now available: Python 3.11 nginx 1.22 PostgreSQL 15 The following components have been upgraded: Git to version 2.39.1 Git LFS to version 3.2.0 See New features - Dynamic programming languages, web and database servers for more information. Compilers and development tools Updated system toolchain The following system toolchain components have been updated in RHEL 9.2: GCC 11.3.1 glibc 2.34 binutils 2.35.2 Updated performance tools and debuggers The following performance tools and debuggers have been updated in RHEL 9.2: GDB 10.2 Valgrind 3.19 SystemTap 4.8 Dyninst 12.1.0 elfutils 0.188 Updated performance monitoring tools The following performance monitoring tools have been updated in RHEL 9.2: PCP 6.0.1 Grafana 9.0.9 Updated compiler toolsets The following compiler toolsets have been updated in RHEL 9.2: GCC Toolset 12 LLVM Toolset 15.0.7 Rust Toolset 1.66 Go Toolset 1.19.6 For detailed changes, see New features - Compilers and development tools . Java implementations in RHEL 9 The RHEL 9 AppStream repository includes: The java-17-openjdk packages, which provide the OpenJDK 17 Java Runtime Environment and the OpenJDK 17 Java Software Development Kit. The java-11-openjdk packages, which provide the OpenJDK 11 Java Runtime Environment and the OpenJDK 11 Java Software Development Kit. The java-1.8.0-openjdk packages, which provide the OpenJDK 8 Java Runtime Environment and the OpenJDK 8 Java Software Development Kit. The Red Hat build of OpenJDK packages share a single set of binaries between its portable Linux releases and RHEL 9.2 and later releases. With this update, there is a change in the process of rebuilding the OpenJDK packages on RHEL from the source RPM. 
For more information about the new rebuilding process, see the README.md file which is available in the SRPM package of the Red Hat build of OpenJDK and is also installed by the java-*-openjdk-headless packages under the /usr/share/doc tree. For more information, see OpenJDK documentation . The web console The RHEL web console now performs additional steps for binding LUKS-encrypted root volumes to NBDE deployments. You can also apply the following cryptographic subpolicies through the graphical interface now: DEFAULT:SHA1 , LEGACY:AD-SUPPORT , and FIPS:OSPP . See New features - The web console for more information. Containers Notable changes include: The podman RHEL System Role is now available. Clients for sigstore signatures with Fulcio and Rekor are now available. Skopeo now supports generating sigstore key pairs. Podman now supports events for auditing. The Container Tools packages have been updated. The Aardvark and Netavark networks stack now supports custom DNS server selection. Toolbox is now available. Podman Quadlet is now available as a Technology Preview. The CNI network stack has been deprecated. See New features - Containers for more information. 1.2. In-place upgrade In-place upgrade from RHEL 8 to RHEL 9 The supported in-place upgrade paths currently are: From RHEL 8.6 to RHEL 9.0 and RHEL 8.8 to RHEL 9.2 on the following architectures: 64-bit Intel 64-bit AMD 64-bit ARM IBM POWER 9 (little endian) IBM Z architectures, excluding z13 From RHEL 8.6 to RHEL 9.0 and RHEL 8.8 to RHEL 9.2 on systems with SAP HANA For more information, see Supported in-place upgrade paths for Red Hat Enterprise Linux . For instructions on performing an in-place upgrade, see Upgrading from RHEL 8 to RHEL 9 . If you are upgrading to RHEL 9.2 with SAP HANA, ensure that the system is certified for SAP prior to the upgrade. For instructions on performing an in-place upgrade on systems with SAP environments, see How to in-place upgrade SAP environments from RHEL 8 to RHEL 9 . Notable enhancements include: The RHEL in-place upgrade path strategy has changed. For more information, see Supported in-place upgrade paths for Red Hat Enterprise Linux . With the release of RHEL 9.2, multiple upgrade paths are now available for the in-place upgrade from RHEL 8 to RHEL 9. For the current release, it is possible to perform an in-place upgrade from either RHEL 8.8 to RHEL 9.2, or RHEL 8.6 to RHEL 9.0.Note that the available upgrade paths differ between standard RHEL systems and RHEL systems with SAP HANA. The latest release of the leapp-upgrade-el8toel9 package now contains all required leapp data files. Customers no longer need to manually download these data files. In-place upgrades of RHEL 8.8 systems in FIPS mode are now supported. In-place upgrades using an ISO image that contains the target version are now possible. RPM signatures are now automatically checked during the in-place upgrade. To disable the automatic check, use the --nogpgcheck option when performing the upgrade. Systems that are subscribed to RHSM are now automatically registered with Red Hat Insights. To disable the automatic registration, set the LEAPP_NO_INSIGHTS_REGISTER environment variable to 1 . Red Hat now collects upgrade-related data, such as the upgrade start and end times and whether the upgrade was successful, for utility usage analysis. To disable data collection, set the LEAPP_NO_RHSM_FACTS environment variable to 1 . 
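A condensed, hedged sketch of an upgrade invocation that uses the options and environment variables described above; it assumes the leapp-upgrade-el8toel9 package is already installed and the system is registered:

# Optional pre-upgrade assessment
leapp preupgrade
# Upgrade without RPM signature checking, without Insights auto-registration,
# and without sending upgrade-related data
LEAPP_NO_INSIGHTS_REGISTER=1 LEAPP_NO_RHSM_FACTS=1 leapp upgrade --nogpgcheck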
In-place upgrade from RHEL 7 to RHEL 9 It is not possible to perform an in-place upgrade directly from RHEL 7 to RHEL 9. However, you can perform an in-place upgrade from RHEL 7 to RHEL 8 and then perform a second in-place upgrade to RHEL 9. For more information, see Upgrading from RHEL 7 to RHEL 8 . 1.3. Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Registration Assistant Kickstart Generator Red Hat Product Certificates Red Hat CVE Checker Kernel Oops Analyzer Red Hat Code Browser VNC Configurator Red Hat OpenShift Container Platform Update Graph Red Hat Satellite Upgrade Helper JVM Options Configuration Tool Load Balancer Configuration Tool Red Hat OpenShift Data Foundation Supportability and Interoperability Checker Ansible Automation Platform Upgrade Assistant Ceph Placement Groups (PGs) per Pool Calculator Red Hat Out of Memory Analyzer 1.4. Additional resources Capabilities and limits of Red Hat Enterprise Linux 9 as compared to other versions of the system are available in the Knowledgebase article Red Hat Enterprise Linux technology capabilities and limits . Information regarding the Red Hat Enterprise Linux life cycle is provided in the Red Hat Enterprise Linux Life Cycle document. The Package manifest document provides a package listing for RHEL 9, including licenses and application compatibility levels. Application compatibility levels are explained in the Red Hat Enterprise Linux 9: Application Compatibility Guide document. Major differences between RHEL 8 and RHEL 9 , including removed functionality, are documented in Considerations in adopting RHEL 9 . Instructions on how to perform an in-place upgrade from RHEL 8 to RHEL 9 are provided by the document Upgrading from RHEL 8 to RHEL 9 . The Red Hat Insights service, which enables you to proactively identify, examine, and resolve known technical issues, is available with all RHEL subscriptions. For instructions on how to install the Red Hat Insights client and register your system to the service, see the Red Hat Insights Get Started page.
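As a hedged illustration of the Red Hat Insights registration mentioned above, the following commands assume a default RHEL 9 installation where the client package is available from the AppStream repository:

dnf install insights-client
insights-client --register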
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.2_release_notes/overview
function::sprint_stack
function::sprint_stack Name function::sprint_stack - Return stack for kernel addresses from string Synopsis Arguments stk String with list of hexadecimal (kernel) addresses Description Perform a symbolic lookup of the addresses in the given string, which is assumed to be the result of a prior call to backtrace . Returns a simple backtrace from the given hex string. One line per address. Includes the symbol name (or hex address if symbol couldn't be resolved) and module name (if found). Includes the offset from the start of the function if found, otherwise the offset will be added to the module (if found, between brackets). Returns the backtrace as string (each line terminated by a newline character). Note that the returned stack will be truncated to MAXSTRINGLEN, to print fuller and richer stacks use print_stack. NOTE it is recommended to use sprint_syms instead of this function.
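A small, hedged example script; the probe point do_sys_open is only an illustration and may need to be adjusted for your kernel:

# stack_demo.stp - print one symbolic kernel stack, then exit.
# Consider sprint_syms, as recommended above, for richer output.
probe kernel.function("do_sys_open") {
    print(sprint_stack(backtrace()))
    exit()
}

Run the script with stap stack_demo.stp .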
[ "sprint_stack:string(stk:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sprint-stack
Chapter 20. Debugging a Running Application
Chapter 20. Debugging a Running Application This chapter will introduce the techniques for debugging an application which can be executed as many times as needed, on a machine directly accessible to the developer. 20.1. Enabling Debugging with Debugging Information To debug applications and libraries, debugging information is required. The following sections describe how to obtain this information. 20.1.1. Debugging Information While debugging any executable code, two kinds of information allow the tools and by extension the programmer to comprehend the binary code: The source code text A description of how the source code text relates to the binary code This is referred to as debugging information. Red Hat Enterprise Linux uses the ELF format for executable binaries, shared libraries, or debuginfo files. Within these ELF files, the DWARF format is used to hold the debug information. DWARF symbols are read by the readelf -w file command. Caution STABS is occasionally used with UNIX. STABS is an older, less capable format. Its use is discouraged by Red Hat. GCC and GDB support the STABS production and consumption on a best effort basis only. Some other tools such as Valgrind and elfutils do not support STABS at all. Additional Resources The DWARF Debugging Standard 20.1.2. Enabling Debugging of C and C++ Applications with GCC Because debugging information is large, it is not included in executable files by default. To enable debugging of your C and C++ applications with it, you must explicitly instruct the compiler to create debugging information. Enabling the Creation of Debugging Information with GCC To enable the creation of debugging information with GCC when compiling and linking code, use the -g option: Optimizations performed by the compiler and linker can result in executable code which is hard to relate to the original source code: variables may be optimized out, loops unrolled, operations merged into the surrounding ones etc. This affects debugging negatively. For an improved debugging experience, consider setting the optimization with the -Og option. However, changing the optimization level changes the executable code and may change the actual behaviour so as to remove some bugs. The -fcompare-debug GCC option tests code compiled by GCC with debug information and without debug information. The test passes if the resulting two binary files are identical. This test ensures that executable code is not affected by any debugging options, which further ensures that there are no hidden bugs in the debug code. Note that using the -fcompare-debug option significantly increases compilation time. See the GCC manual page for details about this option. Additional Resources Section 20.1, "Enabling Debugging with Debugging Information" Using the GNU Compiler Collection (GCC) - 3.10 Options for Debugging Your Program Debugging with GDB - 18.3 Debugging Information in Separate Files The GCC manual page: 20.1.3. Debuginfo Packages Debuginfo packages contain debugging information and debug source code for programs and libraries. Prerequisites Understanding of debugging information Debuginfo Packages For applications and libraries installed in packages from the Red Hat Enterprise Linux repositories, you can obtain the debugging information and debug source code as separate debuginfo packages available through another channel. The debuginfo packages contain .debug files, which contain DWARF debuginfo and the source files used for compiling the binary packages. 
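A brief, hedged illustration of the compiler options discussed above, assuming a hypothetical source file app.c :

gcc -g -Og -o app app.c               # debugging information plus debugger-friendly optimization
gcc -g -O2 -fcompare-debug -c app.c   # optional check that debug options do not alter the generated code
readelf -w app | less                 # inspect the DWARF debugging information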
Debuginfo package contents are installed to the /usr/lib/debug directory. A debuginfo package provides debugging information valid only for a binary package with the same name, version, release, and architecture: Binary package: packagename - version - release . architecture .rpm Debuginfo package: packagename -debuginfo- version - release . architecture .rpm 20.1.4. Getting debuginfo Packages for an Application or Library using GDB The GNU Debugger (GDB) automatically recognizes missing debug information and resolves the package name. Prerequisites The application or library you want to debug is installed on the system GDB is installed on the system The debuginfo-install tool is installed on the system Procedure Start GDB attached to the application or library you want to debug. GDB automatically recognizes missing debugging information and suggests a command to run. Exit GDB without proceeding further: type q and Enter . Run the command suggested by GDB to install the needed debuginfo packages: Installing a debuginfo package for an application or library installs debuginfo packages for all dependencies, too. In case GDB is not able to suggest the debuginfo package, follow the procedure in Section 20.1.5, "Getting debuginfo Packages for an Application or Library Manually" . Additional Resources Red Hat Developer Toolset User Guide - Installing Debugging Information Red Hat Knowledgebase solution - How can I download or install debuginfo packages for RHEL systems? 20.1.5. Getting debuginfo Packages for an Application or Library Manually To manually choose (which) debuginfo packages (to install) for installation, locate the executable file and find the package which installs it. Note The use of GDB to determine the packages for installation is preferable. Use this manual procedure only if GDB is not able to suggest the package to install. Prerequisites The application or library must be installed on the system The debuginfo -install tool must be available on the system Procedure Find the executable file of the application or library. Use the which command to find the application file. Use the locate command to find the library file. If the original reasons for debugging included error messages, pick the result where the library has the same additional numbers in its file name. If in doubt, try following the rest of the procedure with the result where the library file name includes no additional numbers. Note The locate command is provided by the mlocate package. To install it and enable its use: Using the file path, search for a package which provides that file. The output provides a list of packages in the format name - version . distribution . architecture . In this step, only the package name is important, because the version shown in yum output may not be the actual installed version. Important If this step does not produce any results, it is not possible to determine which package provided the binary file and this procedure fails. Use the rpm low-level package management tool to find what package version is installed on the system. Use the package name as an argument: The output provides details for the installed package in the format name - version . distribution . architecture . Install the debuginfo packages using the debuginfo-install utility. In the command, use the package name and other details you determined during the step: Installing a debuginfo package for an application or library installs debuginfo packages for all dependencies, too. 
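Putting the manual procedure above together, a hedged end-to-end sketch might look as follows; myapp and the package name and version are placeholders that you must replace with the values reported on your own system:

which myapp                                     # step 1: locate the executable
yum provides /usr/bin/myapp                     # step 2: find the package that provides it
rpm -q examplepkg                               # step 3: check the installed package version
debuginfo-install examplepkg-1.0-1.el7.x86_64   # step 4: install the matching debuginfo package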
Additional Resources Red Hat Developer Toolset User Guide - Installing Debugging Information Knowledgebase article - How can I download or install debuginfo packages for RHEL systems? 20.2. Inspecting the Application's Internal State with GDB To find why an application does not work properly, control its execution and examine its internal state with a debugger. This section describes how to use the GNU Debugger (GDB) for this task. 20.2.1. GNU Debugger (GDB) A debugger is a tool that enables control of code execution and inspection of the state of the code. This capability is used to investigate what is happening in a program and why. Red Hat Enterprise Linux contains the GNU debugger (GDB) which offers this functionality through a command line user interface. For a graphical frontend to GDB, install the Eclipse integrated development environment. See Using Eclipse . GDB Capabilities A single GDB session can debug: multithreaded and forking programs multiple programs at once programs on remote machines or in containers with the gdbserver utility connected over a TCP/IP network connection Debugging Requirements To debug any executable code, GDB requires the respective debugging information: For programs developed by you, you can create the debugging information while building the code. For system programs installed from packages, their respective debuginfo packages must be installed. 20.2.2. Attaching GDB to a Process In order to examine a process, GDB must be attached to the process. Prerequisites GDB must be installed on the system Starting a Program with GDB When the program is not running as a process, start it with GDB: Replace program with a file name or path to the program. GDB starts execution of the program. You can set up breakpoints and the gdb environment before beginning the execution of the process with the run command. Attaching GDB to an Already Running Process To attach GDB to a program already running as a process: Find the process id ( pid ) with the ps command: Replace program with a file name or path to the program. Attach GDB to this process: Replace program with a file name or path to the program, replace pid with an actual process id number from the ps output. Attaching an Already Running GDB to an Already Running Process To attach an already running GDB to an already running program: Use the shell GDB command to run the ps command and find the program's process id ( pid ): Replace program with a file name or path to the program. Use the attach command to attach GDB to the program: Replace pid by an actual process id number from the ps output. Note In some cases, GDB might not be able to find the respective executable file. Use the file command to specify the path: Additional Resources Debugging with GDB - 2.1 Invoking GDB Debugging with GDB - 4.7 Debugging an Already-running Process 20.2.3. Stepping through Program Code with GDB Once the GDB debugger is attached to a program, you can use a number of commands to control the execution of the program. Prerequisites GDB must be installed on the system You must have the required debugging information available: The program is compiled and built with debugging information, or The relevant debuginfo packages are installed GDB is attached to the program that is to be debugged GDB Commands to Step Through the Code r (run) Start the execution of the program. If run is executed with arguments, those arguments are passed on to the executable as if the program was started normally. 
Users normally issue this command after setting breakpoints. start Start the execution of the program and stop at the beginning of the main function. If start is executed with arguments, those arguments are passed on to the executable as if the program was started normally. c (continue) Continue the execution of the program from the current state. The execution of the program will continue until one of the following becomes true: A breakpoint is reached A specified condition is satisfied A signal is received by the program An error occurs The program terminates n (next) Another commonly known name of this command is step over . Continue the execution of the program from the current state, until the next line of code in the current source file is reached. The execution of the program will continue until one of the following becomes true: A breakpoint is reached A specified condition is satisfied A signal is received by the program An error occurs The program terminates s (step) Another commonly known name of this command is step into . The step command halts execution at each sequential line of code in the current source file. However, if the execution is currently stopped at a source line containing a function call , GDB stops the execution after entering the function call (rather than executing it). until location Continue the execution until the code location specified by the location option is reached. fini (finish) Resume the execution of the program and halt when the execution returns from a function. The execution of the program will continue until one of the following becomes true: A breakpoint is reached A specified condition is satisfied A signal is received by the program An error occurs The program terminates q (quit) Terminate the execution and exit GDB. Additional Resources Section 20.2.5, "Using GDB Breakpoints to Stop Execution at Defined Code Locations" Debugging with GDB - 4.2 Starting your Program Debugging with GDB - 5.2 Continuing and Stepping 20.2.4. Showing Program Internal Values with GDB Displaying the values of the internal variables of a program is important for understanding what the program is doing. GDB offers multiple commands that you can use to inspect the internal variables. This section describes the most useful of these commands. Prerequisites Understanding of the GDB debugger GDB Commands to Display the Internal State of a Program p (print) Displays the value of the argument given. Usually, the argument is the name of a variable of any complexity, from a simple single value to a structure. An argument can also be an expression valid in the current language, including the use of program variables and library functions, or functions defined in the program being tested. It is possible to extend GDB with pretty-printer Python or Guile scripts for customized display of data structures (such as classes, structs) using the print command. bt (backtrace) Display the chain of function calls used to reach the current execution point, or the chain of functions used up until execution was signalled. This is useful for investigating serious bugs (such as segmentation faults) with elusive causes. Adding the full option to the backtrace command displays local variables, too. It is possible to extend GDB with frame filter Python scripts for customized display of data displayed using the bt and info frame commands. The term frame refers to the data associated with a single function call. info The info command is a generic command to provide information about various items.
It takes an option specifying the item. The info args command displays arguments of the function call that is the currently selected frame. The info locals command displays local variables in the currently selected frame. For a list of the possible items, run the command help info in a GDB session: l (list) Show the line in the source code where the program stopped. This command is available only when the program execution is stopped. While not strictly a command to show the internal state, list helps the user understand what changes to the internal state will happen in the step of the program's execution. Additional Resources Red Hat Developer blog entry - The GDB Python API Debugging with GDB - 10.9 Pretty Printing 20.2.5. Using GDB Breakpoints to Stop Execution at Defined Code Locations In many cases, it is advantageous to let the program execute until a certain line of code is reached. Prerequisites Understanding of GDB Using Breakpoints in GDB Breakpoints are markers that tell GDB to stop the execution of a program. Breakpoints are most commonly associated with source code lines: Placing a breakpoint requires specifying the source file and line number. To place a breakpoint : Specify the name of the source code file and the line in that file: When file is not present, the name of the source file at the current point of execution is used: Alternatively, use a function name to place the breakpoint: A program might encounter an error after a certain number of iterations of a task. To specify an additional condition to halt execution: Replace condition with a condition in the C or C++ language. The meaning of file and line is the same as above. To inspect the status of all breakpoints and watchpoints: To remove a breakpoint by using its number as displayed in the output of info br : To remove a breakpoint at a given location: 20.2.6. Using GDB Watchpoints to Stop Execution on Data Access and Changes In many cases, it is advantageous to let the program execute until certain data changes or is accessed. This section lists the most common watchpoints. Prerequisites Understanding of GDB Using Watchpoints in GDB Watchpoints are markers which tell GDB to stop the execution of a program. Watchpoints are associated with data: Placing a watchpoint requires specifying an expression describing a variable, multiple variables, or a memory address. To place a watchpoint for data change (write): Replace expression with an expression that describes what you want to watch. For variables, expression is equal to the name of the variable. To place a watchpoint for data access (read): To place a watchpoint for any data access (both read and write): To inspect the status of all watchpoints and breakpoints: To remove a watchpoint: Replace the num option with the number reported by the info br command. 20.2.7. Debugging Forking or Threaded Programs with GDB Some programs use forking or threads to achieve parallel code execution. Debugging multiple simultaneous execution paths requires special considerations. Prerequisites Understanding of the GDB debugger Understanding of the concepts of process forking and threads Debugging Forked Programs with GDB Forking is a situation when a program ( parent ) creates an independent copy of itself ( child ). Use the following settings and commands to affect the reaction of GDB to an occuring fork: The follow-fork-mode setting controls whether GDB follows the parent or the child after the fork. set follow-fork-mode parent After forking, debug the parent process. 
This is the default. set follow-fork-mode child After forking, debug the child process. show follow-fork-mode Displays the current setting of the follow-fork-mode . The set detach-on-fork setting controls whether the GDB keeps control of the other (not followed) process or leaves it to run. set detach-on-fork on The process which is not followed (depending on the value of the follow-fork-mode ) is detached and runs independently. This is the default. set detach-on-fork off GDB keeps control of both processes. The process which is followed (depending on the value of follow-fork-mode ) is debugged as usual, while the other is suspended. show detach-on-fork Displays the current setting of detach-on-fork . Debugging Threaded Programs with GDB GDB has the ability to debug individual threads, and to manipulate and examine them independently. To make GDB stop only the thread that is examined, use the commands set non-stop on and set target-async on . You can add these commands to the .gdbinit file. After that functionality is turned on, GDB is ready to conduct thread debugging. GDB uses the concept of current thread . By default, commands apply to the current thread only. info threads Displays a list of threads with their id and gid numbers, indicating the current thread. thread id Sets the thread with the specified id as the current thread. thread apply ids command Applies the command command to all threads listed by ids . The ids option is a space-separated list of thread ids. The special value all applies the command to all threads. break location thread id if condition Sets a breakpoint at a certain location with a certain condition only for the thread number id . watch expression thread id Sets a watchpoint defined by expression only for the thread number id . command& Executes the command command and returns immediately to the GDB prompt (gdb) , continuing code execution in the background. interrupt Halts execution in the background. Additional Resources Debugging with GDB - 4.10 Debugging Programs with Multiple Threads Debugging with GDB - 4.11 Debugging Forks 20.3. Recording Application Interactions The executable code of applications interacts with the code of the operating system and shared libraries. Recording an activity log of these interactions can provide enough insight into the application's behavior without debugging the actual application code. Alternatively, analyzing an application's interactions can help to pinpoint the conditions in which a bug manifests. 20.3.1. Useful Tools for Recording Application Interactions Red Hat Enterprise Linux offers multiple tools for analyzing an application's interactions. strace The strace tool enables the tracing of (and tampering with) interactions between an application and the Linux kernel: system calls, signal deliveries, and changes of process state. The strace output is detailed and explains the calls well, because strace interprets parameters and results with knowledge of the underlying kernel code. Numbers are turned into the respective constant names, bitwise combined flags expanded to flag lists, pointers to character arrays dereferenced to provide the actual string, and more. Support for more recent kernel features may be lacking, however. The use of strace does not require any particular setup except for setting up the log filter. Tracing the application code with strace may result in significant slowdown of the application's execution. As a result, strace is not suitable for many production deployments. 
As an alternative, consider using SystemTap in such cases. You can limit the list of traced system calls and signals to reduce the amount of captured data. strace captures only kernel-userspace interactions and does not trace library calls, for example. Consider using ltrace for tracing library calls. ltrace The ltrace tool enables logging of an application's user space calls into shared objects (dynamic libraries). ltrace enables tracing calls to any library. You can filter the traced calls to reduce the amount of captured data. The use of ltrace does not require any particular setup except for setting up the log filter. ltrace is lightweight and fast, offering an alternative to strace : it is possible to trace the respective interfaces in libraries such as glibc with ltrace instead of tracing kernel functions with strace . Note however that ltrace may be less precise at syscall tracing. ltrace is able to decode parameters only for a limited set of library calls: the calls whose prototypes are defined in the relevant configuration files. As part of the ltrace package, prototypes for some libacl , libc , and libm calls and system calls are provided. The ltrace output mostly contains only raw numbers and pointers. The interpretation of ltrace output usually requires consulting the actual interface declarations of the libraries present in the output. SystemTap SystemTap is an instrumentation platform for probing running processes and kernel activity on the Linux system. SystemTap uses its own scripting language for programming custom event handlers. Compared to using strace and ltrace , scripting the logging means more work in the initial setup phase. However, the scripting capabilities extend SystemTap's usefulness beyond just producing logs. SystemTap works by creating and inserting a kernel module. The use of SystemTap is efficient and does not create a significant slowdown of the system or application execution on its own. SystemTap comes with a set of usage examples. GDB The GNU Debugger is primarily meant for debugging, not logging. However, some of its features make it useful even in the scenario where an application's interaction is the primary activity of interest. With GDB, it is possible to conveniently combine the capture of an interaction event with immediate debugging of the subsequent execution path. GDB is best suited for analyzing responses to infrequent or singular events, after the initial identification of problematic situations by other tools. Using GDB in any scenario with frequent events becomes inefficient or even impossible. Additional Resources Red Hat Enterprise Linux SystemTap Beginners Guide Red Hat Developer Toolset User Guide 20.3.2. Monitoring an Application's System Calls with strace The strace tool enables tracing of (and optional tampering with) interactions between an application and the Linux kernel: system calls, signal deliveries, and changes of process state. Prerequisites strace is installed on the system To install strace , run as root: Procedure Note that the tracing specification syntax of strace offers regular expressions and syscall classes to help with the identification of system calls. Run or attach to the process you wish to monitor. If the program you want to monitor is not running, start strace and specify the program : Options used in the example above are not mandatory. Use when needed: The -f option is an acronym for "follow forks". This option traces children created by the fork, vfork, and clone system calls. 
The -v or -e abbrev=none option disables abbreviation of output, omitting various structure fields. The -tt option is a variant of the -t option that prefixes each line with an absolute timestamp. With the -tt option, the time printed includes microseconds. The -T option prints the amount of time spent in each system call at the end of the line. The -yy option is a variant of the -y option that enables printing of paths associated with file descriptor numbers. The -yy option prints not only paths, but also protocol-specific information associated with socket file descriptors and block or character device number associated with device file descriptors. The -s option controls the maximum string size to be printed. Note that filenames are not considered strings and are always printed in full. -e trace controls the set of system calls to trace. Replace call with a comma-separated list of system calls to be displayed. If call is left, strace will display all system calls. Shorthands for some groups of system calls are provided in the strace(1) manual page. If the program is already running, find its process ID ( pid ) and attach strace to it: If you do not wish to trace any forked processes or threads, do not use the -f option. strace displays the system calls made by the application and their details. In most cases, an application and its libraries make a large number of calls and strace output appears immediately, if no filter for system calls is set. strace exits when all the traced processes exit. To terminate the monitoring before the traced program exits, press Ctrl+C . If strace started the program, it will send the terminating signal (SIGINT, in this case) to the program being started. Note, however, that program, in turn, may ignore that signal. If you attached strace to an already running program, the program terminates together with strace . Analyze the list of system calls done by the application. Problems with resource access or availability are present in the log as calls returning errors. Values passed to the system calls and patterns of call sequences provide insight into the causes of the application's behaviour. If the application crashes, the important information is probably at the end of the log. The output contains a lot of extra information. However, you can construct a more precise filter and repeat the procedure. Notes It is advantageous to both see the output and save it to a file. To do this, run the tee command: To see separate output that corresponds to different processes, run: Output for a process with the process ID ( pid ) will be stored in your_log_file.pid . Additional Resources The strace(1) manual page. Knowledgebase article - How do I use strace to trace system calls made by a command? Red Hat Developer Toolset User Guide - strace 20.3.3. Monitoring the Application's Library Function Calls with ltrace The ltrace tool enables monitoring of the calls done by an application to functions available in libraries (shared objects). Prerequisites ltrace is installed on the system Procedure Identify the libraries and functions of interest, if possible. If the program you want to monitor is not running, start ltrace and specify program : Use the options -e and -l to filter the output: Supply the function names to be displayed as function . The -e function option can be used multiple times. If left out, ltrace will display calls to all functions. Instead of specifying functions, you can specify whole libraries with the -l library option. 
This option behaves similarly to the -e function option. See the ltrace (1)_ manual page for more information. If the program is already running, find its process id ( pid ) and attach ltrace to it: If you do not wish to trace any forked processes or threads, leave out the -f option. ltrace displays the library calls made by the application. In most cases, an application will make a large number of calls and ltrace output will appear immediately if no filter is set. ltrace exits when the program exits. To terminate the monitoring before the traced program exits, press ctrl+C . If ltrace started the program, the program terminates together with ltrace . If you attached ltrace to an already running program, the program terminates together with ltrace . Analyze the list of library calls done by the application. If the application crashes, the important information is probably at the end of the log. The output contains a lot of unnecessary information. However, you can construct a more precise filter and repeat the procedure. Note It is advantageous to both see the output and save it to a file. Use the tee command to achieve this: Additional Resources The strace(1) manual page Red Hat Developer Toolset User Guide - ltrace 20.3.4. Monitoring the Application's System Calls with SystemTap The SystemTap tool enables registering custom event handlers for kernel events. In comparison with strace , it is harder to use, but SystemTap is more efficient and enables more complicated processing logic. Prerequisites SystemTap is installed on the system Procedure Create a file my_script.stp with the contents: Find the process ID ( pid ) of the process you wish to monitor: Run SystemTap with the script: The value of pid is the process id. The script is compiled to a kernel module which is then loaded. This introduces a slight delay between entering the command and getting the output. When the process performs a system call, the call name and its parameters are printed to the terminal. The script exits when the process terminates, or when you press Ctrl+C . Additional Resources SystemTap Beginners Guide SystemTap Tapset Reference A larger SystemTap script, which approximates strace functionality, is available as /usr/share/systemtap/examples/process/strace.stp . To run the script: # stap --example strace.stp -x pid or # stap --example strace.stp -c "cmd args ... " 20.3.5. Using GDB to Intercept Application System Calls GDB enables stopping the execution in various kinds of situations arising during the execution of a program. To stop the execution when the program performs a system call, use a GDB catchpoint . Prerequisites Understanding of GDB breakpoints GDB is attached to the program Stopping Program Execution on System Calls with GDB Set the catchpoint: The command catch syscall sets a special type of breakpoint that halts execution when a system call is performed by the program. The syscall-name option specifies the name of the call. You can specify multiple catchpoints for various system calls. Leaving out the syscall-name option causes GDB to stop on any system call. If the program has not started execution, start it: If the program execution is only halted, resume it: GDB halts the execution after any specified system call is performed by the program. Additional Resources Section 20.2.4, "Showing Program Internal Values with GDB" Section 20.2.3, "Stepping through Program Code with GDB" Debugging with GDB - 5.1.3 Setting Catchpoints 20.3.6. 
Using GDB to Intercept the Handling of Signals by Applications GDB enables stopping the execution in various kinds of situations arising during the execution of a program. To stop the execution when the program receives a signal from the operating system, use a GDB catchpoint . Prerequisites Understanding of GDB breakpoints GDB is attached to the program Stopping the Program Execution on Receiving a Signal with GDB Set the catchpoint: The command catch signal sets a special type of breakpoint that halts execution when a signal is received by the program. The signal-type option specifies the type of the signal. Use the special value 'all' to catch all the signals. If the program has not started execution, start it: If the program execution is only halted, resume it: GDB halts the execution after the program receives any specified signal. Additional Resources Section 20.2.4, "Showing Program Internal Values with GDB" Section 20.2.3, "Stepping through Program Code with GDB" Debugging With GDB - 5.3.1 Setting Catchpoints
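A short, hedged session combining the two catchpoint types above; ./myprog , the chosen system call, and the signal are examples only:

gdb ./myprog
(gdb) catch syscall write      # halt whenever the program performs the write system call
(gdb) catch signal SIGSEGV     # also halt when the program receives SIGSEGV
(gdb) r
(gdb) c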
[ "gcc ... -g", "man gcc", "gdb -q /bin/ls Reading symbols from /usr/bin/ls...Reading symbols from /usr/bin/ls...(no debugging symbols found)...done. (no debugging symbols found)...done. Missing separate debuginfos, use: debuginfo-install coreutils-8.22-21.el7.x86_64 (gdb)", "(gdb) q", "debuginfo-install coreutils-8.22-21.el7.x86_64", "which nautilus /usr/bin/nautilus", "locate libz | grep so /usr/lib64/libz.so /usr/lib64/libz.so.1 /usr/lib64/libz.so.1.2.7", "yum install mlocate updatedb", "yum provides /usr/lib64/libz.so.1.2.7 Loaded plugins: product-id, search-disabled-repos, subscription-manager zlib-1.2.7-17.el7.x86_64 : The compression and decompression library Repo : @anaconda/7.4 Matched from: Filename : /usr/lib64/libz.so.1.2.7", "rpm -q zlib zlib-1.2.7-17.el7.x86_64", "debuginfo-install zlib-1.2.7-17.el7.x86_64", "gdb program", "ps -C program -o pid h pid", "gdb program -p pid", "(gdb) shell ps -C program -o pid h pid", "(gdb) attach pid", "(gdb) file path/to/program", "(gdb) help info", "(gdb) br file:line", "(gdb) br line", "(gdb) br function_name", "(gdb) br file:line if condition", "(gdb) info br", "(gdb) delete number", "(gdb) clear file:line", "(gdb) watch expression", "(gdb) rwatch expression", "(gdb) awatch expression", "(gdb) info br", "(gdb) delete num", "yum install strace", "strace -fvttTyy -s 256 -e trace= call program", "ps -C program (...) strace -fvttTyy -s 256 -e trace= call -p pid", "strace ...-o |tee your_log_file.log >&2", "strace ... -ff -o your_log_file", "ltrace -f -l library -e function program", "ps -C program (...) ltrace ... -p pid", "ltrace ... |& tee your_log_file.log", "probe begin { printf(\"waiting for syscalls of process %d \\n\", target()) } probe syscall.* { if (pid() == target()) printf(\"%s(%s)\\n\", name, argstr) } probe process.end { if (pid() == target()) exit() }", "ps -aux", "stap my_script.stp -x pid", "(gdb) catch syscall syscall-name", "(gdb) r", "(gdb) c", "(gdb) catch signal signal-type", "(gdb) r", "(gdb) c" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/developer_guide/debugging-running-application
Chapter 14. Setting up a broker cluster
Chapter 14. Setting up a broker cluster A cluster consists of multiple broker instances that have been grouped together. Broker clusters enhance performance by distributing the message processing load across multiple brokers. In addition, broker clusters can minimize downtime through high availability. You can connect brokers together in many different cluster topologies. Within the cluster, each active broker manages its own messages and handles its own connections. You can also balance client connections across the cluster and redistribute messages to avoid broker starvation. 14.1. Understanding broker clusters Before creating a broker cluster, you should understand some important clustering concepts. 14.1.1. How broker clusters balance message load When brokers are connected to form a cluster, AMQ Broker automatically balances the message load between the brokers. This ensures that the cluster can maintain high message throughput. Consider a symmetric cluster of four brokers. Each broker is configured with a queue named OrderQueue . The OrderProducer client connects to Broker1 and sends messages to OrderQueue . Broker1 forwards the messages to the other brokers in round-robin fashion. The OrderConsumer clients connected to each broker consume the messages. The exact order depends on the order in which the brokers started. Figure 14.1. Message load balancing Without message load balancing, the messages sent to Broker1 would stay on Broker1 and only OrderConsumer1 would be able to consume them. While AMQ Broker automatically load balances messages by default, you can configure: the cluster to load balance messages to brokers that have a matching queue. the cluster to load balance messages to brokers that have a matching queue with active consumers. the cluster to not load balance, but to perform redistribution of messages from queues that do not have any consumers to queues that do have consumers. an address to automatically redistribute messages from queues that do not have any consumers to queues that do have consumers. Additional resources The message load balancing policy is configured with the message-load-balancing property in each broker's cluster connection. For more information, see Appendix C, Cluster Connection Configuration Elements . For more information about message redistribution, see Section 14.4.2, "Configuring message redistribution" . 14.1.2. How broker clusters improve reliability Broker clusters make high availability and failover possible, which makes them more reliable than standalone brokers. By configuring high availability, you can ensure that client applications can continue to send and receive messages even if a broker encounters a failure event. With high availability, the brokers in the cluster are grouped into live-backup groups. A live-backup group consists of a live broker that serves client requests, and one or more backup brokers that wait passively to replace the live broker if it fails. If a failure occurs, the backup brokers replaces the live broker in its live-backup group, and the clients reconnect and continue their work. 14.1.3. Understanding node IDs The broker node ID is a Globally Unique Identifier (GUID) generated programmatically when the journal for a broker instance is first created and initialized. The node ID is stored in the server.lock file. The node ID is used to uniquely identify a broker instance, regardless of whether the broker is a standalone instance, or part of a cluster. 
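Before continuing with node IDs, here is a hedged broker.xml fragment for the message-load-balancing property mentioned in the load-balancing discussion above; the connector name is illustrative, and ON_DEMAND is believed to be the default value, with STRICT and OFF as the other common choices:

<cluster-connection name="my-cluster">
   <connector-ref>netty-connector</connector-ref>
   <message-load-balancing>ON_DEMAND</message-load-balancing>
   ...
</cluster-connection>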
Live-backup broker pairs share the same node ID, since they share the same journal. In a broker cluster, broker instances (nodes) connect to each other and create bridges and internal "store-and-forward" queues. The names of these internal queues are based on the node IDs of the other broker instances. Broker instances also monitor cluster broadcasts for node IDs that match their own. A broker produces a warning message in the log if it identifies a duplicate ID. When you are using the replication high availability (HA) policy, a master broker that starts and has check-for-live-server set to true searches for a broker that is using its node ID. If the master broker finds another broker using the same node ID, it either does not start, or initiates failback, based on the HA configuration. The node ID is durable , meaning that it survives restarts of the broker. However, if you delete a broker instance (including its journal), then the node ID is also permanently deleted. Additional resources For more information about configuring the replication HA policy, see Configuring replication high availability . 14.1.4. Common broker cluster topologies You can connect brokers to form either a symmetric or chain cluster topology. The topology you implement depends on your environment and messaging requirements. Symmetric clusters In a symmetric cluster, every broker is connected to every other broker. This means that every broker is no more than one hop away from every other broker. Figure 14.2. Symmetric cluster Each broker in a symmetric cluster is aware of all of the queues that exist on every other broker in the cluster and the consumers that are listening on those queues. Therefore, symmetric clusters are able to load balance and redistribute messages more optimally than a chain cluster. Symmetric clusters are easier to set up than chain clusters, but they can be difficult to use in environments in which network restrictions prevent brokers from being directly connected. Chain clusters In a chain cluster, each broker in the cluster is not connected to every broker in the cluster directly. Instead, the brokers form a chain with a broker on each end of the chain and all other brokers just connecting to the previous and next brokers in the chain. Figure 14.3. Chain cluster Chain clusters are more difficult to set up than symmetric clusters, but can be useful when brokers are on separate networks and cannot be directly connected. By using a chain cluster, an intermediary broker can indirectly connect two brokers to enable messages to flow between them even though the two brokers are not directly connected. 14.1.5. Broker discovery methods Discovery is the mechanism by which brokers in a cluster propagate their connection details to each other. AMQ Broker supports both dynamic discovery and static discovery . Dynamic discovery Each broker in the cluster broadcasts its connection settings to the other members through either UDP multicast or JGroups. In this method, each broker uses: A broadcast group to push information about its cluster connection to other potential members of the cluster. A discovery group to receive and store cluster connection information about the other brokers in the cluster. Static discovery If you are not able to use UDP or JGroups in your network, or if you want to manually specify each member of the cluster, you can use static discovery. In this method, a broker "joins" the cluster by connecting to a second broker and sending its connection details.
The second broker then propagates those details to the other brokers in the cluster. 14.1.6. Cluster sizing considerations Before creating a broker cluster, consider your messaging throughput, topology, and high availability requirements. These factors affect the number of brokers to include in the cluster. Note After creating the cluster, you can adjust the size by adding and removing brokers. You can add and remove brokers without losing any messages. Messaging throughput The cluster should contain enough brokers to provide the messaging throughput that you require. The more brokers in the cluster, the greater the throughput. However, large clusters can be complex to manage. Topology You can create either symmetric clusters or chain clusters. The type of topology you choose affects the number of brokers you may need. For more information, see Section 14.1.4, "Common broker cluster topologies" . High availability If you require high availability (HA), consider choosing an HA policy before creating the cluster. The HA policy affects the size of the cluster, because each master broker should have at least one slave broker. For more information, see Section 14.3, "Implementing high availability" . 14.2. Creating a broker cluster You create a broker cluster by configuring a cluster connection on each broker that should participate in the cluster. The cluster connection defines how the broker should connect to the other brokers. You can create a broker cluster that uses static discovery or dynamic discovery (either UDP multicast or JGroups). Prerequisites You should have determined the size of the broker cluster. For more information, see Section 14.1.6, "Cluster sizing considerations" . 14.2.1. Creating a broker cluster with static discovery You can create a broker cluster by specifying a static list of brokers. Use this static discovery method if you are unable to use UDP multicast or JGroups on your network. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add the following connectors: A connector that defines how other brokers can connect to this one One or more connectors that define how this broker can connect to other brokers in the cluster <configuration> <core> ... <connectors> <connector name="netty-connector">tcp://localhost:61617</connector> 1 <connector name="broker2">tcp://localhost:61618</connector> 2 <connector name="broker3">tcp://localhost:61619</connector> </connectors> ... </core> </configuration> 1 This connector defines connection information that other brokers can use to connect to this one. This information will be sent to other brokers in the cluster during discovery. 2 The broker2 and broker3 connectors define how this broker can connect to two other brokers in the cluster, one of which will always be available. If there are other brokers in the cluster, they will be discovered by one of these connectors when the initial connection is made. For more information about connectors, see Section 2.3, "About connectors" . Add a cluster connection and configure it to use static discovery. By default, the cluster connection will load balance messages for all addresses in a symmetric topology. <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <static-connectors> <connector-ref>broker2-connector</connector-ref> <connector-ref>broker3-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections> ... 
</core> </configuration> cluster-connection Use the name attribute to specify the name of the cluster connection. connector-ref The connector that defines how other brokers can connect to this one. static-connectors One or more connectors that this broker can use to make an initial connection to another broker in the cluster. After making this initial connection, the broker will discover the other brokers in the cluster. You only need to configure this property if the cluster uses static discovery. Configure any additional properties for the cluster connection. These additional cluster connection properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix C, Cluster Connection Configuration Elements . Create the cluster user and password. AMQ Broker ships with default cluster credentials, but you should change them to prevent unauthorized remote clients from using these default credentials to connect to the broker. Important The cluster password must be the same on every broker in the cluster. <configuration> <core> ... <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> ... </core> </configuration> Repeat this procedure on each additional broker. You can copy the cluster configuration to each additional broker. However, do not copy any of the other AMQ Broker data files (such as the bindings, journal, and large messages directories). These files must be unique among the nodes in the cluster or the cluster will not form properly. Additional resources For an example of a broker cluster that uses static discovery, see the clustered-static-discovery AMQ Broker example program . 14.2.2. Creating a broker cluster with UDP-based dynamic discovery You can create a broker cluster in which the brokers discover each other dynamically through UDP multicast. Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add a connector. This connector defines connection information that other brokers can use to connect to this one. This information will be sent to other brokers in the cluster during discovery. <configuration> <core> ... <connectors> <connector name="netty-connector">tcp://localhost:61617</connector> </connectors> ... </core> </configuration> Add a UDP broadcast group. The broadcast group enables the broker to push information about its cluster connection to the other brokers in the cluster. This broadcast group uses UDP to broadcast the connection settings: <configuration> <core> ... <broadcast-groups> <broadcast-group name="my-broadcast-group"> <local-bind-address>172.16.9.3</local-bind-address> <local-bind-port>-1</local-bind-port> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: broadcast-group Use the name attribute to specify a unique name for the broadcast group. local-bind-address The address to which the UDP socket is bound. If you have multiple network interfaces on your broker, you should specify which one you want to use for broadcasts. If this property is not specified, the socket will be bound to an IP address chosen by the operating system. This is a UDP-specific attribute. 
local-bind-port The port to which the datagram socket is bound. In most cases, use the default value of -1 , which specifies an anonymous port. This parameter is used in connection with local-bind-address . This is a UDP-specific attribute. group-address The multicast address to which the data will be broadcast. It is a class D IP address in the range 224.0.0.0 - 239.255.255.255 inclusive. The address 224.0.0.0 is reserved and is not available for use. This is a UDP-specific attribute. group-port The UDP port number used for broadcasting. This is a UDP-specific attribute. broadcast-period (optional) The interval in milliseconds between consecutive broadcasts. The default value is 2000 milliseconds. connector-ref The previously configured cluster connector that should be broadcasted. Add a UDP discovery group. The discovery group defines how this broker receives connector information from other brokers. The broker maintains a list of connectors (one entry for each broker). As it receives broadcasts from a broker, it updates its entry. If it does not receive a broadcast from a broker for a length of time, it removes the entry. This discovery group uses UDP to discover the brokers in the cluster: <configuration> <core> ... <discovery-groups> <discovery-group name="my-discovery-group"> <local-bind-address>172.16.9.7</local-bind-address> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: discovery-group Use the name attribute to specify a unique name for the discovery group. local-bind-address (optional) If the machine on which the broker is running uses multiple network interfaces, you can specify the network interface to which the discovery group should listen. This is a UDP-specific attribute. group-address The multicast address of the group on which to listen. It should match the group-address in the broadcast group that you want to listen from. This is a UDP-specific attribute. group-port The UDP port number of the multicast group. It should match the group-port in the broadcast group that you want to listen from. This is a UDP-specific attribute. refresh-timeout (optional) The amount of time in milliseconds that the discovery group waits after receiving the last broadcast from a particular broker before removing that broker's connector pair entry from its list. The default is 10000 milliseconds (10 seconds). Set this to a much higher value than the broadcast-period on the broadcast group. Otherwise, brokers might periodically disappear from the list even though they are still broadcasting (due to slight differences in timing). Create a cluster connection and configure it to use dynamic discovery. By default, the cluster connection will load balance messages for all addresses in a symmetric topology. <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name="my-discovery-group"/> </cluster-connection> </cluster-connections> ... </core> </configuration> cluster-connection Use the name attribute to specify the name of the cluster connection. connector-ref The connector that defines how other brokers can connect to this one. discovery-group-ref The discovery group that this broker should use to locate other members of the cluster. 
You only need to configure this property if the cluster uses dynamic discovery. Configure any additional properties for the cluster connection. These additional cluster connection properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix C, Cluster Connection Configuration Elements . Create the cluster user and password. AMQ Broker ships with default cluster credentials, but you should change them to prevent unauthorized remote clients from using these default credentials to connect to the broker. Important The cluster password must be the same on every broker in the cluster. <configuration> <core> ... <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> ... </core> </configuration> Repeat this procedure on each additional broker. You can copy the cluster configuration to each additional broker. However, do not copy any of the other AMQ Broker data files (such as the bindings, journal, and large messages directories). These files must be unique among the nodes in the cluster or the cluster will not form properly. Additional resources For an example of a broker cluster configuration that uses dynamic discovery with UDP, see the clustered-queue AMQ Broker example program . 14.2.3. Creating a broker cluster with JGroups-based dynamic discovery If you are already using JGroups in your environment, you can use it to create a broker cluster in which the brokers discover each other dynamically. Prerequisites JGroups must be installed and configured. For an example of a JGroups configuration file, see the clustered-jgroups AMQ Broker example program . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add a connector. This connector defines connection information that other brokers can use to connect to this one. This information will be sent to other brokers in the cluster during discovery. <configuration> <core> ... <connectors> <connector name="netty-connector">tcp://localhost:61617</connector> </connectors> ... </core> </configuration> Within the <core> element, add a JGroups broadcast group. The broadcast group enables the broker to push information about its cluster connection to the other brokers in the cluster. This broadcast group uses JGroups to broadcast the connection settings: <configuration> <core> ... <broadcast-groups> <broadcast-group name="my-broadcast-group"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: broadcast-group Use the name attribute to specify a unique name for the broadcast group. jgroups-file The name of JGroups configuration file to initialize JGroups channels. The file must be in the Java resource path so that the broker can load it. jgroups-channel The name of the JGroups channel to connect to for broadcasting. broadcast-period (optional) The interval, in milliseconds, between consecutive broadcasts. The default value is 2000 milliseconds. connector-ref The previously configured cluster connector that should be broadcasted. Add a JGroups discovery group. The discovery group defines how connector information is received. 
The broker maintains a list of connectors (one entry for each broker). As it receives broadcasts from a broker, it updates its entry. If it does not receive a broadcast from a broker for a length of time, it removes the entry. This discovery group uses JGroups to discover the brokers in the cluster: <configuration> <core> ... <discovery-groups> <discovery-group name="my-discovery-group"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> ... </core> </configuration> The following parameters are required unless otherwise noted: discovery-group Use the name attribute to specify a unique name for the discovery group. jgroups-file The name of JGroups configuration file to initialize JGroups channels. The file must be in the Java resource path so that the broker can load it. jgroups-channel The name of the JGroups channel to connect to for receiving broadcasts. refresh-timeout (optional) The amount of time in milliseconds that the discovery group waits after receiving the last broadcast from a particular broker before removing that broker's connector pair entry from its list. The default is 10000 milliseconds (10 seconds). Set this to a much higher value than the broadcast-period on the broadcast group. Otherwise, brokers might periodically disappear from the list even though they are still broadcasting (due to slight differences in timing). Create a cluster connection and configure it to use dynamic discovery. By default, the cluster connection will load balance messages for all addresses in a symmetric topology. <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name="my-discovery-group"/> </cluster-connection> </cluster-connections> ... </core> </configuration> cluster-connection Use the name attribute to specify the name of the cluster connection. connector-ref The connector that defines how other brokers can connect to this one. discovery-group-ref The discovery group that this broker should use to locate other members of the cluster. You only need to configure this property if the cluster uses dynamic discovery. Configure any additional properties for the cluster connection. These additional cluster connection properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix C, Cluster Connection Configuration Elements . Create the cluster user and password. AMQ Broker ships with default cluster credentials, but you should change them to prevent unauthorized remote clients from using these default credentials to connect to the broker. Important The cluster password must be the same on every broker in the cluster. <configuration> <core> ... <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> ... </core> </configuration> Repeat this procedure on each additional broker. You can copy the cluster configuration to each additional broker. However, do not copy any of the other AMQ Broker data files (such as the bindings, journal, and large messages directories). These files must be unique among the nodes in the cluster or the cluster will not form properly. 
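The procedure above references a JGroups configuration file (for example, test-jgroups-file_ping.xml ) but does not show its contents. The following is a minimal sketch of what such a file might contain, assuming a TCP transport with file-based discovery. The protocol names follow standard JGroups conventions, but the specific stack and attribute values shown here are illustrative assumptions, not the configuration shipped with the clustered-jgroups example program.

<!-- Hypothetical JGroups stack: TCP transport with FILE_PING discovery through a shared directory -->
<config xmlns="urn:org:jgroups">
    <TCP bind_port="7800"/>
    <!-- Members discover each other by writing marker files to a directory that all brokers can access -->
    <FILE_PING location="../file.ping.dir"/>
    <MERGE3 min_interval="10000" max_interval="30000"/>
    <FD_SOCK/>
    <FD_ALL timeout="30000"/>
    <VERIFY_SUSPECT timeout="1500"/>
    <pbcast.NAKACK2/>
    <UNICAST3/>
    <pbcast.STABLE desired_avg_gossip="50000"/>
    <pbcast.GMS join_timeout="3000"/>
    <FRAG2 frag_size="60000"/>
</config>

The same file is referenced by the jgroups-file element in both the broadcast group and the discovery group, and, as noted above, it must be on the broker's Java resource path so that the broker can load it.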
Additional resources For an example of a broker cluster that uses dynamic discovery with JGroups, see the clustered-jgroups AMQ Broker example program . 14.3. Implementing high availability You can improve the reliability of a broker cluster by implementing high availability (HA), enabling the cluster to continue functioning even if one or more brokers go offline. Implementing HA involves several steps: Configure a broker cluster for your HA implementation as described in Section 14.2, "Creating a broker cluster" . You should understand what live-backup groups are, and choose an HA policy that best meets your requirements. See Understanding how HA works in AMQ Broker . When you have chosen a suitable HA policy, configure the HA policy on each broker in the cluster. See: Configuring shared store high availability Configuring replication high availability Configuring limited high availability with live-only Configuring high availability with colocated backups Configure your client applications to use failover . Note In the event that you later need to troubleshoot a broker cluster configured for high availability, it is recommended that you enable Garbage Collection (GC) logging for each Java Virtual Machine (JVM) instance that is running a broker in the cluster. To learn how to enable GC logs on your JVM, consult the official documentation for the Java Development Kit (JDK) version used by your JVM. For more information on the JVM versions that AMQ Broker supports, see Red Hat AMQ 7 Supported Configurations . 14.3.1. Understanding high availability In AMQ Broker, you implement high availability (HA) by grouping the brokers in the cluster into live-backup groups . In a live-backup group, a live broker is linked to a backup broker, which can take over for the live broker if it fails. AMQ Broker also provides several different strategies for failover (called HA policies ) within a live-backup group. 14.3.1.1. How live-backup groups provide high availability In AMQ Broker, you implement high availability (HA) by linking together the brokers in your cluster to form live-backup groups . Live-backup groups provide failover , which means that if one broker fails, another broker can take over its message processing. A live-backup group consists of one live broker (sometimes called the master broker) linked to one or more backup brokers (sometimes called slave brokers). The live broker serves client requests, while the backup brokers wait in passive mode. If the live broker fails, a backup broker replaces the live broker, enabling the clients to reconnect and continue their work. 14.3.1.2. High availability policies A high availability (HA) policy defines how failover happens in a live-backup group. AMQ Broker provides several different HA policies: Shared store (recommended) The live and backup brokers store their messaging data in a common directory on a shared file system; typically a Storage Area Network (SAN) or Network File System (NFS) server. You can also store broker data in a specified database if you have configured JDBC-based persistence. With shared store, if a live broker fails, the backup broker loads the message data from the shared store and takes over for the failed live broker. In most cases, you should use shared store instead of replication. Because shared store does not replicate data over the network, it typically provides better performance than replication. 
Shared store also avoids network isolation (also called "split brain") issues in which a live broker and its backup become live at the same time. Replication The live and backup brokers continuously synchronize their messaging data over the network. If the live broker fails, the backup broker loads the synchronized data and takes over for the failed live broker. Data synchronization between the live and backup brokers ensures that no messaging data is lost if the live broker fails. When the live and backup brokers initially join together, the live broker replicates all of its existing data to the backup broker over the network. Once this initial phase is complete, the live broker replicates persistent data to the backup broker as the live broker receives it. This means that if the live broker drops off the network, the backup broker has all of the persistent data that the live broker has received up to that point. Because replication synchronizes data over the network, network failures can result in network isolation in which a live broker and its backup become live at the same time. Live-only (limited HA) When a live broker is stopped gracefully, it copies its messages and transaction state to another live broker and then shuts down. Clients can then reconnect to the other broker to continue sending and receiving messages. Additional resources For more information about the persistent message data that is shared between brokers in a live-backup group, see Section 6.1, "Persisting message data in journals" . 14.3.1.3. Replication policy limitations Network isolation (sometimes called "split brain") is a limitation of the replication high availability (HA) policy. You should understand how it occurs, and how to avoid it. Network isolation can happen if a live broker and its backup lose their connection. In this situation, both a live broker and its backup can become active at the same time. Specifically, if the backup broker can still connect to more than half of the live brokers in the cluster, it also becomes active. Because there is no message replication between the brokers in this situation, they each serve clients and process messages without the other knowing it. In this case, each broker has a completely different journal. Recovering from this situation can be very difficult and in some cases, not possible. To avoid network isolation, consider the following: To eliminate any possibility of network isolation, use the shared store HA policy. If you do use the replication HA policy, you can reduce (but not eliminate) the chance of encountering network isolation by using at least three live-backup pairs . Using at least three live-backup pairs ensures that a majority result can be achieved in any quorum vote that takes place when a live-backup broker pair experiences a replication interruption. Some additional considerations when you use the replication HA policy are described below: When a live broker fails and the backup transitions to live, no further replication takes place until a new backup broker is attached to the live, or failback to the original live broker occurs. If the backup broker in a live-backup group fails, the live broker continues to serve messages. However, messages are not replicated until another broker is added as a backup, or the original backup broker is restarted. During that time, messages are persisted only to the live broker. Suppose that both brokers in a live-backup pair were previously shut down, but are now available to be restarted. 
In this case, to avoid message loss, you need to restart the most recently active broker first. If the most recently active broker was the backup broker, you need to manually reconfigure this broker as a master broker to enable it to be restarted first. 14.3.2. Configuring shared store high availability You can use the shared store high availability (HA) policy to implement HA in a broker cluster. With shared store, both live and backup brokers access a common directory on a shared file system; typically a Storage Area Network (SAN) or Network File System (NFS) server. You can also store broker data in a specified database if you have configured JDBC-based persistence. With shared store, if a live broker fails, the backup broker loads the message data from the shared store and takes over for the failed live broker. In general, a SAN offers better performance (for example, speed) versus an NFS server, and is the recommended option, if available. If you need to use an NFS server, see Red Hat AMQ 7 Supported Configurations for more information about network file systems that AMQ Broker supports. In most cases, you should use shared store HA instead of replication. Because shared store does not replicate data over the network, it typically provides better performance than replication. Shared store also avoids network isolation (also called "split brain") issues in which a live broker and its backup become live at the same time. Note When using shared store, the startup time for the backup broker depends on the size of the message journal. When the backup broker takes over for a failed live broker, it loads the journal from the shared store. This process can be time consuming if the journal contains a lot of data. 14.3.2.1. Configuring an NFS shared store When using shared store high availability, you must configure both the live and backup brokers to use a common directory on a shared file system. Typically, you use a Storage Area Network (SAN) or Network File System (NFS) server. Listed below are some recommended configuration options when mounting an exported directory from an NFS server on each of your broker machine instances. sync Specifies that all changes are immediately flushed to disk. intr Allows NFS requests to be interrupted if the server is shut down or cannot be reached. noac Disables attribute caching. This behavior is needed to achieve attribute cache coherence among multiple clients. soft Specifies that if the NFS server is unavailable, the error should be reported rather than waiting for the server to come back online. lookupcache=none Disables lookup caching. timeo=n The time, in deciseconds (tenths of a second), that the NFS client (that is, the broker) waits for a response from the NFS server before it retries a request. For NFS over TCP, the default timeo value is 600 (60 seconds). For NFS over UDP, the client uses an adaptive algorithm to estimate an appropriate timeout value for frequently used request types, such as read and write requests. retrans=n The number of times that the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each request three times. Important It is important to use reasonable values when you configure the timeo and retrans options. A default timeo wait time of 600 deciseconds (60 seconds) combined with a retrans value of 5 retries can result in a five-minute wait for AMQ Broker to detect an NFS disconnection. 
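For illustration, the options described above can be combined into a single mount invocation. The following is a minimal sketch; the NFS server name, exported path, and local mount point are placeholder assumptions, and the timeo and retrans values shown are arbitrary illustrations rather than recommendations, so choose values that match your failure-detection requirements.

mount -t nfs -o sync,intr,noac,soft,lookupcache=none,timeo=50,retrans=2 nfs.example.com:/exports/amq-sharedstore /opt/amq/sharedstore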
Additional resources To learn how to mount an exported directory from an NFS server, see Mounting an NFS share with mount in the Red Hat Enterprise Linux documentation. For information about network file systems supported by AMQ Broker, see Red Hat AMQ 7 Supported Configurations . 14.3.2.2. Configuring shared store high availability This procedure shows how to configure shared store high availability for a broker cluster. Prerequisites A shared storage system must be accessible to the live and backup brokers. Typically, you use a Storage Area Network (SAN) or Network File System (NFS) server to provide the shared store. For more information about supported network file systems, see Red Hat AMQ 7 Supported Configurations . If you have configured JDBC-based persistence, you can use your specified database to provide the shared store. To learn how to configure JDBC persistence, see Section 6.2, "Persisting message data in a database" . Procedure Group the brokers in your cluster into live-backup groups. In most cases, a live-backup group should consist of two brokers: a live broker and a backup broker. If you have six brokers in your cluster, you would need three live-backup groups. Create the first live-backup group consisting of one live broker and one backup broker. Open the live broker's <broker_instance_dir> /etc/broker.xml configuration file. If you are using: A network file system to provide the shared store, verify that the live broker's paging, bindings, journal, and large messages directories point to a shared location that the backup broker can also access. <configuration> <core> ... <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> ... </core> </configuration> A database to provide the shared store, ensure that both the master and backup broker can connect to the same database and have the same configuration specified in the database-store element of the broker.xml configuration file. An example configuration is shown below. <configuration> <core> <store> <database-store> <jdbc-connection-url>jdbc:oracle:data/oracle/database-store;create=true</jdbc-connection-url> <jdbc-user>ENC(5493dd76567ee5ec269d11823973462f)</jdbc-user> <jdbc-password>ENC(56a0db3b71043054269d11823973462f)</jdbc-password> <bindings-table-name>BINDINGS_TABLE</bindings-table-name> <message-table-name>MESSAGE_TABLE</message-table-name> <large-message-table-name>LARGE_MESSAGES_TABLE</large-message-table-name> <page-store-table-name>PAGE_STORE_TABLE</page-store-table-name> <node-manager-store-table-name>NODE_MANAGER_TABLE</node-manager-store-table-name> <jdbc-driver-class-name>oracle.jdbc.driver.OracleDriver</jdbc-driver-class-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>15000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store> </core> </configuration> Configure the live broker to use shared store for its HA policy. <configuration> <core> ... <ha-policy> <shared-store> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> </shared-store> </ha-policy> ... </core> </configuration> failover-on-shutdown If this broker is stopped normally, this property controls whether the backup broker should become live and take over. 
Open the backup broker's <broker_instance_dir> /etc/broker.xml configuration file. If you are using: A network file system to provide the shared store, verify that the backup broker's paging, bindings, journal, and large messages directories point to the same shared location as the live broker. <configuration> <core> ... <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> ... </core> </configuration> A database to provide the shared store, ensure that both the master and backup brokers can connect to the same database and have the same configuration specified in the database-store element of the broker.xml configuration file. Configure the backup broker to use shared store for its HA policy. <configuration> <core> ... <ha-policy> <shared-store> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </shared-store> </ha-policy> ... </core> </configuration> failover-on-shutdown If this broker has become live and then is stopped normally, this property controls whether the backup broker (the original live broker) should become live and take over. allow-failback If failover has occurred and the backup broker has taken over for the live broker, this property controls whether the backup broker should fail back to the original live broker when it restarts and reconnects to the cluster. Note Failback is intended for a live-backup pair (one live broker paired with a single backup broker). If the live broker is configured with multiple backups, then failback will not occur. Instead, if a failover event occurs, the backup broker will become live, and the backup will become its backup. When the original live broker comes back online, it will not be able to initiate failback, because the broker that is now live already has a backup. restart-backup This property controls whether the backup broker automatically restarts after it fails back to the live broker. The default value of this property is true . Repeat Step 2 for each remaining live-backup group in the cluster. 14.3.3. Configuring replication high availability You can use the replication high availability (HA) policy to implement HA in a broker cluster. With replication, persistent data is synchronized between the live and backup brokers. If a live broker encounters a failure, message data is synchronized to the backup broker and it takes over for the failed live broker. You should use replication as an alternative to shared store, if you do not have a shared file system. However, replication can result in network isolation in which a live broker and its backup become live at the same time. Replication requires at least three live-backup pairs to lessen (but not eliminate) the risk of network isolation. Using at least three live-backup broker pairs enables your cluster to use quorum voting to avoid having two live brokers. The sections that follow explain how quorum voting works and how to configure replication HA for a broker cluster with at least three live-backup pairs. Note Because the live and backup brokers must synchronize their messaging data over the network, replication adds a performance overhead. This synchronization process blocks journal operations, but it does not block clients. 
You can configure the maximum amount of time that journal operations can be blocked for data synchronization. 14.3.3.1. About quorum voting In the event that a live broker and its backup experience an interrupted replication connection, you can configure a process called quorum voting to mitigate against network isolation (or "split brain") issues. During network isolation, a live broker and its backup can become active at the same time. The following table describes the two types of quorum voting that AMQ Broker uses. Vote type Description Initiator Required configuration Participants Action based on vote result Backup vote If a backup broker loses its replication connection to the live broker, the backup broker decides whether or not to start based on the result of this vote. Backup broker None. A backup vote happens automatically when a backup broker loses connection to its replication partner. However, you can control the properties of a backup vote by specifying custom values for these parameters: quorum-vote-wait vote-retries vote-retry-wait Other live brokers in the cluster The backup broker starts if it receives a majority (that is, a quorum ) vote from the other live brokers in the cluster, indicating that its replication partner is no longer available. Live vote If a live broker loses connection to its replication partner, the live broker decides whether to continue running based on this vote. Live broker A live vote happens when a live broker loses connection to its replication partner and vote-on-replication-failure is set to true . A backup broker that has become active is considered a live broker, and can initiate a live vote. Other live brokers in the cluster The live broker shuts down if it doesn't receive a majority vote from the other live brokers in the cluster, indicating that its cluster connection is still active. Important Listed below are some important things to note about how the configuration of your broker cluster affects the behavior of quorum voting. For a quorum vote to succeed, the size of your cluster must allow a majority result to be achieved. Therefore, when you use the replication HA policy, your cluster should have at least three live-backup broker pairs. The more live-backup broker pairs that you add to your cluster, the more you increase the overall fault tolerance of the cluster. For example, suppose you have three live-backup pairs. If you lose a complete live-backup pair, the two remaining live-backup pairs cannot achieve a majority result in any subsequent quorum vote. This situation means that any further replication interruption in the cluster might cause a live broker to shut down, and prevent its backup broker from starting up. By configuring your cluster with, say, five broker pairs, the cluster can experience at least two failures, while still ensuring a majority result from any quorum vote. If you intentionally reduce the number of live-backup broker pairs in your cluster, the previously established threshold for a majority vote does not automatically decrease. During this time, any quorum vote triggered by a lost replication connection cannot succeed, making your cluster more vulnerable to network isolation. To make your cluster recalculate the majority threshold for a quorum vote, first shut down the live-backup pairs that you are removing from your cluster. Then, restart the remaining live-backup pairs in the cluster. When all of the remaining brokers have been restarted, the cluster recalculates the quorum vote threshold. 14.3.3.2. 
Configuring a broker cluster for replication high availability The following procedure describes how to configure replication high-availability (HA) for a six-broker cluster. In this topology, the six brokers are grouped into three live-backup pairs: each of the three live brokers is paired with a dedicated backup broker. Replication requires at least three live-backup pairs to lessen (but not eliminate) the risk of network isolation. Prerequisites You must have a broker cluster with at least six brokers. The six brokers are configured into three live-backup pairs. For more information about adding brokers to a cluster, see Chapter 14, Setting up a broker cluster . Procedure Group the brokers in your cluster into live-backup groups. In most cases, a live-backup group should consist of two brokers: a live broker and a backup broker. If you have six brokers in your cluster, you need three live-backup groups. Create the first live-backup group consisting of one live broker and one backup broker. Open the live broker's <broker_instance_dir> /etc/broker.xml configuration file. Configure the live broker to use replication for its HA policy. <configuration> <core> ... <ha-policy> <replication> <master> <check-for-live-server>true</check-for-live-server> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> ... </master> </replication> </ha-policy> ... </core> </configuration> check-for-live-server If the live broker fails, this property controls whether clients should fail back to it when it restarts. If you set this property to true , when the live broker restarts after a failover, it searches for another broker in the cluster with the same node ID. If the live broker finds another broker with the same node ID, this indicates that a backup broker successfully started upon failure of the live broker. In this case, the live broker synchronizes its data with the backup broker. The live broker then requests the backup broker to shut down. If the backup broker is configured for failback, as shown below, it shuts down. The live broker then resumes its active role, and clients reconnect to it. Warning If you do not set check-for-live-server to true on the live broker, you might experience duplicate messaging handling when you restart the live broker after a failover. Specifically, if you restart a live broker with this property set to false , the live broker does not synchronize data with its backup broker. In this case, the live broker might process the same messages that the backup broker has already handled, causing duplicates. group-name A name for this live-backup group. To form a live-backup group, the live and backup brokers must be configured with the same group name. vote-on-replication-failure This property controls whether a live broker initiates a quorum vote called a live vote in the event of an interrupted replication connection. A live vote is a way for a live broker to determine whether it or its partner is the cause of the interrupted replication connection. Based on the result of the vote, the live broker either stays running or shuts down. Important For a quorum vote to succeed, the size of your cluster must allow a majority result to be achieved. Therefore, when you use the replication HA policy, your cluster should have at least three live-backup broker pairs. The more broker pairs you configure in your cluster, the more you increase the overall fault tolerance of the cluster. 
For example, suppose you have three live-backup broker pairs. If you lose connection to a complete live-backup pair, the two remaining live-backup pairs can no longer achieve a majority result in a quorum vote. This situation means that any subsequent replication interruption might cause a live broker to shut down, and prevent its backup broker from starting up. By configuring your cluster with, say, five broker pairs, the cluster can experience at least two failures, while still ensuring a majority result from any quorum vote. Configure any additional HA properties for the live broker. These additional HA properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix F, Replication High Availability Configuration Elements . Open the backup broker's <broker_instance_dir> /etc/broker.xml configuration file. Configure the backup (that is, slave) broker to use replication for its HA policy. <configuration> <core> ... <ha-policy> <replication> <slave> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> ... </slave> </replication> </ha-policy> ... </core> </configuration> allow-failback If failover has occurred and the backup broker has taken over for the live broker, this property controls whether the backup broker should fail back to the original live broker when it restarts and reconnects to the cluster. Note Failback is intended for a live-backup pair (one live broker paired with a single backup broker). If the live broker is configured with multiple backups, then failback will not occur. Instead, if a failover event occurs, the backup broker will become live, and the backup will become its backup. When the original live broker comes back online, it will not be able to initiate failback, because the broker that is now live already has a backup. restart-backup This property controls whether the backup broker automatically restarts after it fails back to the live broker. The default value of this property is true . group-name The group name of the live broker to which this backup should connect. A backup broker connects only to a live broker that shares the same group name. vote-on-replication-failure This property controls whether a live broker initiates a quorum vote called a live vote in the event of an interrupted replication connection. A backup broker that has become active is considered a live broker and can initiate a live vote. A live vote is a way for a live broker to determine whether it or its partner is the cause of the interrupted replication connection. Based on the result of the vote, the live broker either stays running or shuts down. (Optional) Configure properties of the quorum votes that the backup broker initiates. <configuration> <core> ... <ha-policy> <replication> <slave> ... <vote-retries>12</vote-retries> <vote-retry-wait>5000</vote-retry-wait> ... </slave> </replication> </ha-policy> ... </core> </configuration> vote-retries This property controls how many times the backup broker retries the quorum vote in order to receive a majority result that allows the backup broker to start up. vote-retry-wait This property controls how long, in milliseconds, that the backup broker waits between each retry of the quorum vote. Configure any additional HA properties for the backup broker. 
These additional HA properties have default values that are suitable for most common use cases. Therefore, you only need to configure these properties if you do not want the default behavior. For more information, see Appendix F, Replication High Availability Configuration Elements . Repeat step 2 for each additional live-backup group in the cluster. If there are six brokers in the cluster, repeat this procedure two more times; once for each remaining live-backup group. Additional resources For examples of broker clusters that use replication for HA, see the HA example programs . For more information about node IDs, see Understanding node IDs . 14.3.4. Configuring limited high availability with live-only The live-only HA policy enables you to shut down a broker in a cluster without losing any messages. With live-only, when a live broker is stopped gracefully, it copies its messages and transaction state to another live broker and then shuts down. Clients can then reconnect to the other broker to continue sending and receiving messages. The live-only HA policy only handles cases when the broker is stopped gracefully. It does not handle unexpected broker failures. While live-only HA prevents message loss, it may not preserve message order. If a broker configured with live-only HA is stopped, its messages will be appended to the ends of the queues of another broker. Note When a broker is preparing to scale down, it sends a message to its clients before they are disconnected informing them which new broker is ready to process their messages. However, clients should reconnect to the new broker only after their initial broker has finished scaling down. This ensures that any state, such as queues or transactions, is available on the other broker when the client reconnects. The normal reconnect settings apply when the client is reconnecting, so you should set these high enough to deal with the time needed to scale down. This procedure describes how to configure each broker in the cluster to scale down. After completing this procedure, whenever a broker is stopped gracefully, it will copy its messages and transaction state to another broker in the cluster. Procedure Open the first broker's <broker_instance_dir> /etc/broker.xml configuration file. Configure the broker to use the live-only HA policy. <configuration> <core> ... <ha-policy> <live-only> </live-only> </ha-policy> ... </core> </configuration> Configure a method for scaling down the broker cluster. Specify the broker or group of brokers to which this broker should scale down. To scale down to... Do this... A specific broker in the cluster Specify the connector of the broker to which you want to scale down. <live-only> <scale-down> <connectors> <connector-ref>broker1-connector</connector-ref> </connectors> </scale-down> </live-only> Any broker in the cluster Specify the broker cluster's discovery group. <live-only> <scale-down> <discovery-group-ref discovery-group-name="my-discovery-group"/> </scale-down> </live-only> A broker in a particular broker group Specify a broker group. <live-only> <scale-down> <group-name>my-group-name</group-name> </scale-down> </live-only> Repeat this procedure for each remaining broker in the cluster. Additional resources For an example of a broker cluster that uses live-only to scale down the cluster, see the scale-down example programs . 14.3.5. Configuring high availability with colocated backups Rather than configure live-backup groups, you can colocate backup brokers in the same JVM as another live broker. 
In this configuration, each live broker is configured to request another live broker to create and start a backup broker in its JVM. Figure 14.4. Colocated live and backup brokers You can use colocation with either shared store or replication as the high availability (HA) policy. The new backup broker inherits its configuration from the live broker that creates it. The name of the backup is set to colocated_backup_n where n is the number of backups the live broker has created. In addition, the backup broker inherits the configuration for its connectors and acceptors from the live broker that creates it. By default, a port offset of 100 is applied to each. For example, if the live broker has an acceptor for port 61616, the first backup broker created will use port 61716, the second backup will use 61816, and so on. Directories for the journal, large messages, and paging are set according to the HA policy you choose. If you choose shared store, the requesting broker notifies the target broker which directories to use. If replication is chosen, directories are inherited from the creating broker and have the new backup's name appended to them. This procedure configures each broker in the cluster to use shared store HA, and to request a backup to be created and colocated with another broker in the cluster. Procedure Open the first broker's <broker_instance_dir> /etc/broker.xml configuration file. Configure the broker to use an HA policy and colocation. In this example, the broker is configured with shared store HA and colocation. <configuration> <core> ... <ha-policy> <shared-store> <colocated> <request-backup>true</request-backup> <max-backups>1</max-backups> <backup-request-retries>-1</backup-request-retries> <backup-request-retry-interval>5000</backup-request-retry-interval> <backup-port-offset>150</backup-port-offset> <excludes> <connector-ref>remote-connector</connector-ref> </excludes> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </colocated> </shared-store> </ha-policy> ... </core> </configuration> request-backup By setting this property to true , this broker will request a backup broker to be created by another live broker in the cluster. max-backups The number of backup brokers that this broker can create. If you set this property to 0 , this broker will not accept backup requests from other brokers in the cluster. backup-request-retries The number of times this broker should try to request a backup broker to be created. The default is -1 , which means unlimited tries. backup-request-retry-interval The amount of time in milliseconds that the broker should wait before retrying a request to create a backup broker. The default is 5000 , or 5 seconds. backup-port-offset The port offset to use for the acceptors and connectors for a new backup broker. If this broker receives a request to create a backup for another broker in the cluster, it will create the backup broker with the ports offset by this amount. The default is 100 . excludes (optional) Excludes connectors from the backup port offset. If you have configured any connectors for external brokers that should be excluded from the backup port offset, add a <connector-ref> for each of the connectors. master The shared store or replication failover configuration for this broker. slave The shared store or replication failover configuration for this broker's backup. 
Repeat this procedure for each remaining broker in the cluster. Additional resources For examples of broker clusters that use colocated backups, see the HA example programs . 14.3.6. Configuring clients to fail over After configuring high availability in a broker cluster, you configure your clients to fail over. Client failover ensures that if a broker fails, the clients connected to it can reconnect to another broker in the cluster with minimal downtime. Note In the event of transient network problems, AMQ Broker automatically reattaches connections to the same broker. This is similar to failover, except that the client reconnects to the same broker. You can configure two different types of client failover: Automatic client failover The client receives information about the broker cluster when it first connects. If the broker to which it is connected fails, the client automatically reconnects to the broker's backup, and the backup broker re-creates any sessions and consumers that existed on each connection before failover. Application-level client failover As an alternative to automatic client failover, you can instead code your client applications with your own custom reconnection logic in a failure handler. Procedure Use AMQ Core Protocol JMS to configure your client application with automatic or application-level failover. For more information, see Using the AMQ Core Protocol JMS Client . 14.4. Enabling message redistribution If your broker cluster is configured with message-load-balancing set to ON_DEMAND or OFF_WITH_REDISTRIBUTION , you can configure message redistribution to prevent messages from being "stuck" in a queue that does not have a consumer to consume the messages. This section contains information about: Understanding message redistribution Configuring message redistribution 14.4.1. Understanding message redistribution Broker clusters use load balancing to distribute the message load across the cluster. When configuring load balancing in the cluster connection, you can enable redistribution using the following message-load-balancing settings: ON_DEMAND - enable load balancing and allow redistribution OFF_WITH_REDISTRIBUTION - disable load balancing but allow redistribution In both cases, the broker forwards messages only to other brokers that have matching consumers. This behavior ensures that messages are not moved to queues that do not have any consumers to consume the messages. However, if the consumers attached to a queue close after the messages are forwarded to the broker, those messages become "stuck" in the queue and are not consumed. This issue is sometimes called starvation . Message redistribution prevents starvation by automatically redistributing the messages from queues that have no consumers to brokers in the cluster that do have matching consumers. With OFF_WITH_REDISTRIBUTION , the broker only forwards messages to other brokers that have matching consumers if there are no active local consumers, enabling you to prioritize a broker while providing an alternative when consumers are not available. Message redistribution supports the use of filters (also known as selectors ), that is, messages are redistributed when they do not match the selectors of the available local consumers. Additional resources For more information about cluster load balancing, see Section 14.1.1, "How broker clusters balance message load" . 14.4.2. Configuring message redistribution This procedure shows how to configure message redistribution with load balancing. 
If you want message redistribution without load balancing, set <message-load-balancing> to OFF_WITH_REDISTRIBUTION . Procedure Open the <broker_instance_dir> /etc/broker.xml configuration file. In the <cluster-connection> element, verify that <message-load-balancing> is set to ON_DEMAND . <configuration> <core> ... <cluster-connections> <cluster-connection name="my-cluster"> ... <message-load-balancing>ON_DEMAND</message-load-balancing> ... </cluster-connection> </cluster-connections> </core> </configuration> Within the <address-settings> element, set the redistribution delay for a queue or set of queues. In this example, messages load balanced to my.queue will be redistributed 5000 milliseconds after the last consumer closes. <configuration> <core> ... <address-settings> <address-setting match="my.queue"> <redistribution-delay>5000</redistribution-delay> </address-setting> </address-settings> ... </core> </configuration> address-setting Set the match attribute to be the name of the queue for which you want messages to be redistributed. You can use the broker wildcard syntax to specify a range of queues. For more information, see Section 4.2, "Applying address settings to sets of addresses" . redistribution-delay The amount of time (in milliseconds) that the broker should wait after this queue's final consumer closes before redistributing messages to other brokers in the cluster. If you set this to 0 , messages will be redistributed immediately. However, you should typically set a delay before redistributing - it is common for a consumer to close but another one to be quickly created on the same queue. Repeat this procedure for each additional broker in the cluster. Additional resources For an example of a broker cluster configuration that redistributes messages, see the queue-message-redistribution AMQ Broker example program . 14.5. Configuring clustered message grouping Message grouping enables clients to send groups of messages of a particular type to be processed serially by the same consumer. By adding a grouping handler to each broker in the cluster, you ensure that clients can send grouped messages to any broker in the cluster and still have those messages consumed in the correct order by the same consumer. Note Clustering provides parallelism, enabling you to scale horizontally, whereas grouping provides a serialization technique to direct grouped messages to specific consumers. Red Hat recommends that you use either clustering or grouping, and avoid using clustering and grouping together. There are two types of grouping handlers: local handlers and remote handlers . They enable the broker cluster to route all of the messages in a particular group to the appropriate queue so that the intended consumer can consume them in the correct order. Prerequisites There should be at least one consumer on each broker in the cluster. When a message is pinned to a consumer on a queue, all messages with the same group ID will be routed to that queue. If the consumer is removed, the queue will continue to receive the messages even if there are no consumers. Procedure Configure a local handler on one broker in the cluster. If you are using high availability, this should be a master broker. Open the broker's <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add a local handler: The local handler serves as an arbiter for the remote handlers. It stores route information and communicates it to the other brokers. <configuration> <core> ... 
<grouping-handler name="my-grouping-handler"> <type>LOCAL</type> <timeout>10000</timeout> </grouping-handler> ... </core> </configuration> grouping-handler Use the name attribute to specify a unique name for the grouping handler. type Set this to LOCAL . timeout The amount of time to wait (in milliseconds) for a decision to be made about where to route the message. The default is 5000 milliseconds. If the timeout is reached before a routing decision is made, an exception is thrown, which ensures strict message ordering. When the broker receives a message with a group ID, it proposes a route to a queue to which the consumer is attached. If the route is accepted by the grouping handlers on the other brokers in the cluster, then the route is established: all brokers in the cluster will forward messages with this group ID to that queue. If the broker's route proposal is rejected, then it proposes an alternate route, repeating the process until a route is accepted. If you are using high availability, copy the local handler configuration to the master broker's slave broker. Copying the local handler configuration to the slave broker prevents a single point of failure for the local handler. On each remaining broker in the cluster, configure a remote handler. Open the broker's <broker_instance_dir> /etc/broker.xml configuration file. Within the <core> element, add a remote handler: <configuration> <core> ... <grouping-handler name="my-grouping-handler"> <type>REMOTE</type> <timeout>5000</timeout> </grouping-handler> ... </core> </configuration> grouping-handler Use the name attribute to specify a unique name for the grouping handler. type Set this to REMOTE . timeout The amount of time to wait (in milliseconds) for a decision to be made about where to route the message. The default is 5000 milliseconds. Set this value to at least half of the value of the local handler. Additional resources For an example of a broker cluster configured for message grouping, see the clustered-grouping AMQ Broker example program . 14.6. Connecting clients to a broker cluster You can use the AMQ JMS clients to connect to the cluster. By using JMS, you can configure your messaging clients to discover the list of brokers dynamically or statically. You can also configure client-side load balancing to distribute the client sessions created from the connection across the cluster. Procedure Use AMQ Core Protocol JMS to configure your client application to connect to the broker cluster. For more information, see Using the AMQ Core Protocol JMS Client .
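As a point of reference, failover and cluster-connection settings for the AMQ Core Protocol JMS client are typically expressed as parameters on the connection URL, for example in a jndi.properties file. The broker host names, ports, and parameter values below are placeholders rather than values taken from this document; consult the client guide for the full list of supported options.
java.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory
connectionFactory.ConnectionFactory=(tcp://broker1:61616,tcp://broker2:61616)?ha=true&reconnectAttempts=3&retryInterval=1000
With a UDP discovery group such as the one configured earlier, the URL can instead reference the group address, for example connectionFactory.ConnectionFactory=udp://231.7.7.7:9876 , so that the client discovers the broker topology dynamically instead of relying on a static list.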
[ "<configuration> <core> <connectors> <connector name=\"netty-connector\">tcp://localhost:61617</connector> 1 <connector name=\"broker2\">tcp://localhost:61618</connector> 2 <connector name=\"broker3\">tcp://localhost:61619</connector> </connectors> </core> </configuration>", "<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <static-connectors> <connector-ref>broker2-connector</connector-ref> <connector-ref>broker3-connector</connector-ref> </static-connectors> </cluster-connection> </cluster-connections> </core> </configuration>", "<configuration> <core> <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> </core> </configuration>", "<configuration> <core> <connectors> <connector name=\"netty-connector\">tcp://localhost:61617</connector> </connectors> </core> </configuration>", "<configuration> <core> <broadcast-groups> <broadcast-group name=\"my-broadcast-group\"> <local-bind-address>172.16.9.3</local-bind-address> <local-bind-port>-1</local-bind-port> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> </core> </configuration>", "<configuration> <core> <discovery-groups> <discovery-group name=\"my-discovery-group\"> <local-bind-address>172.16.9.7</local-bind-address> <group-address>231.7.7.7</group-address> <group-port>9876</group-port> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> </core> </configuration>", "<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name=\"my-discovery-group\"/> </cluster-connection> </cluster-connections> </core> </configuration>", "<configuration> <core> <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> </core> </configuration>", "<configuration> <core> <connectors> <connector name=\"netty-connector\">tcp://localhost:61617</connector> </connectors> </core> </configuration>", "<configuration> <core> <broadcast-groups> <broadcast-group name=\"my-broadcast-group\"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <broadcast-period>2000</broadcast-period> <connector-ref>netty-connector</connector-ref> </broadcast-group> </broadcast-groups> </core> </configuration>", "<configuration> <core> <discovery-groups> <discovery-group name=\"my-discovery-group\"> <jgroups-file>test-jgroups-file_ping.xml</jgroups-file> <jgroups-channel>activemq_broadcast_channel</jgroups-channel> <refresh-timeout>10000</refresh-timeout> </discovery-group> <discovery-groups> </core> </configuration>", "<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <connector-ref>netty-connector</connector-ref> <discovery-group-ref discovery-group-name=\"my-discovery-group\"/> </cluster-connection> </cluster-connections> </core> </configuration>", "<configuration> <core> <cluster-user>cluster_user</cluster-user> <cluster-password>cluster_user_password</cluster-password> </core> </configuration>", "<configuration> <core> <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> 
<large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> </core> </configuration>", "<configuration> <core> <store> <database-store> <jdbc-connection-url>jdbc:oracle:data/oracle/database-store;create=true</jdbc-connection-url> <jdbc-user>ENC(5493dd76567ee5ec269d11823973462f)</jdbc-user> <jdbc-password>ENC(56a0db3b71043054269d11823973462f)</jdbc-password> <bindings-table-name>BINDINGS_TABLE</bindings-table-name> <message-table-name>MESSAGE_TABLE</message-table-name> <large-message-table-name>LARGE_MESSAGES_TABLE</large-message-table-name> <page-store-table-name>PAGE_STORE_TABLE</page-store-table-name> <node-manager-store-table-name>NODE_MANAGER_TABLE<node-manager-store-table-name> <jdbc-driver-class-name>oracle.jdbc.driver.OracleDriver</jdbc-driver-class-name> <jdbc-network-timeout>10000</jdbc-network-timeout> <jdbc-lock-renew-period>2000</jdbc-lock-renew-period> <jdbc-lock-expiration>15000</jdbc-lock-expiration> <jdbc-journal-sync-period>5</jdbc-journal-sync-period> </database-store> </store> </core> </configuration>", "<configuration> <core> <ha-policy> <shared-store> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> </shared-store> </ha-policy> </core> </configuration>", "<configuration> <core> <paging-directory>../sharedstore/data/paging</paging-directory> <bindings-directory>../sharedstore/data/bindings</bindings-directory> <journal-directory>../sharedstore/data/journal</journal-directory> <large-messages-directory>../sharedstore/data/large-messages</large-messages-directory> </core> </configuration>", "<configuration> <core> <ha-policy> <shared-store> <slave> <failover-on-shutdown>true</failover-on-shutdown> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </shared-store> </ha-policy> </core> </configuration>", "<configuration> <core> <ha-policy> <replication> <master> <check-for-live-server>true</check-for-live-server> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> </master> </replication> </ha-policy> </core> </configuration>", "<configuration> <core> <ha-policy> <replication> <slave> <allow-failback>true</allow-failback> <restart-backup>true</restart-backup> <group-name>my-group-1</group-name> <vote-on-replication-failure>true</vote-on-replication-failure> </slave> </replication> </ha-policy> </core> </configuration>", "<configuration> <core> <ha-policy> <replication> <slave> <vote-retries>12</vote-retries> <vote-retry-wait>5000</vote-retry-wait> </slave> </replication> </ha-policy> </core> </configuration>", "<configuration> <core> <ha-policy> <live-only> </live-only> </ha-policy> </core> </configuration>", "<live-only> <scale-down> <connectors> <connector-ref>broker1-connector</connector-ref> </connectors> </scale-down> </live-only>", "<live-only> <scale-down> <discovery-group-ref discovery-group-name=\"my-discovery-group\"/> </scale-down> </live-only>", "<live-only> <scale-down> <group-name>my-group-name</group-name> </scale-down> </live-only>", "<configuration> <core> <ha-policy> <shared-store> <colocated> <request-backup>true</request-backup> <max-backups>1</max-backups> <backup-request-retries>-1</backup-request-retries> <backup-request-retry-interval>5000</backup-request-retry-interval/> <backup-port-offset>150</backup-port-offset> <excludes> <connector-ref>remote-connector</connector-ref> </excludes> <master> <failover-on-shutdown>true</failover-on-shutdown> </master> <slave> <failover-on-shutdown>true</failover-on-shutdown> 
<allow-failback>true</allow-failback> <restart-backup>true</restart-backup> </slave> </colocated> </shared-store> </ha-policy> </core> </configuration>", "<configuration> <core> <cluster-connections> <cluster-connection name=\"my-cluster\"> <message-load-balancing>ON_DEMAND</message-load-balancing> </cluster-connection> </cluster-connections> </core> </configuration>", "<configuration> <core> <address-settings> <address-setting match=\"my.queue\"> <redistribution-delay>5000</redistribution-delay> </address-setting> </address-settings> </core> </configuration>", "<configuration> <core> <grouping-handler name=\"my-grouping-handler\"> <type>LOCAL</type> <timeout>10000</timeout> </grouping-handler> </core> </configuration>", "<configuration> <core> <grouping-handler name=\"my-grouping-handler\"> <type>REMOTE</type> <timeout>5000</timeout> </grouping-handler> </core> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/configuring_amq_broker/setting-up-broker-cluster-configuring
C.5. CacheManager
C.5. CacheManager org.infinispan.manager.DefaultCacheManager The CacheManager component acts as a manager, factory, and container for caches in the system. Table C.8. Attributes Name Description Type Writable cacheManagerStatus The status of the cache manager instance. String No clusterMembers Lists members in the cluster. String No clusterName Cluster name. String No clusterSize Size of the cluster in the number of nodes. int No createdCacheCount The total number of created caches, including the default cache. String No definedCacheCount The total number of defined caches, excluding the default cache. String No definedCacheNames The defined cache names and their statuses. The default cache is not included in this representation. String No name The name of this cache manager. String No nodeAddress The network address associated with this instance. String No physicalAddresses The physical network addresses associated with this instance. String No runningCacheCount The total number of running caches, including the default cache. String No version Infinispan version. String No globalConfigurationAsProperties Global configuration properties Properties No Table C.9. Operations Name Description Signature startCache Starts the default cache associated with this cache manager. void startCache() startCache Starts a named cache from this cache manager. void startCache (String p0)
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/cachemanager
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open-source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in three LTS versions: OpenJDK 8u, OpenJDK 11u, and OpenJDK 17u. Packages for Eclipse Temurin are made available on Microsoft Windows and on multiple Linux x86 operating systems, including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/getting_started_with_eclipse_temurin/pr01
Chapter 2. Creating definitions
Chapter 2. Creating definitions When creating an automated rule definition, you can configure numerous options. Cryostat uses an automated rule to apply rules to any JVM targets that match regular expressions defined in the matchExpression string expression. You can apply Red Hat OpenShift labels or annotations as criteria for a matchExpression definition. After you specify a rule definition for your automated rule, you do not need to re-add or restart matching targets. If you have defined matching targets, you can immediately activate a rule definition. If you want to reuse an existing automated rule definition, you can upload your definition in JSON format to Cryostat. Note From Cryostat 3.0 onward, you must use Common Expression Language (CEL) syntax when defining match expressions in automated rules. In earlier releases, you could use JavaScript syntax to define match expressions. 2.1. Enabling or disabling existing automated rules You can enable or disable existing automated rules by using a toggle switch on the Cryostat web console. Prerequisites Logged in to the Cryostat web console. Created an automated rule. Procedure From the Cryostat web console, click Automated Rules . The Automated Rules window opens and displays your automated rule in a table. Figure 2.1. Example of match expression output from completing an automated rule In the Enabled column, view the Enabled status of the listed automated rules. Depending on the status, choose one of the following actions: To enable the automated rule, click the toggle switch to On . Cryostat immediately evaluates each application that you defined in the automated rule against its match expression. If a match expression applies to an application, Cryostat starts a JFR recording that monitors the performance of the application. To disable the automated rule, click the toggle switch to Off . The Disable your Automated Rule window opens. To disable the selected automated rule, click Disable . If you want to also stop any active recordings that were created by the selected rule, select Clean then click Disable . 2.2. Creating an automated rule definition While creating an automated rule on the Cryostat web console, you can specify the match expression that Cryostat uses to select all the applications. Then, Cryostat starts a new recording by using a JFR event template that was defined by the rule. If you previously created an automated rule and Cryostat identifies a new target application, Cryostat tests if the new application instance matches the expression and starts a new recording by using the associated event template. Prerequisites Created a Cryostat instance in your Red Hat OpenShift project. Created a Java application. Installed Cryostat 3.0 on Red Hat OpenShift by using the OperatorHub option. Logged in to your Cryostat web console. Procedure In the navigation menu on the Cryostat web console, click Automated Rules . The Automated Rules window opens. Click Create . A Create window opens. Figure 2.2. The Create window (Graph View) for an automated rule Enter a rule name in the Name field. In the Match Expression field, specify the match expression details. Note Select the question mark icon to view suggested syntax in a Match Expression Hint snippet. In the Match Expression Visualizer panel, the Graph View option highlights the target JVMs that are matched. Unmatched target JVMs are greyed out. Optional: In the Match Expression Visualizer panel, you can also click List View , which displays the matched target JVMs as expandable rows.
Figure 2.3. The Create window (List View) for an automated rule From the Template list, select an event template. To create your automated rule, click Create . The Automated Rules window opens and displays your automated rule in a table. Figure 2.4. Example of match expression output from completing an automated rule If a match expression applies to an application, Cryostat starts a JFR recording that monitors the performance of the application. Optional: You can download an automated rule by clicking Download from the automated rule's overflow menu. You can then configure a rule definition in your preferred text editor or make additional copies of the file on your local file system. 2.3. Cryostat Match Expression Visualizer panel You can use the Match Expression Visualizer panel on the web console to view information in a JSON structure for your selected target JVM application. You can choose to display the information in a Graph View or a List View mode. The Graph View highlights the target JVMs that are matched. Unmatched target JVMs are greyed out. The List View displays the matched target JVMs as expandable rows. To view details about a matched target JVM, select the target JVM that is highlighted. In the window that appears, information specific to the metadata for your application is shown in the Details tab. You can use any of this information as syntax in your match expression. A match expression is a rule definition parameter that you can specify for your automated rule. After you specify match expressions and create the automated rule, Cryostat immediately evaluates each application that you defined in the automated rule against its match expression. If a match expression applies to an application, Cryostat starts a JFR recording that monitors the performance of the application. 2.4. Uploading an automated rule in JSON You can reuse an existing automated rule by uploading it to the Cryostat web console, so that you can quickly start monitoring a running Java application. Prerequisites Created a Cryostat instance in your project. See Installing Cryostat on OpenShift using an operator (Installing Cryostat). Created a Java application. Created an automated rules file in JSON format. Logged in to your Cryostat web console. Procedure In the navigation menu on the Cryostat web console, click Automated Rules . The Automated Rules window opens. Click the file upload icon, which is located beside the Create button. Figure 2.5. The automated rules upload button The Upload Automated Rules window opens. Click Upload and locate your automated rules files on your local system. You can upload one or more files to Cryostat. Alternatively, you can drag files from your file explorer tool and drop them into the JSON File field on your web console. Important The Upload Automated Rules function only accepts files in JSON format. Figure 2.6. A window prompt where you can upload JSON files that contain your automated rules configuration Optional: If you need to remove a file from the Upload Automated Rules function, click the X icon on the selected file. Figure 2.7. Example of uploaded JSON files Click Submit . 2.5. Metadata labels When you create an automated rule to enable JFR to continuously monitor a running target application, the automated rule automatically generates a metadata label. This metadata label indicates the name of the automated rule that generates the JFR recording.
After you archive the recording, you can run a query on the metadata label to locate the automated rule that generated the recording. Cryostat preserves metadata labels for the automated rule in line with the lifetime of the archived recording. Additional resources Creating definitions Archiving JDK Flight Recorder (JFR) recordings (Using Cryostat to manage a JFR recording)
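As a hedged illustration of the match expressions described above, a rule definition might use a CEL expression such as the one below. The alias and label values are hypothetical; the attributes actually available for your target JVMs are shown in the Details tab of the Match Expression Visualizer.
target.alias == 'my-quarkus-app' || target.labels['app'] == 'my-quarkus-app'
A rule with an expression like this would start recordings on any discovered JVM whose alias or app label matches the given value.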
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/using_automated_rules_on_cryostat/assembly_creating-definitions_con_overview-automated-rules
21.3.2. Hostname Formats
21.3.2. Hostname Formats The host(s) can be in the following forms: Single machine - A fully qualified domain name (that can be resolved by the server), hostname (that can be resolved by the server), or an IP address. Series of machines specified with wildcards - Use the * or ? character to specify a string match. Wildcards are not to be used with IP addresses; however, they may accidentally work if reverse DNS lookups fail. When specifying wildcards in fully qualified domain names, dots (.) are not included in the wildcard. For example, *.example.com includes one.example.com but does not include one.two.example.com. IP networks - Use a.b.c.d/z , where a.b.c.d is the network and z is the number of bits in the netmask (for example 192.168.0.0/24). Another acceptable format is a.b.c.d/netmask , where a.b.c.d is the network and netmask is the netmask (for example, 192.168.100.8/255.255.255.0). Netgroups - In the format @group-name , where group-name is the NIS netgroup name.
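As an illustrative sketch (the export path, host names, network, and netgroup name are placeholders), these formats might appear together in a single /etc/exports entry:
/exports/data one.example.com(rw,sync) *.example.com(ro) 192.168.0.0/24(ro) @trusted-hosts(rw)
Each host specification carries its own parenthesized options, with no space between the host and the opening parenthesis.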
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/exporting_nfs_file_systems-hostname_formats
Chapter 4. View OpenShift Data Foundation Topology
Chapter 4. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage Data Foundation Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close and return to the view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_amazon_web_services/viewing-odf-topology_mcg-verify
Machine APIs
Machine APIs OpenShift Container Platform 4.17 Reference guide for machine APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/machine_apis/index
20.3.4. Generating Key Pairs
20.3.4. Generating Key Pairs If you do not want to enter your password every time you use ssh , scp , or sftp to connect to a remote machine, you can generate an authorization key pair. Keys must be generated for each user. To generate keys for a user, use the following steps as the user who wants to connect to remote machines. If you complete the steps as root, only root will be able to use the keys. Starting with OpenSSH version 3.0, ~/.ssh/authorized_keys2 , ~/.ssh/known_hosts2 , and /etc/ssh_known_hosts2 are obsolete. SSH Protocol 1 and 2 share the ~/.ssh/authorized_keys , ~/.ssh/known_hosts , and /etc/ssh/ssh_known_hosts files. Red Hat Enterprise Linux 4 uses SSH Protocol 2 and RSA keys by default. Note If you reinstall and want to save your generated key pair, back up the .ssh directory in your home directory. After reinstalling, copy this directory back to your home directory. This process can be done for all users on your system, including root. 20.3.4.1. Generating an RSA Key Pair for Version 2 Use the following steps to generate an RSA key pair for version 2 of the SSH protocol. This is the default starting with OpenSSH 2.9. To generate an RSA key pair to work with version 2 of the protocol, type the following command at a shell prompt: Accept the default file location of ~/.ssh/id_rsa . Enter a passphrase different from your account password and confirm it by entering it again. The public key is written to ~/.ssh/id_rsa.pub . The private key is written to ~/.ssh/id_rsa . Never distribute your private key to anyone. Change the permissions of the .ssh directory using the following command: Copy the contents of ~/.ssh/id_rsa.pub into the file ~/.ssh/authorized_keys on the machine to which you want to connect. If the file ~/.ssh/authorized_keys exists, append the contents of the file ~/.ssh/id_rsa.pub to the file ~/.ssh/authorized_keys on the other machine. Change the permissions of the authorized_keys file using the following command: If you are running GNOME, skip to Section 20.3.4.4, "Configuring ssh-agent with GNOME" . If you are not running the X Window System, skip to Section 20.3.4.5, "Configuring ssh-agent " .
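One way to perform the copy step above, assuming the remote host name and user are placeholders and that password authentication is still enabled on the remote machine:
cat ~/.ssh/id_rsa.pub | ssh user@remote.example.com 'cat >> ~/.ssh/authorized_keys'
Where available, the ssh-copy-id utility performs the same append and also creates the remote .ssh directory if it does not exist.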
[ "ssh-keygen -t rsa", "chmod 755 ~/.ssh", "chmod 644 ~/.ssh/authorized_keys" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Configuring_an_OpenSSH_Client-Generating_Key_Pairs
9.3. Overview of Security Methods
9.3. Overview of Security Methods Directory Server offers several methods to design an overall security policy that is adapted to specific needs. The security policy should be strong enough to prevent sensitive information from being modified or retrieved by unauthorized users, but also simple enough to administer easily. A complex security policy can lead to mistakes that either prevent people from accessing information that they need to access or, worse, allow people to modify or retrieve directory information that they should not be allowed to access. Table 9.1. Security Methods Available in Directory Server Security Method Description Authentication A means for one party to verify another's identity. For example, a client gives a password to Directory Server during an LDAP bind operation. Password policies Defines the criteria that a password must satisfy to be considered valid; for example, age, length, and syntax. Encryption Protects the privacy of information. When data is encrypted, it is scrambled in a way that only the recipient can understand. Access control Tailors the access rights granted to different directory users and provides a means of specifying required credentials or bind attributes. Account deactivation Disables a user account, group of accounts, or an entire domain so that all authentication attempts are automatically rejected. Secure connections Maintains the integrity of information by encrypting connections with TLS, Start TLS, or SASL. If information is encrypted during transmission, the recipient can determine that it was not modified during transit. Secure connections can be required by setting a minimum security strength factor. Auditing Determines if the security of the directory has been compromised; one simple auditing method is reviewing the log files maintained by the directory. SELinux Uses security policies on the Red Hat Enterprise Linux machine to restrict and control access to Directory Server files and processes. Combine any number of these tools for maintaining security in the security design, and incorporate other features of the directory service, such as replication and data distribution, to support the security design.
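To make the access control row above concrete, a minimal sketch of an access control instruction (ACI) with a hypothetical rule name might allow users to modify only their own password:
aci: (targetattr = "userPassword")(version 3.0; acl "Allow self password change"; allow (write) userdn = "ldap:///self";)
Instructions like this are stored in directory entries and evaluated by Directory Server for every operation against the entries they cover.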
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_a_secure_directory-overview_of_security_methods
2.2.6. Securing FTP
2.2.6. Securing FTP The File Transfer Protocol ( FTP ) is an older TCP protocol designed to transfer files over a network. Because all transactions with the server, including user authentication, are unencrypted, it is considered an insecure protocol and should be carefully configured. Red Hat Enterprise Linux provides three FTP servers. gssftpd - A Kerberos-aware xinetd -based FTP daemon that does not transmit authentication information over the network. Red Hat Content Accelerator ( tux ) - A kernel-space Web server with FTP capabilities. vsftpd - A standalone, security oriented implementation of the FTP service. The following security guidelines are for setting up the vsftpd FTP service. 2.2.6.1. FTP Greeting Banner Before submitting a user name and password, all users are presented with a greeting banner. By default, this banner includes version information useful to attackers trying to identify weaknesses in a system. To change the greeting banner for vsftpd , add the following directive to the /etc/vsftpd/vsftpd.conf file: Replace <insert_greeting_here> in the above directive with the text of the greeting message. For multi-line banners, it is best to use a banner file. To simplify management of multiple banners, place all banners in a new directory called /etc/banners/ . The banner file for FTP connections in this example is /etc/banners/ftp.msg . Below is an example of what such a file may look like: Note It is not necessary to begin each line of the file with 220 as specified in Section 2.2.1.1.1, "TCP Wrappers and Connection Banners" . To reference this greeting banner file for vsftpd , add the following directive to the /etc/vsftpd/vsftpd.conf file: It is also possible to send additional banners to incoming connections using TCP Wrappers as described in Section 2.2.1.1.1, "TCP Wrappers and Connection Banners" .
[ "ftpd_banner= <insert_greeting_here>", "######### Hello, all activity on ftp.example.com is logged. #########", "banner_file=/etc/banners/ftp.msg" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-Server_Security-Securing_FTP
Chapter 4. Resolved issues
Chapter 4. Resolved issues See Resolved Issues for JBoss EAP 7.4 to view the list of critical issues that are resolved for this release. Additionally, be aware of the following: After completing source-to-image builds, OpenShift now clears the source directory ( /tmp/src ). As a result of this change, built images should be smaller.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/7.4.0_release_notes/resolved-issues_default
Chapter 8. Viewing and managing Quartz Schedules
Chapter 8. Viewing and managing Quartz Schedules Quartz ( http://www.quartz-scheduler.org/ ) is a richly featured, open source job scheduling library that you can integrate within most Java applications. You can use Quartz to create simple or complex schedules for executing jobs. A job is defined as a standard Java component that can execute virtually anything that you program it to do. The Fuse Console shows the Quartz tab if your Camel route deploys the camel-quartz2 component. Note that you can alternatively access Quartz mbeans through the JMX tree view. Procedure In the Fuse Console, click the Quartz tab. The Quartz page includes a treeview of the Quartz Schedulers and Scheduler , Triggers , and Jobs tabs. To pause or start a scheduler, click the buttons on the Scheduler tab. Click the Triggers tab to view the triggers that determine when jobs will run. For example, a trigger can specify to start a job at a certain time of day (to the millisecond), on specified days, or repeat a specified number of times or at specific times. To filter the list of triggers, select State , Group , Name , or Type from the drop-down list. You can then further filter the list by selecting or typing in the fill-in field. To pause, resume, update, or manually fire a trigger, click the options in the Action column. Click the Jobs tab to view the list of running jobs. You can sort the list by the columns in the table: Group , Name , Durable , Recover , Job ClassName , and Description .
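For context, a Camel route that deploys the camel-quartz2 component might look like the following sketch; the group name, timer name, and cron expression are illustrative only:
<route>
  <from uri="quartz2://reportGroup/nightlyTimer?cron=0+0+1+*+*+?"/>
  <to uri="log:quartz-fired"/>
</route>
Once a route like this is running, its scheduler, trigger, and job appear under the corresponding tabs described above.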
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/managing_fuse_on_openshift/fuse-console-view-quartz-all_fcopenshift
Chapter 4. Viewing installed plugins
Chapter 4. Viewing installed plugins Using the Dynamic Plugins Info front-end plugin, you can view plugins that are currently installed in your Red Hat Developer Hub application. This plugin is enabled by default. Procedure Open your Developer Hub application and click Administration . Go to the Plugins tab to view a list of installed plugins and related information.
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_and_viewing_plugins_in_red_hat_developer_hub/proc-viewing-installed-plugins_assembly-install-third-party-plugins-rhdh
Chapter 2. Preparing the system for IdM server installation
Chapter 2. Preparing the system for IdM server installation The following sections list the requirements to install an Identity Management (IdM) server. Before the installation, verify your system meets these requirements. 2.1. Prerequisites You need root privileges to install an Identity Management (IdM) server on your host. 2.2. Hardware recommendations RAM is the most important hardware feature to size properly. Make sure your system has enough RAM available. Typical RAM requirements are: For 10,000 users and 100 groups: at least 4 GB of RAM and 4 GB swap space For 100,000 users and 50,000 groups: at least 16 GB of RAM and 4 GB of swap space For larger deployments, increasing RAM is more effective than increasing disk space because much of the data is stored in cache. In general, adding more RAM leads to better performance for larger deployments due to caching. In virtualized environments, memory ballooning must be disabled or the complete RAM must be reserved for the guest IdM servers. Note A basic user entry or a simple host entry with a certificate is approximately 5- 10 kB in size. 2.3. Custom configuration requirements for IdM Install an Identity Management (IdM) server on a clean system without any custom configuration for services such as DNS, Kerberos, Apache, or Directory Server. The IdM server installation overwrites system files to set up the IdM domain. IdM backs up the original system files to /var/lib/ipa/sysrestore/ . When an IdM server is uninstalled at the end of the lifecycle, these files are restored. IPv6 requirements in IdM The IdM system must have the IPv6 protocol enabled in the kernel and localhost (::1) is able to use it. If IPv6 is disabled, then the CLDAP plug-in used by the IdM services fails to initialize. Note IPv6 does not have to be enabled on the network. It is possible to enable IPv6 stack without enabling IPv6 addresses if required. Support for encryption types in IdM Red Hat Enterprise Linux (RHEL) uses Version 5 of the Kerberos protocol, which supports encryption types such as Advanced Encryption Standard (AES), Camellia, and Data Encryption Standard (DES). List of supported encryption types While the Kerberos libraries on IdM servers and clients might support more encryption types, the IdM Kerberos Distribution Center (KDC) only supports the following encryption types: aes256-cts:normal aes256-cts:special (default) aes128-cts:normal aes128-cts:special (default) aes128-sha2:normal aes128-sha2:special aes256-sha2:normal aes256-sha2:special camellia128-cts-cmac:normal camellia128-cts-cmac:special camellia256-cts-cmac:normal camellia256-cts-cmac:special RC4 encryption types are disabled by default The following RC4 encryption types have been deprecated and disabled by default in RHEL 8, as they are considered less secure than the newer AES-128 and AES-256 encryption types: arcfour-hmac:normal arcfour-hmac:special For more information about manually enabling RC4 support for compatibility with legacy Active Directory environments, see Ensuring support for common encryption types in AD and RHEL . Support for DES and 3DES encryption has been removed Due to security reasons, support for the DES algorithm was deprecated in RHEL 7. The recent rebase of Kerberos packages in RHEL 8.3.0 removes support for single-DES (DES) and triple-DES (3DES) encryption types from RHEL 8. Note Standard RHEL 8 IdM installations do not use DES or 3DES encryption types by default and are unaffected by the Kerberos upgrade. 
If you manually configured any services or users to only use DES or 3DES encryption (for example, for legacy clients), you might experience service interruptions after updating to the latest Kerberos packages, such as: Kerberos authentication errors unknown enctype encryption errors KDCs with DES-encrypted Database Master Keys ( K/M ) fail to start Do not use DES or 3DES encryption in your environment. Note You only need to disable DES and 3DES encryption types if you configured your environment to use them. Support for system-wide cryptographic policies in IdM IdM uses the DEFAULT system-wide cryptographic policy. This policy offers secure settings for current threat models. It allows the TLS 1.2 and 1.3 protocols, as well as the IKEv2 and SSH2 protocols. The RSA keys and Diffie-Hellman parameters are accepted if they are at least 2048 bits long. This policy does not allow DES, 3DES, RC4, DSA, TLS v1.0, and other weaker algorithms. Note You cannot install an IdM server while using the FUTURE system-wide cryptographic policy. When installing an IdM server, ensure you are using the DEFAULT system-wide cryptographic policy. Additional Resources System-wide cryptographic policies man IPV6(7) 2.4. FIPS compliance With RHEL 8.3.0 or later, you can install a new IdM server or replica on a system with the Federal Information Processing Standard (FIPS) 140 mode enabled. To install IdM in FIPS mode, first enable FIPS mode on the host, then install IdM. The IdM installation script detects if FIPS is enabled and configures IdM to only use encryption types that are compliant with the FIPS 140 standard: aes256-cts:normal aes256-cts:special aes128-cts:normal aes128-cts:special aes128-sha2:normal aes128-sha2:special aes256-sha2:normal aes256-sha2:special For an IdM environment to be FIPS-compliant, all IdM replicas must have FIPS mode enabled. You should enable FIPS mode in IdM clients as well, especially if you might promote those clients to IdM replicas. Ultimately, it is up to administrators to determine how they meet FIPS requirements; Red Hat does not enforce FIPS criteria. Migration to FIPS-compliant IdM You cannot migrate an existing IdM installation from a non-FIPS environment to a FIPS-compliant installation. This is not a technical problem but a legal and regulatory restriction. To operate a FIPS-compliant system, all cryptographic key material must be created in FIPS mode. Furthermore, the cryptographic key material must never leave the FIPS environment unless it is securely wrapped and never unwrapped in non-FIPS environments. If your scenario requires a migration of a non-FIPS IdM realm to a FIPS-compliant one, you must: create a new IdM realm in FIPS mode perform data migration from the non-FIPS realm to the new FIPS-mode realm with a filter that blocks all key material The migration filter must block: KDC master key, keytabs, and all related Kerberos key material User passwords All certificates including CA, service, and user certificates OTP tokens SSH keys and fingerprints DNSSEC KSK and ZSK All vault entries AD trust-related key material Effectively, the new FIPS installation is a different installation. Even with rigorous filtering, such a migration may not pass a FIPS 140 certification. Your FIPS auditor may flag this migration. Additional Resources For more information about the FIPS 140 implementation in the RHEL operating system, see Federal Information Processing Standards 140 and FIPS mode in the RHEL Security Hardening document. 2.5. 
Support for cross-forest trust with FIPS mode enabled To establish a cross-forest trust with an Active Directory (AD) domain while FIPS mode is enabled, you must meet the following requirements: IdM servers are on RHEL 8.4.0 or later. You must authenticate with an AD administrative account when setting up a trust. You cannot establish a trust using a shared secret while FIPS mode is enabled. Important RADIUS authentication is not FIPS-compliant as the RADIUS protocol uses the MD5 hash function to encrypt passwords between client and server and, in FIPS mode, OpenSSL disables the use of the MD5 digest algorithm. However, if the RADIUS server is running on the same host as the IdM server, you can work around the problem and enable MD5 by performing the steps described in the Red Hat Knowledgebase solution How to configure FreeRADIUS authentication in FIPS mode . Additional Resources For more information about FIPS mode in the RHEL operating system, see Installing the system in FIPS mode in the Security Hardening document. For more details about the FIPS 140-2 standard, see the Security Requirements for Cryptographic Modules on the National Institute of Standards and Technology (NIST) web site. 2.6. Time service requirements for IdM The following sections discuss using chronyd to keep your IdM hosts in sync with a central time source: 2.6.1. How IdM uses chronyd for synchronization You can use chronyd to keep your IdM hosts in sync with a central time source as described here. Kerberos, the underlying authentication mechanism in IdM, uses time stamps as part of its protocol. Kerberos authentication fails if the system time of an IdM client differs by more than five minutes from the system time of the Key Distribution Center (KDC). To ensure that IdM servers and clients stay in sync with a central time source, IdM installation scripts automatically configure chronyd Network Time Protocol (NTP) client software. If you do not pass any NTP options to the IdM installation command, the installer searches for _ntp._udp DNS service (SRV) records that point to the NTP server in your network and configures chrony with that IP address. If you do not have any _ntp._udp SRV records, chronyd uses the configuration shipped with the chrony package. Note Because ntpd has been deprecated in favor of chronyd in RHEL 8, IdM servers are no longer configured as Network Time Protocol (NTP) servers and are only configured as NTP clients. The RHEL 7 NTP Server IdM server role has also been deprecated in RHEL 8. Additional resources Implementation of NTP Using the Chrony suite to configure NTP 2.6.2. List of NTP configuration options for IdM installation commands You can use chronyd to keep your IdM hosts in sync with a central time source. You can specify the following options with any of the IdM installation commands ( ipa-server-install , ipa-replica-install , ipa-client-install ) to configure chronyd client software during setup. Table 2.1. List of NTP configuration options for IdM installation commands Option Behavior --ntp-server Use it to specify one NTP server. You can use it multiple times to specify multiple servers. --ntp-pool Use it to specify a pool of multiple NTP servers resolved as one hostname. -N , --no-ntp Do not configure, start, or enable chronyd . Additional resources Implementation of NTP Using the Chrony suite to configure NTP 2.6.3. 
Ensuring IdM can reference your NTP time server You can verify if you have the necessary configurations in place for IdM to be able to synchronize with your Network Time Protocol (NTP) time server. Prerequisites You have configured an NTP time server in your environment. In this example, the hostname of the previously configured time server is ntpserver.example.com . Procedure Perform a DNS service (SRV) record search for NTP servers in your environment. If the dig search does not return your time server, add a _ntp._udp SRV record that points to your time server on port 123 . This process depends on your DNS solution. Verification Verify that DNS returns an entry for your time server on port 123 when you perform a search for _ntp._udp SRV records. Additional resources Implementation of NTP Using the Chrony suite to configure NTP 2.7. Meeting DNS host name and DNS requirements for IdM The host name and DNS requirements for server and replica systems are outlined below and also how to verify that the systems meet the requirements. Warning DNS records are vital for nearly all Identity Management (IdM) domain functions, including running LDAP directory services, Kerberos, and Active Directory integration. Be extremely cautious and ensure that: You have a tested and functional DNS service available The service is properly configured This requirement applies to all IdM servers, both with and without integrated DNS. Verify the server host name The host name must be a fully qualified domain name, such as server.idm.example.com . Important Do not use single-label domain names, for example .company : the IdM domain must be composed of one or more subdomains and a top level domain, for example example.com or company.example.com . The fully qualified domain name must meet the following conditions: It is a valid DNS name, which means only numbers, alphabetic characters, and hyphens (-) are allowed. Other characters, such as underscores (_), in the host name cause DNS failures. It is all lower-case. No capital letters are allowed. It does not resolve to the loopback address. It must resolve to the system's public IP address, not to 127.0.0.1 . To verify the host name, use the hostname utility on the system where you want to install: The output of hostname must not be localhost or localhost6 . Verify the forward and reverse DNS configuration Obtain the IP address of the server. The ip addr show command displays both the IPv4 and IPv6 addresses. In the following example, the relevant IPv6 address is 2001:DB8::1111 because its scope is global: Verify the forward DNS configuration using the dig utility. Run the command dig +short server.idm.example.com A . The returned IPv4 address must match the IP address returned by ip addr show : Run the command dig +short server.idm.example.com AAAA . If it returns an address, it must match the IPv6 address returned by ip addr show : Note If dig does not return any output for the AAAA record, it does not indicate incorrect configuration. No output only means that no IPv6 address is configured in DNS for the system. If you do not intend to use the IPv6 protocol in your network, you can proceed with the installation in this situation. Verify the reverse DNS configuration (PTR records). Use the dig utility and add the IP address. If the commands below display a different host name or no host name, the reverse DNS configuration is incorrect. Run the command dig +short -x IPv4_address . The output must display the server host name. 
For example: If the command dig +short -x server.idm.example.com AAAA in the step returned an IPv6 address, use dig to query the IPv6 address too. The output must display the server host name. For example: Note If dig +short server.idm.example.com AAAA in the step did not display any IPv6 address, querying the AAAA record does not output anything. In this case, this is normal behavior and does not indicate incorrect configuration. Warning If a reverse DNS (PTR record) search returns multiple host names, httpd and other software associated with IdM may show unpredictable behavior. Red Hat strongly recommends configuring only one PTR record per IP. Verify the standards-compliance of DNS forwarders (required for integrated DNS only) Ensure that all DNS forwarders you want to use with the IdM DNS server comply with the Extension Mechanisms for DNS (EDNS0). To do this, inspect the output of the following command for each forwarder separately: The expected output displayed by the command contains the following information: Status: NOERROR Flags: ra If either of these items is missing from the output, inspect the documentation for your DNS forwarder and verify that EDNS0 is supported and enabled. Determine your DNS Security Extensions (DNSSEC) policy (required for integrated DNS only) Warning DNSSEC is only available as Technology Preview in IdM. DNSSEC validation is enabled in the IdM-integrated DNS server by default. If you do not require the DNSSEC feature in your IdM deployment, add the --no-dnssec-validation option to the ipa-server-install --setup-dns and ipa-replica-install --setup-dns commands when installing the primary IdM server and the IdM replicas. If you do want to use DNSSEC, ensure that all DNS forwarders you want to use with the IdM DNS server comply with the DNSSEC standard. To do this, inspect the output of the following command for each forwarder separately: The expected output displayed by the command contains the following information: Status: NOERROR Flags: ra EDNS flags: do The RRSIG record must be present in the ANSWER section If any of these items is missing from the output, inspect the documentation for your DNS forwarder and verify that DNSSEC is supported and enabled. In the latest versions of the BIND server, the dnssec-enable yes; option must be set in the /etc/named.conf file. Example of the expected output produced by dig +dnssec : Note On already deployed IdM servers, you can check whether DNSSEC validation is enabled by searching for the dnssec-validation boolean option in the /etc/named/ipa-options-ext.conf file. Verify the /etc/hosts file Verify that the /etc/hosts file fulfills one of the following conditions: The file does not contain an entry for the host. It only lists the IPv4 and IPv6 localhost entries for the host. The file contains an entry for the host and the file fulfills all the following conditions: The first two entries are the IPv4 and IPv6 localhost entries. The entry specifies the IdM server IPv4 address and host name. The FQDN of the IdM server comes before the short name of the IdM server. The IdM server host name is not part of the localhost entry. The following is an example of a correctly configured /etc/hosts file: 2.8. Port requirements for IdM Identity Management (IdM) uses several ports to communicate with its services. These ports must be open and available for incoming connections to the IdM server for IdM to work. They must not be currently used by another service or blocked by a firewall . Table 2.2. 
IdM ports Service Ports Protocol HTTP/HTTPS 80, 443 TCP LDAP/LDAPS 389, 636 TCP Kerberos 88, 464 TCP and UDP DNS 53 TCP and UDP (optional) Note IdM uses ports 80 and 389. This is a secure practice because of the following safeguards: IdM normally redirects requests that arrive on port 80 to port 443. Port 80 (HTTP) is only used to provide Online Certificate Status Protocol (OCSP) responses and Certificate Revocation Lists (CRL). Both are digitally signed and therefore secured against man-in-the-middle attacks. Port 389 (LDAP) uses STARTTLS and Generic Security Services API (GSSAPI) for encryption. In addition, ports 8080 and 8443 are used internally by pki-tomcat; leave them blocked in the firewall to prevent their use by other services. Port 749 is used for remote management of the Kerberos server; open it only if you intend to use remote management. Table 2.3. firewalld services Service name For details, see: freeipa-4 /usr/lib/firewalld/services/freeipa-4.xml dns /usr/lib/firewalld/services/dns.xml 2.9. Opening the ports required by IdM You can open the required ports that IdM uses to communicate with its services. Procedure Verify that the firewalld service is running. To find out if firewalld is currently running: To start firewalld and configure it to start automatically when the system boots: Open the required ports using the firewall-cmd utility. Choose one of the following options: Add the individual ports to the firewall by using the firewall-cmd --add-port command. For example, to open the ports in the default zone: Add the firewalld services to the firewall by using the firewall-cmd --add-service command. For example, to open the ports in the default zone: For details on using firewall-cmd to open ports on a system, see the firewall-cmd (1) man page. Reload the firewall-cmd configuration to ensure that the change takes place immediately: Note that reloading firewalld on a system in production can cause DNS connection time outs. If required, to avoid the risk of time outs and to make the changes persistent on the running system, use the --runtime-to-permanent option of the firewall-cmd command, for example: Verification Log in to a host on the client subnet and use the nmap or nc utilities to connect to the opened ports or run a port scan. For example, to scan the ports that are required for TCP traffic: To scan the ports that are required for UDP traffic: Note You also have to open network-based firewalls for both incoming and outgoing traffic. 2.10. Installing packages required for an IdM server In Red Hat Enterprise Linux 8, the packages necessary for installing an Identity Management (IdM) server are shipped as a module. The IdM server module stream is called the DL1 stream, and you need to enable this stream before downloading packages from this stream. The following procedure shows how to download the packages necessary for setting up the IdM environment of your choice. Prerequisites You have a newly installed RHEL system. You have made the required repositories available: If your RHEL system is not running in the cloud, you have registered your system with the Red Hat Subscription Manager (RHSM). For details, see Subscription Central . You have also enabled the BaseOS and AppStream repositories that IdM uses: For details on how to enable and disable specific repositories using RHSM, see Subscription Central . If your RHEL system is running in the cloud, skip the registration.
The required repositories are already available via the Red Hat Update Infrastructure (RHUI). You have not previously enabled an IdM module stream. Procedure Enable the idm:DL1 stream: Switch to the RPMs delivered through the idm:DL1 stream: Choose one of the following options, depending on your IdM requirements: To download the packages necessary for installing an IdM server without an integrated DNS: To download the packages necessary for installing an IdM server with an integrated DNS: To download the packages necessary for installing an IdM server that has a trust agreement with Active Directory: To download the packages from multiple profiles, for example the adtrust and dns profiles: To download the packages necessary for installing an IdM client: Important When switching to a new module stream once you have already enabled a different stream and downloaded packages from it, you need to first explicitly remove all the relevant installed content and disable the current module stream before enabling the new module stream. Trying to enable a new stream without disabling the current one results in an error. For details on how to proceed, see Switching to a later stream . Warning While it is possible to install packages from modules individually, be aware that if you install any package from a module that is not listed as "API" for that module, it is only going to be supported by Red Hat in the context of that module. For example, if you install bind-dyndb-ldap directly from the repository to use with your custom 389 Directory Server setup, any problems that you have will be ignored unless they occur for IdM, too. 2.11. Setting the correct file mode creation mask for IdM installation The Identity Management (IdM) installation process requires that the file mode creation mask ( umask ) is set to 0022 for the root account. This allows users other than root to read files created during the installation. If a different umask is set, the installation of an IdM server will display a warning. If you continue with the installation, some functions of the server will not perform properly. For example, you will be unable to install an IdM replica from this server. After the installation, you can set the umask back to its original value. Prerequisites You have root privileges. Procedure Optional: Display the current umask : Set the umask to 0022 : Optional: After the IdM installation is complete, set the umask back to its original value: 2.12. Ensuring that fapolicyd rules do not block IdM installation and operation If you are using the fapolicyd software framework on your RHEL host to control the execution of applications based on a user-defined policy, the installation of the Identity Management (IdM) server can fail. As the installation and operation requires the Java program to complete successfully, ensure that Java and Java classes are not blocked by any fapolicyd rules. For more information, see the Red Hat Knowledgebase solution fapolicy restrictions causing IdM installation failures . 2.13. Options for the IdM installation commands Commands such as ipa-server-install , ipa-replica-install , ipa-dns-install and ipa-ca-install have numerous options you can use to supply additional information for an interactive installation. You can also use these options to script an unattended installation. The following tables display some of the most common options for different components. Options for a specific component are shared across multiple commands. 
For example, you can use the --ca-subject option with both the ipa-ca-install and ipa-server-install commands. For an exhaustive list of options, see the ipa-server-install(1), ipa-replica-install(1), ipa-dns-install(1) and ipa-ca-install(1) man pages.
Table 2.4. General options: available for ipa-server-install and ipa-replica-install
Argument | Description
-d, --debug | Enables debug logging for more verbose output.
-U, --unattended | Enables an unattended installation session that does not prompt for user input.
--hostname=server.idm.example.com | The fully-qualified domain name of the IdM server machine. Only numbers, lowercase alphabetic characters, and hyphens (-) are allowed.
--ip-address 127.0.0.1 | Specifies the IP address of the server. This option only accepts IP addresses associated with the local interface.
--dirsrv-config-file <LDIF_file_name> | The path to an LDIF file used to modify the configuration of the directory server instance.
-n example.com | The name of the LDAP server domain to use for the IdM domain. This is usually based on the IdM server's hostname.
-p <directory_manager_password> | The password of the superuser, cn=Directory Manager, for the LDAP service.
-a <ipa_admin_password> | The password for the admin IdM administrator account to authenticate to the Kerberos realm. For ipa-replica-install, use -w instead.
-r <KERBEROS_REALM_NAME> | The name of the Kerberos realm to create for the IdM domain in uppercase, such as EXAMPLE.COM. For ipa-replica-install, this specifies the name of a Kerberos realm of an existing IdM deployment.
--setup-dns | Tells the installation script to set up a DNS service within the IdM domain.
--setup-ca | Install and configure a CA on this replica. If a CA is not configured, certificate operations are forwarded to another replica with a CA installed. For ipa-server-install, a CA is installed by default and you do not need to use this option.
Table 2.5. CA options: available for ipa-ca-install and ipa-server-install
Argument | Description
--ca-subject=<SUBJECT> | Specifies the CA certificate subject Distinguished Name (default: CN=Certificate Authority,O=REALM.NAME). Relative Distinguished Names (RDN) are in LDAP order, with the most specific RDN first.
--subject-base=<SUBJECT> | Specifies the subject base for certificates issued by IdM (default O=REALM.NAME). Relative Distinguished Names (RDN) are in LDAP order, with the most specific RDN first.
--external-ca | Generates a certificate signing request to be signed by an external CA.
--ca-signing-algorithm=<ALGORITHM> | Specifies the signing algorithm of the IdM CA certificate. Possible values are SHA1withRSA, SHA256withRSA, SHA512withRSA. The default is SHA256withRSA. Use this option with --external-ca if the external CA does not support the default signing algorithm.
Table 2.6. DNS options: available for ipa-dns-install, or for ipa-server-install and ipa-replica-install when using --setup-dns
Argument | Description
--forwarder=192.0.2.1 | Specifies a DNS forwarder to use with the DNS service. To specify more than one forwarder, use this option multiple times.
--no-forwarders | Uses root servers with the DNS service instead of forwarders.
--no-reverse | Does not create a reverse DNS zone when the DNS domain is set up. If a reverse DNS zone is already configured, then that existing reverse DNS zone is used. If this option is not used, then the default value is true. This instructs the installation script to configure reverse DNS.
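To show how these options fit together, the following is a minimal sketch of an unattended server installation with integrated DNS. It is illustrative only: the host name, domain, realm, passwords, and forwarder address are placeholder values, and your deployment may need different or additional options.
# Hypothetical unattended installation of an IdM server with integrated DNS.
# All values below are placeholders; replace them with your own.
ipa-server-install -U \
    --hostname=server.idm.example.com \
    -n idm.example.com \
    -r IDM.EXAMPLE.COM \
    -p DM_password \
    -a admin_password \
    --setup-dns \
    --forwarder=192.0.2.1 \
    --no-reverse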
Additional resources ipa-server-install(1) and ipa-replica-install(1) man pages on your system ipa-dns-install(1) and ipa-ca-install(1) man pages on your system
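For convenience, here is a consolidated sketch of the preparation steps described in this chapter: opening the firewall, enabling the IdM module stream, installing the server profile with integrated DNS, and setting the required umask. It assumes the default firewalld zone and the dns profile; follow the individual procedures above for your own environment.
# Open the IdM-related firewalld services in the default zone
firewall-cmd --permanent --add-service={freeipa-4,dns}
firewall-cmd --reload

# Enable the IdM module stream and install the server profile with integrated DNS
yum module enable idm:DL1
yum distro-sync
yum module install idm:DL1/dns

# Set the file mode creation mask required by the installer
umask 0022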
[ "[user@server ~]USD dig +short -t SRV _ntp._udp.example.com 0 100 123 ntpserver .example.com.", "[user@server ~]USD dig +short -t SRV _ntp._udp.example.com 0 100 123 ntpserver .example.com.", "hostname server.idm.example.com", "ip addr show 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000 link/ether 00:1a:4a:10:4e:33 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1 /24 brd 192.0.2.255 scope global dynamic eth0 valid_lft 106694sec preferred_lft 106694sec inet6 2001:DB8::1111 /32 scope global dynamic valid_lft 2591521sec preferred_lft 604321sec inet6 fe80::56ee:75ff:fe2b:def6/64 scope link valid_lft forever preferred_lft forever", "dig +short server.idm.example.com A 192.0.2.1", "dig +short server.idm.example.com AAAA 2001:DB8::1111", "dig +short -x 192.0.2.1 server.idm.example.com", "dig +short -x 2001:DB8::1111 server.idm.example.com", "dig @ IP_address_of_the_DNS_forwarder . SOA", "dig +dnssec @ IP_address_of_the_DNS_forwarder . SOA", ";; ->>HEADER<<- opcode: QUERY, status: NOERROR , id: 48655 ;; flags: qr rd ra ad; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags: do; udp: 4096 ;; ANSWER SECTION: . 31679 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2015100701 1800 900 604800 86400 . 31679 IN RRSIG SOA 8 0 86400 20151017170000 20151007160000 62530 . GNVz7SQs [...]", "127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 192.0.2.1 server.idm.example.com server 2001:DB8::1111 server.idm.example.com server", "systemctl status firewalld.service", "systemctl start firewalld.service systemctl enable firewalld.service", "firewall-cmd --permanent --add-port={80/tcp,443/tcp,389/tcp,636/tcp,88/tcp,88/udp,464/tcp,464/udp,53/tcp,53/udp}", "firewall-cmd --permanent --add-service={freeipa-4,dns}", "firewall-cmd --reload", "firewall-cmd --runtime-to-permanent", "nmap -p 80,443,389,636,88,464,53 server.idm.example.com [...] PORT STATE SERVICE 53/tcp open domain 80/tcp open http 88/tcp open kerberos-sec 389/tcp open ldap 443/tcp open https 464/tcp open kpasswd5 636/tcp open ldapssl", "nmap -sU -p 88,464,53 server.idm.example.com [...] PORT STATE SERVICE 53/udp open domain 88/udp open|filtered kerberos-sec 464/udp open|filtered kpasswd5", "subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms", "yum module enable idm:DL1", "yum distro-sync", "yum module install idm:DL1/server", "yum module install idm:DL1/dns", "yum module install idm:DL1/adtrust", "yum module install idm:DL1/{dns,adtrust}", "yum module install idm:DL1/client", "umask 0027", "umask 0022", "umask 0027" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/installing_identity_management/preparing-the-system-for-ipa-server-installation_installing-identity-management
Appendix A. Using your subscription
Appendix A. Using your subscription Service Registry is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing your account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading ZIP and TAR files To access ZIP or TAR files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat Integration entries in the Integration and Automation category. Select the desired Service Registry product. The Software Downloads page opens. Click the Download link for your component. Revised on 2024-03-22 13:18:41 UTC
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/migrating_service_registry_deployments/using_your_subscription
8.202. rp-pppoe
8.202. rp-pppoe 8.202.1. RHEA-2014:0424 - rp-pppoe enhancement update Updated rp-pppoe packages that add one enhancement are now available for Red Hat Enterprise Linux 6. The rp-pppoe packages provide the Roaring Penguin PPPoE (Point-to-Point Protocol over Ethernet) client, a user-mode program that does not require any kernel modifications. This client is fully compliant with RFC 2516, the official PPPoE specification. Enhancement BZ# 1009268 In Red Hat Enterprise Linux 6, the adsl-setup script in the rp-pppoe packages was renamed to pppoe-setup. To assist users migrating from Red Hat Enterprise Linux 5 to Red Hat Enterprise Linux 6, a symbolic link has been created to allow the old script name to continue to work. Users of rp-pppoe are advised to upgrade to these updated packages, which add this enhancement.
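To verify the compatibility link after updating, you can check where the old script name points. The path below is an assumption (the scripts are commonly installed under /usr/sbin); adjust it for your system.
# Confirm that the legacy adsl-setup name is a symbolic link to pppoe-setup
# (/usr/sbin is an assumed install location)
ls -l /usr/sbin/adsl-setup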
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/rp-pppoe
Chapter 15. User and Group Schema
Chapter 15. User and Group Schema
When a user entry is created, it is automatically assigned certain LDAP object classes which, in turn, make available certain attributes. LDAP attributes are the way that information is stored in the directory. (This is discussed in detail in the Directory Server Deployment Guide and the Directory Server Schema Reference.)
Table 15.1. Default Identity Management User Object Classes
Object Classes | Description
ipaobject, ipasshuser | IdM object classes
person, organizationalperson, inetorgperson, inetuser, posixAccount | Person object classes
krbprincipalaux, krbticketpolicyaux | Kerberos object classes
mepOriginEntry | Managed entries (template) object classes
A number of attributes are available to user entries. Some are set manually and some are set based on defaults if a specific value is not set. There is also an option to add any attributes available in the object classes in Table 15.1, "Default Identity Management User Object Classes", even if there is not a UI or command-line argument for that attribute. Additionally, the values generated or used by the default attributes can be configured, as in Section 15.4, "Specifying Default User and Group Attributes".
Table 15.2. Default Identity Management User Attributes
UI Field | Command-Line Option | Required, Optional, or Default [a]
User login | username | Required
First name | --first | Required
Last name | --last | Required
Full name | --cn | Optional
Display name | --displayname | Optional
Initials | --initials | Default
Home directory | --homedir | Default
GECOS field | --gecos | Default
Shell | --shell | Default
Kerberos principal | --principal | Default
Email address | --email | Optional
Password | --password [b] | Optional
User ID number | --uid | Default
Group ID number | --gidnumber | Default
Street address | --street | Optional
City | --city | Optional
State/Province | --state | Optional
Zip code | --postalcode | Optional
Telephone number | --phone | Optional
Mobile telephone number | --mobile | Optional
Pager number | --pager | Optional
Fax number | --fax | Optional
Organizational unit | --orgunit | Optional
Job title | --title | Optional
Manager | --manager | Optional
Car license | --carlicense | Optional
(no UI field) | --noprivate | Optional
SSH Keys | --sshpubkey | Optional
Additional attributes | --addattr | Optional
Department Number | --departmentnumber | Optional
Employee Number | --employeenumber | Optional
Employee Type | --employeetype | Optional
Preferred Language | --preferredlanguage | Optional
[a] Required attributes must be set for every entry. Optional attributes may be set, while default attributes are automatically added with a predefined value unless a specific value is given.
[b] The script prompts for the new password, rather than accepting a value with the argument.
15.1. About Changing the Default User and Group Schema
It is possible to add or change the object classes and attributes used for user and group entries (Chapter 15, User and Group Schema). The IdM configuration provides some validation when object classes are changed:
All of the object classes and their specified attributes must be known to the LDAP server.
All default attributes that are configured for the entry must be supported by the configured object classes.
There are limits to the IdM schema validation, however. Most important, the IdM server does not check that the defined user or group object classes contain all of the required object classes for IdM entries. For example, all IdM entries require the ipaobject object class.
However, when the user or group schema is changed, the server does not check to make sure that this object class is included; if the object class is accidentally deleted, then future entry add operations will fail. Also, all object class changes are atomic, not incremental. The entire list of default object classes has to be defined every time there is a change. For example, a company may create a custom object class to store employee information like birthdays and employment start dates. The administrator cannot simply add the custom object class to the list; he must set the entire list of current default object classes plus the new object class. The existing default object classes must always be included when the configuration is updated. Otherwise, the current settings will be overwritten, which causes serious performance problems.
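As a sketch of what such an atomic update looks like, the following hypothetical commands re-supply the default object class list from Table 15.1 and append one invented custom class, customPerson. They assume the list is managed with the ipa config-mod --userobjectclasses option; check the current list on your server first, because it may differ from this example.
# Display the current default user object classes (the authoritative list)
ipa config-show --all | grep -i objectclass

# Hypothetical update: the complete default list must be repeated,
# with the custom class (customPerson) appended
ipa config-mod --userobjectclasses={ipaobject,ipasshuser,person,organizationalperson,inetorgperson,inetuser,posixAccount,krbprincipalaux,krbticketpolicyaux,mepOriginEntry,customPerson}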
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/user-schema
Builds using BuildConfig
Builds using BuildConfig Red Hat OpenShift Service on AWS 4 Contains information about builds for Red Hat OpenShift Service on AWS Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/builds_using_buildconfig/index