title | content | commands | url |
---|---|---|---|
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Let us know how we can make it better. To give feedback, create a Jira ticket: Log in to Jira. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/red_hat_openshift_data_foundation_architecture/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 4. Examples | Chapter 4. Examples The quickstart examples listed in the following table can be cloned or downloaded from the Camel Quarkus Examples Git repository. Number of Examples: 1 Example Description File consumer with Bindy and FTP Shows how to consume CSV files, marshal & unmarshal the data and send it onwards via FTP 4.1. Getting started with the file consumer quickstart example You can download or clone the quickstarts from the Camel Quarkus Examples Git repository. The example is in the file-bindy-ftp directory. Extract the contents of the zip file or clone the repository to a local folder, for example, a new folder named quickstarts. You can run this example in development mode on your local machine from the command line. Using development mode, you can iterate quickly on integrations in development and get fast feedback on your code. Refer to the Development mode section of the Camel Quarkus User guide for more details. Note If you need to configure container resource limits or enable the Quarkus Kubernetes client to trust self-signed certificates, you can find these configuration options in the src/main/resources/application.properties file. Prerequisites You have cluster admin access to the OpenShift cluster. You have access to an SFTP server and you have set the server properties (which are prefixed by ftp) in the application properties configuration file: src/main/resources/application.properties. Procedure Use Maven to build the example application in development mode: $ cd quickstarts/file-bindy-ftp $ mvn clean compile quarkus:dev The application triggers the timer component every 10 seconds, generates some random "books" data and creates a CSV file in a temporary directory with 100 entries. The following message is displayed in the console: [route1] (Camel (camel-1) thread #3 - timer://generateBooks) Generating randomized books CSV data Next, the CSV file is read by a file consumer and Bindy is used to marshal the individual data rows into Book objects: [route2] (Camel (camel-1) thread #1 - file:///tmp/books) Reading books CSV data from 89A0EE24CB03A69-0000000000000000 Next, the collection of Book objects is split into individual items and aggregated based on the genre property: [route3] (Camel (camel-1) thread #0 - AggregateTimeoutChecker) Processed 34 books for genre 'Action' [route3] (Camel (camel-1) thread #0 - AggregateTimeoutChecker) Processed 31 books for genre 'Crime' [route3] (Camel (camel-1) thread #0 - AggregateTimeoutChecker) Processed 35 books for genre 'Horror' Finally, the aggregated book collections are unmarshalled back to CSV format and uploaded to the test FTP server. 
[route4] (Camel (camel-1) thread #2 - seda://processed) Uploaded books-Action-89A0EE24CB03A69-0000000000000069.csv [route4] (Camel (camel-1) thread #2 - seda://processed) Uploaded books-Crime-89A0EE24CB03A69-0000000000000069.csv [route4] (Camel (camel-1) thread #2 - seda://processed) Uploaded books-Horror-89A0EE24CB03A69-0000000000000069.csv To run the application in JVM mode, enter the following commands: $ mvn clean package -DskipTests $ java -jar target/*-runner.jar You can build and deploy the example application to OpenShift by entering the following command: $ mvn clean package -DskipTests -Dquarkus.kubernetes.deploy=true Check that the pods are running: $ oc get pods NAME READY STATUS RESTARTS AGE camel-quarkus-examples-file-bindy-ftp-1-d72mb 1/1 Running 0 5m15s ssh-server-deployment-5f6f685658-jtr9n 1/1 Running 0 5m28s Optional: Enter the following command to monitor the application log: oc logs -f camel-quarkus-examples-file-bindy-ftp-5d48f4d85c-sjl8k Additional resources Developing Applications with Red Hat build of Apache Camel for Quarkus Camel Quarkus User guide | [
"cd quickstarts/file-bindy-ftp mvn clean compile quarkus:dev",
"[route1] (Camel (camel-1) thread #3 - timer://generateBooks) Generating randomized books CSV data",
"[route2] (Camel (camel-1) thread #1 - file:///tmp/books) Reading books CSV data from 89A0EE24CB03A69-0000000000000000",
"[route3] (Camel (camel-1) thread #0 - AggregateTimeoutChecker) Processed 34 books for genre 'Action' [route3] (Camel (camel-1) thread #0 - AggregateTimeoutChecker) Processed 31 books for genre 'Crime' [route3] (Camel (camel-1) thread #0 - AggregateTimeoutChecker) Processed 35 books for genre 'Horror'",
"[route4] (Camel (camel-1) thread #2 - seda://processed) Uploaded books-Action-89A0EE24CB03A69-0000000000000069.csv [route4] (Camel (camel-1) thread #2 - seda://processed) Uploaded books-Crime-89A0EE24CB03A69-0000000000000069.csv [route4] (Camel (camel-1) thread #2 - seda://processed) Uploaded books-Horror-89A0EE24CB03A69-0000000000000069.csv",
"mvn clean package -DskipTests java -jar target/*-runner.jar",
"mvn clean package -DskipTests -Dquarkus.kubernetes.deploy=true",
"USDoc get pods NAME READY STATUS RESTARTS AGE camel-quarkus-examples-file-bindy-ftp-1-d72mb 1/1 Running 0 5m15s ssh-server-deployment-5f6f685658-jtr9n 1/1 Running 0 5m28s",
"logs -f camel-quarkus-examples-file-bindy-ftp-5d48f4d85c-sjl8k"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/getting_started_with_red_hat_build_of_apache_camel_for_quarkus/camel-extensions-for-quarkus-application-examples |
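The file-bindy-ftp prerequisites above call for SFTP connection settings prefixed with ftp in src/main/resources/application.properties. A minimal sketch of what that file might contain is shown below; the property names and values are illustrative assumptions, not taken from the example project:

```properties
# Hypothetical SFTP connection settings for the file-bindy-ftp example.
# Verify the exact keys against the example's own application.properties.
ftp.host=sftp.example.com
ftp.port=2222
ftp.username=ftpuser
ftp.password=changeit
```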
Chapter 12. Configuring routing | Chapter 12. Configuring routing Routing is the process by which messages are delivered to their destinations. To accomplish this, AMQ Interconnect provides two routing mechanisms: message routing and link routing . Message routing Message routing is the default routing mechanism. You can use it to route messages on a per-message basis between clients directly (direct-routed messaging), or to and from broker queues (brokered messaging). Link routing A link route represents a private messaging path between a sender and a receiver in which the router passes the messages between end points. You can use it to connect a client to a service (such as a broker queue). 12.1. Configuring message routing Message routing is the default routing mechanism. You can use it to route messages on a per-message basis between clients directly (direct-routed messaging), or to and from broker queues (brokered messaging). With message routing, you can do the following: Understand message routing concepts Configure address semantics (route messages between clients) Configure addresses for prioritized message delivery Configure brokered messaging Understand address pattern matching 12.1.1. Understanding message routing With message routing, routing is performed on messages as producers send them to a router. When a message arrives on a router, the router routes the message and its settlement based on the message's address and routing pattern . 12.1.1.1. Message routing flow control AMQ Interconnect uses a credit-based flow control mechanism to ensure that producers can only send messages to a router if at least one consumer is available to receive them. Because AMQ Interconnect does not store messages, this credit-based flow control prevents producers from sending messages when there are no consumers present. A client wishing to send a message to the router must wait until the router has provided it with credit. Attempting to publish a message without credit available will cause the client to block. Once credit is made available, the client will unblock, and the message will be sent to the router. Note Most AMQP client libraries enable you to determine the amount of credit available to a producer. For more information, consult your client's documentation. 12.1.1.2. Addresses Addresses determine how messages flow through your router network. An address designates an endpoint in your messaging network, such as: Endpoint processes that consume data or offer a service Topics that match multiple consumers to multiple producers Entities within a messaging broker: Queues Durable Topics Exchanges When a router receives a message, it uses the message's address to determine where to send the message (either its destination or one step closer to its destination). AMQ Interconnect considers addresses to be mobile in that any user of an address may be directly connected to any router in the router network and may even move around the topology. In cases where messages are broadcast to or balanced across multiple consumers, the users of the address may be connected to multiple routers in the network. Mobile addresses may be discovered during normal router operation or configured through management settings. 12.1.1.3. Routing patterns Routing patterns define the paths that a message with a mobile address can take across a network. 
These routing patterns can be used for both direct routing, in which the router distributes messages between clients without a broker, and indirect routing, in which the router enables clients to exchange messages through a broker. Routing patterns fall into two categories: Anycast (Balanced and Closest) and Multicast. There is no concept of "unicast" in which there is only one consumer for an address. Anycast distribution delivers each message to one consumer whereas multicast distribution delivers each message to all consumers. Each address has one of the following routing patterns, which define the path that a message with the address can take across the messaging network: Balanced An anycast method that allows multiple consumers to use the same address. Each message is delivered to a single consumer only, and AMQ Interconnect attempts to balance the traffic load across the router network. If multiple consumers are attached to the same address, each router determines which outbound path should receive a message by considering each path's current number of unsettled deliveries. This means that more messages will be delivered along paths where deliveries are settled at higher rates. Note AMQ Interconnect neither measures nor uses message settlement time to determine which outbound path to use. In this scenario, the messages are spread across both receivers regardless of path length: Figure 12.1. Balanced Message Routing Closest An anycast method in which every message is sent along the shortest path to reach the destination, even if there are other consumers for the same address. AMQ Interconnect determines the shortest path based on the topology cost to reach each of the consumers. If there are multiple consumers with the same lowest cost, messages will be spread evenly among those consumers. In this scenario, all messages sent by Sender will be delivered to Receiver 1 : Figure 12.2. Closest Message Routing Multicast Messages are sent to all consumers attached to the address. Each consumer will receive one copy of the message. In this scenario, all messages are sent to all receivers: Figure 12.3. Multicast Message Routing 12.1.1.4. Message settlement and reliability AMQ Interconnect can deliver messages with the following degrees of reliability: At most once At least once Exactly once The level of reliability is negotiated between the producer and the router when the producer establishes a link to the router. To achieve the negotiated level of reliability, AMQ Interconnect treats all messages as either pre-settled or unsettled . Pre-settled Sometimes called fire and forget , the router settles the incoming and outgoing deliveries and propagates the settlement to the message's destination. However, it does not guarantee delivery. Unsettled AMQ Interconnect propagates the settlement between the producer and consumer. For an anycast address, the router associates the incoming delivery with the resulting outgoing delivery. Based on this association, the router propagates changes in delivery state from the consumer to the producer. For a multicast address, the router associates the incoming delivery with all outbound deliveries. The router waits for each consumer to set their delivery's final state. After all outgoing deliveries have reached their final state, the router sets a final delivery state for the original inbound delivery and passes it to the producer. 
The following table describes the reliability guarantees for unsettled messages sent to an anycast or multicast address: Final disposition Anycast Multicast accepted The consumer accepted the message. At least one consumer accepted the message, but no consumers rejected it. released The message did not reach its destination. The message did not reach any of the consumers. modified The message may or may not have reached its destination. The delivery is considered to be "in-doubt" and should be re-sent if "at least once" delivery is required. The message may or may not have reached any of the consumers. However, no consumers rejected or accepted it. rejected The consumer rejected the message. At least one consumer rejected the message. 12.1.2. Configuring address semantics You can route messages between clients without using a broker. In a brokerless scenario (sometimes called direct-routed messaging ), AMQ Interconnect routes messages between clients directly. To route messages between clients, you configure an address with a routing distribution pattern. When a router receives a message with this address, the message is routed to its destination or destinations based on the address's routing distribution pattern. Procedure In the /etc/qpid-dispatch/qdrouterd.conf configuration file, add an address section. prefix | pattern The address or group of addresses to which the address settings should be applied. You can specify a prefix to match an exact address or beginning segment of an address. Alternatively, you can specify a pattern to match an address using wildcards. A prefix matches either an exact address or the beginning segment within an address that is delimited by either a . or / character. For example, the prefix my_address would match the address my_address as well as my_address.1 and my_address/1 . However, it would not match my_address1 . A pattern matches an address that corresponds to a pattern. A pattern is a sequence of words delimited by either a . or / character. You can use wildcard characters to represent a word. The * character matches exactly one word, and the # character matches any sequence of zero or more words. The * and # characters are reserved as wildcards. Therefore, you should not use them in the message address. For more information about creating address patterns, see Section 12.1.5, "Address pattern matching" . Note You can convert a prefix value to a pattern by appending /# to it. For example, the prefix a/b/c is equivalent to the pattern a/b/c/# . distribution The message distribution pattern. The default is balanced , but you can specify any of the following options: balanced - Messages sent to the address will be routed to one of the receivers, and the routing network will attempt to balance the traffic load based on the rate of settlement. closest - Messages sent to the address are sent on the shortest path to reach the destination. It means that if there are multiple receivers for the same address, only the closest one will receive the message. multicast - Messages are sent to all receivers that are attached to the address in a publish/subscribe model. For more information about message distribution patterns, see Section 12.1.1.3, "Routing patterns" . For information about additional attributes, see address in the qdrouterd.conf man page. Add the same address section to any other routers that need to use the address. The address that you added to this router configuration file only controls how this router distributes messages sent to the address. 
If you have additional routers in your router network that should distribute messages for this address, then you must add the same address section to each of their configuration files. 12.1.3. Configuring addresses for prioritized message delivery You can set the priority level of an address to control how AMQ Interconnect processes messages sent to that address. Within the scope of a connection, AMQ Interconnect attempts to process messages based on their priority. For a connection with a large volume of messages in flight, this lowers the latency for higher-priority messages. Assigning a high priority level to an address does not guarantee that messages sent to the address will be delivered before messages sent to lower-priority addresses. However, higher-priority messages will travel more quickly through the router network than they otherwise would. Note You can also control the priority level of individual messages by setting the priority level in the message header. However, the address priority takes precedence: if you send a prioritized message to an address with a different priority level, the router will use the address priority level. Procedure In the /etc/qpid-dispatch/qdrouterd.conf configuration file, add or edit an address and assign a priority level. This example adds an address with the highest priority level. The router will attempt to deliver messages sent to this address before messages with lower priority levels. priority The priority level to assign to all messages sent to this address. The range of valid priority levels is 0-9, in which the higher the number, the higher the priority. The default is 4. Additional resources For more information about setting the priority level in a message, see the AMQP 1.0 specification . 12.1.4. Configuring brokered messaging If you require "store and forward" capabilities, you can configure AMQ Interconnect to use brokered messaging. In this scenario, clients connect to a router to send and receive messages, and the router routes the messages to or from queues on a message broker. You can configure the following: Route messages through broker queues You can route messages to a queue hosted on a single broker, or route messages to a sharded queue distributed across multiple brokers. Store and retrieve undeliverable messages on a broker queue 12.1.4.1. How AMQ Interconnect enables brokered messaging Brokered messaging enables AMQ Interconnect to store messages on a broker queue. This requires a connection to the broker, a waypoint address to represent the broker queue, and autolinks to attach to the waypoint address. An autolink is a link that is automatically created by the router to attach to a waypoint address. With autolinks, client traffic is handled on the router, not the broker. Clients attach their links to the router, and then the router uses internal autolinks to connect to the queue on the broker. Therefore, the queue will always have a single producer and a single consumer regardless of how many clients are attached to the router. Using autolinks is a form of message routing , as distinct from link routing . It is recommended to use link routing if you want to use semantics associated with a consumer, for example, the undeliverable-here=true modified delivery state. Figure 12.4. Brokered messaging In this diagram, the sender connects to the router and sends messages to my_queue. The router attaches an outgoing link to the broker, and then sends the messages to my_queue. 
Later, the receiver connects to the router and requests messages from my_queue. The router attaches an incoming link to the broker to receive the messages from my_queue, and then delivers them to the receiver. You can also route messages to a sharded queue , which is a single, logical queue comprised of multiple, underlying physical queues. Using queue sharding, it is possible to distribute a single queue over multiple brokers. Clients can connect to any of the brokers that hold a shard to send and receive messages. Figure 12.5. Brokered messaging with sharded queue In this diagram, a sharded queue (my_queue) is distributed across two brokers. The router is connected to the clients and to both brokers. The sender connects to the router and sends messages to my_queue. The router attaches an outgoing link to each broker, and then sends messages to each shard (by default, the routing distribution is balanced ). Later, the receiver connects to the router and requests all of the messages from my_queue. The router attaches an incoming link to one of the brokers to receive the messages from my_queue, and then delivers them to the receiver. 12.1.4.2. Routing messages through broker queues You can route messages to and from a broker queue to provide clients with access to the queue through a router. In this scenario, clients connect to a router to send and receive messages, and the router routes the messages to or from the broker queue. You can route messages to a queue hosted on a single broker, or route messages to a sharded queue distributed across multiple brokers. Procedure In the /etc/qpid-dispatch/qdrouterd.conf configuration file, add a waypoint address for the broker queue. A waypoint address identifies a queue on a broker to which you want to route messages. This example adds a waypoint address for the my_queue queue: prefix | pattern The address prefix or pattern that matches the broker queue to which you want to send messages. You can specify a prefix to match an exact address or beginning segment of an address. Alternatively, you can specify a pattern to match an address using wildcards. A prefix matches either an exact address or the beginning segment within an address that is delimited by either a . or / character. For example, the prefix my_address would match the address my_address as well as my_address.1 and my_address/1 . However, it would not match my_address1 . A pattern matches an address that corresponds to a pattern. A pattern is a sequence of words delimited by either a . or / character. You can use wildcard characters to represent a word. The * character matches exactly one word, and the # character matches any sequence of zero or more words. The * and # characters are reserved as wildcards. Therefore, you should not use them in the message address. For more information about creating address patterns, see Section 12.1.5, "Address pattern matching" . Note You can convert a prefix value to a pattern by appending /# to it. For example, the prefix a/b/c is equivalent to the pattern a/b/c/# . waypoint Set this attribute to yes so that the router handles messages sent to this address as a waypoint. Connect the router to the broker. Add an outgoing connection to the broker if one does not exist. If the queue is sharded across multiple brokers, you must add a connection for each broker. For more information, see Section 8.3, "Connecting to external AMQP containers" . 
Note If the connection to the broker fails, AMQ Interconnect automatically attempts to reestablish the connection and reroute message deliveries to any available alternate destinations. However, some deliveries could be returned to the sender with a RELEASED or MODIFIED disposition. Therefore, you should ensure that your clients can handle these deliveries appropriately (generally by resending them). If you want to send messages to the broker queue, add an outgoing autolink to the broker queue. If the queue is sharded across multiple brokers, you must add an outgoing autolink for each broker. This example configures an outgoing autolink to send messages to a broker queue: address The address of the broker queue. When the autolink is created, it will be attached to this address. externalAddress An optional alternate address for the broker queue. You use an external address if the broker queue should have a different address than that which the sender uses. In this scenario, senders send messages to the address address, and then the router routes them to the broker queue represented by the externalAddress address. connection | containerID How the router should connect to the broker. You can specify either an outgoing connection ( connection ) or the container ID of the broker ( containerID ). direction Set this attribute to out to specify that this autolink can send messages from the router to the broker. For information about additional attributes, see autoLink in the qdrouterd.conf man page. If you want to receive messages from the broker queue, add an incoming autolink from the broker queue: If the queue is sharded across multiple brokers, you must add an incoming autolink for each broker. This example configures an incoming autolink to receive messages from a broker queue: address The address of the broker queue. When the autolink is created, it will be attached to this address. externalAddress An optional alternate address for the broker queue. You use an external address if the broker queue should have a different address than that which the receiver uses. In this scenario, receivers receive messages from the address address, and the router retrieves them from the broker queue represented by the externalAddress address. connection | containerID How the router should connect to the broker. You can specify either an outgoing connection ( connection ) or the container ID of the broker ( containerID ). direction Set this attribute to in to specify that this autolink can receive messages from the broker to the router. For information about additional attributes, see autoLink in the qdrouterd.conf man page. 12.1.4.3. Handling undeliverable messages You handle undeliverable messages for an address by configuring autolinks that point to fallback destinations . A fallback destination (such as a queue on a broker) stores messages that are not directly routable to any consumers. During normal message delivery, AMQ Interconnect delivers messages to the consumers that are attached to the router network. However, if no consumers are reachable, the messages are diverted to any fallback destinations that were configured for the address (if the autolinks that point to the fallback destinations are active). When a consumer reconnects and becomes reachable again, it receives the messages stored at the fallback destination. Note AMQ Interconnect preserves the original delivery order for messages stored at a fallback destination. 
However, when a consumer reconnects, any new messages produced while the queue is draining will be interleaved with the messages stored at the fallback destination. Prerequisites The router is connected to a broker. For more information, see Section 8.3, "Connecting to external AMQP containers" . Procedure This procedure enables fallback for an address and configures autolinks to connect to the broker queue that provides the fallback destination for the address. In the /etc/qpid-dispatch/qdrouterd.conf configuration file, enable fallback destinations for the address. Add an outgoing autolink to a queue on the broker. For the address for which you enabled fallback, if messages are not routable to any consumers, the router will use this autolink to send the messages to a queue on the broker. If you want the router to send queued messages to attached consumers as soon as they connect to the router network, add an incoming autolink. As soon as a consumer attaches to the router, it will receive the messages stored in the broker queue, along with any new messages sent by the producer. The original delivery order of the queued messages is preserved; however, the queued messages will be interleaved with the new messages. If you do not add the incoming autolink, the messages will be stored on the broker, but will not be sent to consumers when they attach to the router. 12.1.5. Address pattern matching In some router configuration scenarios, you might need to use pattern matching to match a range of addresses rather than a single, literal address. Address patterns match any address that corresponds to the pattern. An address pattern is a sequence of tokens (typically words) that are delimited by either . or / characters. They also can contain special wildcard characters that represent words: * represents exactly one word # represents zero or more words Example 12.1. Address pattern This address contains two tokens, separated by the / delimiter: my/address Example 12.2. Address pattern with wildcard This address contains three tokens. The * is a wildcard, representing any single word that might be between my and address : my/*/address The following table shows some address patterns and examples of the addresses that would match them: This pattern... Matches... But not... news/* news/europe news/usa news news/usa/sports news/# news news/europe news/usa/sports europe usa news/europe/# news/europe news/europe/sports news/europe/politics/fr news/usa europe news/*/sports news/europe/sports news/usa/sports news news/europe/fr/sports 12.2. Creating link routes A link route represents a private messaging path between a sender and a receiver in which the router passes the messages between end points. You can use it to connect a client to a service (such as a broker queue). 12.2.1. Understanding link routing Link routing provides an alternative strategy for brokered messaging. A link route represents a private messaging path between a sender and a receiver in which the router passes the messages between end points. You can think of a link route as a "virtual connection" or "tunnel" that travels from a sender, through the router network, to a receiver. With link routing, routing is performed on link-attach frames, which are chained together to form a virtual messaging path that directly connects a sender and receiver. Once a link route is established, the transfer of message deliveries, flow frames, and dispositions is performed across the link route. 12.2.1.1. 
Link routing flow control Unlike message routing, with link routing, the sender and receiver handle flow control directly: the receiver grants link credits, which is the number of messages it is able to receive. The router sends them directly to the sender, and then the sender sends the messages based on the credits that the receiver granted. 12.2.1.2. Link route addresses A link route address represents a broker queue, topic, or other service. When a client attaches a link route address to a router, the router propagates a link attachment to the broker resource identified by the address. Using link route addresses, the router network does not participate in aggregated message distribution. The router simply passes message delivery and settlement between the two end points. 12.2.1.3. Routing patterns for link routing Routing patterns are not used with link routing, because there is a direct link between the sender and receiver. The router only makes a routing decision when it receives the initial link-attach request frame. Once the link is established, the router passes the messages along the link in a balanced distribution. 12.2.2. Creating a link route Link routes establish a link between a sender and a receiver that travels through a router. You can configure inward and outward link routes to enable the router to receive link-attaches from clients and to send them to a particular destination. With link routing, client traffic is handled on the broker, not the router. Clients have a direct link through the router to a broker's queue. Therefore, each client is a separate producer or consumer. Note If the connection to the broker fails, the routed links are detached, and the router will attempt to reconnect to the broker (or its backup). Once the connection is reestablished, the link route to the broker will become reachable again. From the client's perspective, the client will see the detached links (that is, the senders or receivers), but not the failed connection. Therefore, if you want the client to reattach dropped links in the event of a broker connection failure, you must configure this functionality on the client. Alternatively, you can use message routing with autolinks instead of link routing. For more information, see Section 12.1.4.2, "Routing messages through broker queues" . Procedure Add an outgoing connection to the broker if one does not exist. If the queue is sharded across multiple brokers, you must add a connection for each broker. For more information, see Section 8.3, "Connecting to external AMQP containers" . If you want clients to send local transactions to the broker, create a link route for the transaction coordinator: 1 The $coordinator prefix designates this link route as a transaction coordinator. When the client opens a transacted session, the requests to start and end the transaction are propagated along this link route to the broker. AMQ Interconnect does not support routing transactions to multiple brokers. If you have multiple brokers in your environment, choose a single broker and route all transactions to it. If you want clients to send messages on this link route, create an incoming link route: prefix | pattern The address prefix or pattern that matches the broker queue that should be the destination for routed link-attaches. All messages that match this prefix or pattern will be distributed along the link route. You can specify a prefix to match an exact address or beginning segment of an address. 
Alternatively, you can specify a pattern to match an address using wildcards. A prefix matches either an exact address or the beginning segment within an address that is delimited by either a . or / character. For example, the prefix my_address would match the address my_address as well as my_address.1 and my_address/1 . However, it would not match my_address1 . A pattern matches an address that corresponds to a pattern. A pattern is a sequence of words delimited by either a . or / character. You can use wildcard characters to represent a word. The * character matches exactly one word, and the # character matches any sequence of zero or more words. The * and # characters are reserved as wildcards. Therefore, you should not use them in the message address. For more information about creating address patterns, see Section 12.1.5, "Address pattern matching" . Note You can convert a prefix value to a pattern by appending /# to it. For example, the prefix a/b/c is equivalent to the pattern a/b/c/# . connection | containerID How the router should connect to the broker. You can specify either an outgoing connection ( connection ) or the container ID of the broker ( containerID ). If multiple brokers are connected to the router through this connection, requests for addresses matching the link route's prefix or pattern are balanced across the brokers. Alternatively, if you want to specify a particular broker, use containerID and add the broker's container ID. direction Set this attribute to in to specify that clients can send messages into the router network on this link route. For information about additional attributes, see linkRoute in the qdrouterd.conf man page. If you want clients to receive messages on this link route, create an outgoing link route: prefix | pattern The address prefix or pattern that matches the broker queue from which you want to receive routed link-attaches. All messages that match this prefix or pattern will be distributed along the link route. You can specify a prefix to match an exact address or beginning segment of an address. Alternatively, you can specify a pattern to match an address using wildcards. A prefix matches either an exact address or the beginning segment within an address that is delimited by either a . or / character. For example, the prefix my_address would match the address my_address as well as my_address.1 and my_address/1 . However, it would not match my_address1 . A pattern matches an address that corresponds to a pattern. A pattern is a sequence of words delimited by either a . or / character. You can use wildcard characters to represent a word. The * character matches exactly one word, and the # character matches any sequence of zero or more words. The * and # characters are reserved as wildcards. Therefore, you should not use them in the message address. For more information about creating address patterns, see Section 12.1.5, "Address pattern matching" . Note You can convert a prefix value to a pattern by appending /# to it. For example, the prefix a/b/c is equivalent to the pattern a/b/c/# . connection | containerID How the router should connect to the broker. You can specify either an outgoing connection ( connection ) or the container ID of the broker ( containerID ). If multiple brokers are connected to the router through this connection, requests for addresses matching the link route's prefix or pattern are balanced across the brokers. 
Alternatively, if you want to specify a particular broker, use containerID and add the broker's container ID. direction Set this attribute to out to specify that this link route is for receivers. For information about additional attributes, see linkRoute in the qdrouterd.conf man page. 12.2.3. Link route example: Connecting clients and brokers on different networks This example shows how a link route can connect a client to a message broker that is on a different private network. Figure 12.6. Router network with isolated clients The client is constrained by firewall policy to connect to the router in its own network ( R3 ). However, it can use a link route to access queues, topics, and any other AMQP services that are provided on message brokers B1 and B2 - even though they are on different networks. In this example, the client needs to receive messages from b2.event-queue , which is hosted on broker B2 in Private Network 1 . A link route connects the client and broker even though neither of them is aware that there is a router network between them. Router configuration To enable the client to receive messages from b2.event-queue on broker B2 , router R2 must be able to do the following: Connect to broker B2 Route links to and from broker B2 Advertise itself to the router network as a valid destination for links that have a b2.event-queue address The relevant part of the configuration file for router R2 shows the following: 1 The outgoing connection from the router to broker B2 . The route-container role enables the router to connect to an external AMQP container (in this case, a broker). 2 The incoming link route for receiving links from client senders. Any sender with a target whose address begins with b2 will be routed to broker B2 using the broker connector. 3 The outgoing link route for sending links to client receivers. Any receivers whose source address begins with b2 will be routed to broker B2 using the broker connector. This configuration enables router R2 to advertise itself as a valid destination for targets and sources starting with b2 . It also enables the router to connect to broker B2 , and to route links to and from queues starting with the b2 prefix. Note While not required, routers R1 and R3 should also have the same configuration. How the client receives messages By using the configured link route, the client can receive messages from broker B2 even though they are on different networks. Router R2 establishes a connection to broker B2 . Once the connection is open, R2 tells the other routers ( R1 and R3 ) that it is a valid destination for link routes to the b2 prefix. This means that sender and receiver links attached to R1 or R3 will be routed along the shortest path to R2 , which then routes them to broker B2 . To receive messages from the b2.event-queue on broker B2 , the client attaches a receiver link with a source address of b2.event-queue to its local router, R3 . Because the address matches the b2 prefix, R3 routes the link to R1 , which is the hop in the route to its destination. R1 routes the link to R2 , which routes it to broker B2 . The client now has a receiver established, and it can begin receiving messages. Note If broker B2 is unavailable for any reason, router R2 will not advertise itself as a destination for b2 addresses. In this case, routers R1 and R3 will reject link attaches that should be routed to broker B2 with an error message indicating that there is no route available to the destination. | [
"address { prefix: my_address distribution: multicast }",
"address { prefix: my-high-priority-address priority: 9 }",
"address { prefix: my_queue waypoint: yes }",
"autoLink { address: my_queue connection: my_broker direction: out }",
"autoLink { address: my_queue connection: my_broker direction: in }",
"address { prefix: my_address enableFallback: yes }",
"autoLink { address: my_address.2 direction: out connection: my_broker fallback: yes }",
"autoLink { address: my_address.2 direction: in connection: my_broker fallback: yes }",
"linkRoute { prefix: USDcoordinator 1 connection: my_broker direction: in }",
"linkRoute { prefix: my_queue connection: my_broker direction: in }",
"linkRoute { prefix: my_queue connection: my_broker direction: out }",
"connector { 1 name: broker role: route-container host: 192.0.2.1 port: 61617 saslMechanisms: ANONYMOUS } linkRoute { 2 prefix: b2 direction: in connection: broker } linkRoute { 3 prefix: b2 direction: out connection: broker }"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_interconnect/configuring-routing-router-rhel |
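The address, autoLink, and connector snippets listed above appear as separate fragments; a combined sketch of how they might sit together in /etc/qpid-dispatch/qdrouterd.conf for the brokered-messaging scenario is shown below. The broker host and port are placeholder assumptions:

```
# Illustrative brokered-messaging configuration assembled from the fragments above.
connector {
    name: my_broker
    role: route-container
    host: 192.0.2.10    # assumed broker host
    port: 5672          # assumed AMQP port
    saslMechanisms: ANONYMOUS
}

# Waypoint address representing the broker queue
address {
    prefix: my_queue
    waypoint: yes
}

# Autolink that sends messages from the router to the broker queue
autoLink {
    address: my_queue
    connection: my_broker
    direction: out
}

# Autolink that receives messages from the broker queue into the router
autoLink {
    address: my_queue
    connection: my_broker
    direction: in
}
```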
Chapter 6. Summarizing cluster specifications | Chapter 6. Summarizing cluster specifications 6.1. Summarizing cluster specifications by using a cluster version object You can obtain a summary of OpenShift Container Platform cluster specifications by querying the clusterversion resource. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Query cluster version, availability, uptime, and general status: $ oc get clusterversion Example output NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8 Obtain a detailed summary of cluster specifications, update availability, and update history: $ oc describe clusterversion Example output Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion # ... Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8 # ... | [
"oc get clusterversion",
"NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.13.8 True False 8h Cluster version is 4.13.8",
"oc describe clusterversion",
"Name: version Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: ClusterVersion Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce URL: https://access.redhat.com/errata/RHSA-2023:4456 Version: 4.13.8 History: Completion Time: 2023-08-17T13:20:21Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a956488d295fe5a59c8663a4d9992b9b5d0950f510a7387dbbfb8d20fc5970ce Started Time: 2023-08-17T12:59:45Z State: Completed Verified: false Version: 4.13.8"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/support/summarizing-cluster-specifications |
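Beyond the oc get and oc describe commands shown above, a single field can be pulled out with a JSONPath query, which is handy in scripts. This is a hedged sketch rather than part of the original procedure; the output shown simply mirrors the 4.13.8 example above:

```console
$ oc get clusterversion version -o jsonpath='{.status.desired.version}{"\n"}'
4.13.8
$ oc get clusterversion version -o jsonpath='{.status.history[0].state}{"\n"}'
Completed
```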
12.7. Detaching Virtual Machines from a Virtual Machine Pool | 12.7. Detaching Virtual Machines from a Virtual Machine Pool You can detach virtual machines from a virtual machine pool. Detaching a virtual machine removes it from the pool so that it becomes an independent virtual machine. Detaching Virtual Machines from a Virtual Machine Pool Click Compute → Pools. Click the pool's name to open the details view. Click the Virtual Machines tab to list the virtual machines in the pool. Ensure the virtual machine has a status of Down; you cannot detach a running virtual machine. Select one or more virtual machines and click Detach. Click OK. Note The virtual machine still exists in the environment and can be viewed and accessed from Compute → Virtual Machines. Note that the icon changes to denote that the detached virtual machine is an independent virtual machine. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/detaching_virtual_machines_from_a_vm_pool |
Chapter 25. Configuring Routes | Chapter 25. Configuring Routes 25.1. Route configuration 25.1.1. Creating an HTTP-based route A route allows you to host your application at a public URL. It can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as an administrator. You have a web application that exposes a port and a TCP endpoint listening for traffic on the port. Procedure Create a project called hello-openshift by running the following command: $ oc new-project hello-openshift Create a pod in the project by running the following command: $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: $ oc expose pod/hello-openshift Create an unsecured route to the hello-openshift application by running the following command: $ oc expose svc hello-openshift Verification To verify the route resource that you created, run the following command: $ oc get routes -o yaml <name of resource> 1 1 In this example, the route is named hello-openshift . Sample YAML definition of the created unsecured route: apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift 1 The host field is an alias DNS record that points to the service. This field can be any valid DNS name, such as www.example.com . The DNS name must follow DNS952 subdomain conventions. If not specified, a route name is automatically generated. 2 The targetPort field is the target port on pods that is selected by the service that this route points to. Note To display your default ingress domain, run the following command: $ oc get ingresses.config/cluster -o jsonpath={.spec.domain} 25.1.2. Creating a route for Ingress Controller sharding A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs. The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a project administrator. You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port. You have configured the Ingress Controller for sharding. 
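The sharding procedure that follows assumes an Ingress Controller already selects routes labeled type: sharded. A minimal sketch of what such a controller might look like is shown below; the name and domain are assumptions chosen only to match the example output later in this section:

```yaml
# Illustrative sharded Ingress Controller definition (assumed name and domain).
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: sharded
  namespace: openshift-ingress-operator
spec:
  domain: apps-sharded.basedomain.example.net
  routeSelector:
    matchLabels:
      type: sharded    # must match the label set on the sharded route
```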
Procedure Create a project called hello-openshift by running the following command: $ oc new-project hello-openshift Create a pod in the project by running the following command: $ oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: $ oc expose pod/hello-openshift Create a route definition called hello-openshift-route.yaml : YAML definition of the created route for sharding: apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift 1 Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded . 2 The route will be exposed using the value of the subdomain field. When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route will use the value of the host field, and ignore the subdomain field. Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command: $ oc -n hello-openshift create -f hello-openshift-route.yaml Verification Get the status of the route with the following command: $ oc -n hello-openshift get routes/hello-openshift-edge -o yaml The resulting Route resource should look similar to the following: Example output apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3 1 The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net> . 2 The hostname of the Ingress Controller. 3 The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded . 25.1.3. Configuring route timeouts You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end. Prerequisites You need a deployed Ingress Controller on a running cluster. Procedure Using the oc annotate command, add the timeout to the route: $ oc annotate route <route_name> \ --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1 1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute : $ oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s 25.1.4. HTTP Strict Transport Security HTTP Strict Transport Security (HSTS) policy is a security enhancement, which signals to the browser client that only HTTPS traffic is allowed on the route host. HSTS also optimizes web traffic by signaling HTTPS transport is required, without using HTTP redirects. 
HSTS is useful for speeding up interactions with websites. When HSTS policy is enforced, HSTS adds a Strict-Transport-Security header to HTTP and HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. When HSTS is enforced, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect. Cluster administrators can configure HSTS to do the following: Enable HSTS per-route Disable HSTS per-route Enforce HSTS per-domain, for a set of domains, or use namespace labels in combination with domains Important HSTS works only with secure routes, either edge-terminated or re-encrypt. The configuration is ineffective on HTTP or passthrough routes. 25.1.4.1. Enabling HTTP Strict Transport Security per-route HTTP strict transport security (HSTS) is implemented in the HAProxy template and applied to edge and re-encrypt routes that have the haproxy.router.openshift.io/hsts_header annotation. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI. Procedure To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. You can use the oc annotate tool to do this by running the following command: $ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000;\ 1 includeSubDomains;preload" 1 In this example, the maximum age is set to 31536000 seconds, which is approximately one year. Note In this example, the equal sign ( = ) is in quotes. This is required to properly execute the annotate command. Example route configured with an annotation apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 ... spec: host: def.abc.com tls: termination: "reencrypt" ... wildcardPolicy: "Subdomain" 1 Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0 , it negates the policy. 2 Optional. When included, includeSubDomains tells the client that all subdomains of the host must have the same HSTS policy as the host. 3 Optional. When max-age is greater than 0, you can add preload in haproxy.router.openshift.io/hsts_header to allow external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, even before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS, at least once, to get the header. 25.1.4.2. Disabling HTTP Strict Transport Security per-route To disable HTTP strict transport security (HSTS) per-route, you can set the max-age value in the route annotation to 0 . Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI. 
Procedure To disable HSTS, set the max-age value in the route annotation to 0 by entering the following command: $ oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Tip You can alternatively apply the following YAML to the route: Example of disabling HSTS per-route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0 To disable HSTS for every route in a namespace, enter the following command: $ oc annotate route --all -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Verification To query the annotation for all routes, enter the following command: $ oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{$a := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{$n := .metadata.name}}{{with $a}}Name: {{$n}} HSTS: {{$a}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: routename HSTS: max-age=0 25.1.4.3. Enforcing HTTP Strict Transport Security per-domain To enforce HTTP Strict Transport Security (HSTS) per-domain for secure routes, add a requiredHSTSPolicies record to the Ingress spec to capture the configuration of the HSTS policy. If you configure a requiredHSTSPolicy to enforce HSTS, then any newly created route must be configured with a compliant HSTS policy annotation. Note To handle upgraded clusters with non-compliant HSTS routes, you can update the manifests at the source and apply the updates. Note You cannot use oc expose route or oc create route commands to add a route in a domain that enforces HSTS, because the API for these commands does not accept annotations. Important HSTS cannot be applied to insecure, or non-TLS routes, even if HSTS is requested for all routes globally. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the oc CLI. Procedure Edit the Ingress config file: $ oc edit ingresses.config.openshift.io/cluster Example HSTS policy apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: 7 - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains 1 Required. requiredHSTSPolicies are validated in order, and the first matching domainPatterns applies. 2 7 Required. You must specify at least one domainPatterns hostname. Any number of domains can be listed. You can include multiple sections of enforcing options for different domainPatterns . 3 Optional. If you include namespaceSelector , it must match the labels of the project where the routes reside, to enforce the set HSTS policy on the routes. Routes that only match the namespaceSelector and not the domainPatterns are not validated. 4 Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. This policy setting allows for a smallest and largest max-age to be enforced. 
The largestMaxAge value must be between 0 and 2147483647 . It can be left unspecified, which means no upper limit is enforced. The smallestMaxAge value must be between 0 and 2147483647 . Enter 0 to disable HSTS for troubleshooting, otherwise enter 1 if you never want HSTS to be disabled. It can be left unspecified, which means no lower limit is enforced. 5 Optional. Including preload in haproxy.router.openshift.io/hsts_header allows external services to include this site in their HSTS preload lists. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, before they have interacted with the site. Without preload set, browsers need to interact at least once with the site to get the header. preload can be set with one of the following: RequirePreload : preload is required by the RequiredHSTSPolicy . RequireNoPreload : preload is forbidden by the RequiredHSTSPolicy . NoOpinion : preload does not matter to the RequiredHSTSPolicy . 6 Optional. includeSubDomainsPolicy can be set with one of the following: RequireIncludeSubDomains : includeSubDomains is required by the RequiredHSTSPolicy . RequireNoIncludeSubDomains : includeSubDomains is forbidden by the RequiredHSTSPolicy . NoOpinion : includeSubDomains does not matter to the RequiredHSTSPolicy . You can apply HSTS to all routes in the cluster or in a particular namespace by entering the oc annotate command . To apply HSTS to all routes in the cluster, enter the oc annotate command . For example: USD oc annotate route --all --all-namespaces --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000" To apply HSTS to all routes in a particular namespace, enter the oc annotate command . For example: USD oc annotate route --all -n my-namespace --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000" Verification You can review the HSTS policy you configured. For example: To review the maxAge set for required HSTS policies, enter the following command: USD oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{"\n"}{end}' To review the HSTS annotations on all routes, enter the following command: USD oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains 25.1.5. Throughput issue troubleshooting methods Sometimes applications deployed by using OpenShift Container Platform can cause network throughput issues, such as unusually high latency between specific services. If pod logs do not reveal any cause of the problem, use the following methods to analyze performance issues: Use a packet analyzer, such as ping or tcpdump to analyze traffic between a pod and its node. For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to and from a pod. Latency can occur in OpenShift Container Platform if a node interface is overloaded with traffic from other pods, storage devices, or the data plane. USD tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1 1 podip is the IP address for the pod. 
Run the oc get pod <pod_name> -o wide command to get the IP address of a pod. The tcpdump command generates a file at /tmp/dump.pcap containing all traffic between these two pods. You can run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes (eliminating the SDN from the equation) with: USD tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789 Use a bandwidth measuring tool, such as iperf , to measure streaming throughput and UDP throughput. Locate any bottlenecks by running the tool from the pods first, and then running it from the nodes. For information on installing and using iperf , see this Red Hat Solution . In some cases, the cluster may mark the node with the router pod as unhealthy due to latency issues. Use worker latency profiles to adjust the frequency that the cluster waits for a status update from the node before taking action. If your cluster has designated lower-latency and higher-latency nodes, configure the spec.nodePlacement field in the Ingress Controller to control the placement of the router pod. Additional resources Latency spikes or temporary reduction in throughput to remote workers Ingress Controller configuration parameters 25.1.6. Using cookies to keep route statefulness OpenShift Container Platform provides sticky sessions, which enables stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear. OpenShift Container Platform can use cookies to configure session persistence. The Ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the request in the session. The cookie tells the Ingress Controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod. Note Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend. If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides source IP, the same number is set for all connections and traffic is sent to the same pod. 25.1.6.1. Annotating a route with a cookie You can set a cookie name to overwrite the default, auto-generated one for the route. This allows the application receiving route traffic to know the cookie name. By deleting the cookie it can force the request to re-choose an endpoint. So, if a server was overloaded it tries to remove the requests from the client and redistribute them. Procedure Annotate the route with the specified cookie name: USD oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>" where: <route_name> Specifies the name of the route. <cookie_name> Specifies the name for the cookie. For example, to annotate the route my_route with the cookie name my_cookie : USD oc annotate route my_route router.openshift.io/cookie_name="my_cookie" Capture the route hostname in a variable: USD ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}') where: <route_name> Specifies the name of the route. 
Save the cookie, and then access the route: USD curl USDROUTE_NAME -k -c /tmp/cookie_jar Use the cookie saved by the command when connecting to the route: USD curl USDROUTE_NAME -k -b /tmp/cookie_jar 25.1.7. Path-based routes Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least. The following table shows example routes and their accessibility: Table 25.1. Route availability Route When Compared to Accessible www.example.com/test www.example.com/test Yes www.example.com No www.example.com/test and www.example.com www.example.com/test Yes www.example.com Yes www.example.com www.example.com/text Yes (Matched by the host, not the route) www.example.com Yes An unsecured route with a path apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: "/test" 1 to: kind: Service name: service-name 1 The path is the only added attribute for a path-based route. Note Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request. 25.1.8. Route-specific annotations The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route. Important To create a whitelist with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message. Table 25.2. Route annotations Variable Description Environment variable used as default haproxy.router.openshift.io/balance Sets the load-balancing algorithm. Available options are random , source , roundrobin , and leastconn . The default value is source for TLS passthrough routes. For all other routes, the default is random . ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM . haproxy.router.openshift.io/disable_cookies Disables the use of cookies to track related connections. If set to 'true' or 'TRUE' , the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request. router.openshift.io/cookie_name Specifies an optional cookie to use for this route. The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route. haproxy.router.openshift.io/pod-concurrent-connections Sets the maximum number of connections that are allowed to a backing pod from a router. Note: If there are multiple pods, each can have this many connections. If you have multiple routers, there is no coordination among them, each may connect this many times. If not set, or set to 0, there is no limit. haproxy.router.openshift.io/rate-limit-connections Setting 'true' or 'TRUE' enables rate limiting functionality which is implemented through stick-tables on the specific backend per route. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value. 
Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.rate-http Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.rate-tcp Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/timeout Sets a server-side timeout for the route. (TimeUnits) ROUTER_DEFAULT_SERVER_TIMEOUT haproxy.router.openshift.io/timeout-tunnel This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set. ROUTER_DEFAULT_TUNNEL_TIMEOUT ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after You can set either an IngressController or the ingress config . This annotation redeploys the router and configures the HA proxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft-stop. ROUTER_HARD_STOP_AFTER router.openshift.io/haproxy.health.check.interval Sets the interval for the back-end health checks. (TimeUnits) ROUTER_BACKEND_CHECK_INTERVAL haproxy.router.openshift.io/ip_whitelist Sets an allowlist for the route. The allowlist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. Requests from IP addresses that are not in the allowlist are dropped. The maximum number of IP addresses and CIDR ranges directly visible in the haproxy.config file is 61. [ 1 ] haproxy.router.openshift.io/hsts_header Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route. haproxy.router.openshift.io/rewrite-target Sets the rewrite path of the request on the backend. router.openshift.io/cookie-same-site Sets a value to restrict cookies. The values are: Lax : the browser does not send cookies on cross-site requests, but does send cookies when users navigate to the origin site from an external site. This is the default browser behavior when the SameSite value is not specified. Strict : the browser sends cookies only for same-site requests. None : the browser sends cookies for both cross-site and same-site requests. This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation . haproxy.router.openshift.io/set-forwarded-headers Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are: append : appends the header, preserving any existing header. This is the default value. replace : sets the header, removing any existing header. never : never sets the header, but preserves any existing header. if-none : sets the header if it is not already set. ROUTER_SET_FORWARDED_HEADERS If the number of IP addresses and CIDR ranges in an allowlist exceeds 61, they are written into a separate file that is then referenced from haproxy.config . This file is stored in the var/lib/haproxy/router/whitelists folder. 
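As with the other route annotations, you can set the allowlist without editing the route YAML directly. A sketch, assuming a route named myroute in the current project: oc annotate route myroute --overwrite "haproxy.router.openshift.io/ip_whitelist=180.5.61.153 192.168.1.0/24 10.0.0.0/8" Because the value is a space-delimited list, it must be passed as a single quoted argument.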
Note To ensure that the addresses are written to the allowlist, check that the full list of CIDR ranges are listed in the Ingress Controller configuration file. The etcd object size limit restricts how large a route annotation can be. Because of this, it creates a threshold for the maximum number of IP addresses and CIDR ranges that you can include in an allowlist. Note Environment variables cannot be edited. Router timeout variables TimeUnits are represented by a number followed by the unit: us *(microseconds), ms (milliseconds, default), s (seconds), m (minutes), h *(hours), d (days). The regular expression is: [1-9][0-9]*( us \| ms \| s \| m \| h \| d ). Variable Default Description ROUTER_BACKEND_CHECK_INTERVAL 5000ms Length of time between subsequent liveness checks on back ends. ROUTER_CLIENT_FIN_TIMEOUT 1s Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router. ROUTER_DEFAULT_CLIENT_TIMEOUT 30s Length of time that a client has to acknowledge or send data. ROUTER_DEFAULT_CONNECT_TIMEOUT 5s The maximum connection time. ROUTER_DEFAULT_SERVER_FIN_TIMEOUT 1s Controls the TCP FIN timeout from the router to the pod backing the route. ROUTER_DEFAULT_SERVER_TIMEOUT 30s Length of time that a server has to acknowledge or send data. ROUTER_DEFAULT_TUNNEL_TIMEOUT 1h Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads. ROUTER_SLOWLORIS_HTTP_KEEPALIVE 300s Set the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value. Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, ROUTER_SLOWLORIS_HTTP_KEEPALIVE adjusts timeout http-keep-alive . It is set to 300s by default, but HAProxy also waits on tcp-request inspect-delay , which is set to 5s . In this case, the overall timeout would be 300s plus 5s . ROUTER_SLOWLORIS_TIMEOUT 10s Length of time the transmission of an HTTP request can take. RELOAD_INTERVAL 5s Allows the minimum frequency for the router to reload and accept new changes. ROUTER_METRICS_HAPROXY_TIMEOUT 5s Timeout for the gathering of HAProxy metrics. A route setting custom timeout apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1 ... 1 Specifies the new timeout with HAProxy supported units ( us , ms , s , m , h , d ). If the unit is not provided, ms is the default. Note Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to timeout frequently on that route. 
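The hard-stop-after option from the annotations table can be set on the cluster-wide ingress configuration rather than on an individual route. A sketch, assuming a one-hour window is acceptable for your cluster: oc annotate ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after=1h As noted in the table, applying this annotation redeploys the router.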
A route that allows only one specific IP address metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 A route that allows several IP addresses metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12 A route that allows an IP address CIDR network metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24 A route that allows both IP an address and IP address CIDR networks metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8 A route specifying a rewrite target apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1 ... 1 Sets / as rewrite path of the request on the backend. Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation. The following table provides examples of the path rewriting behavior for various combinations of spec.path , request path, and rewrite target. Table 25.3. rewrite-target examples: Route.spec.path Request path Rewrite target Forwarded request path /foo /foo / / /foo /foo/ / / /foo /foo/bar / /bar /foo /foo/bar/ / /bar/ /foo /foo /bar /bar /foo /foo/ /bar /bar/ /foo /foo/bar /baz /baz/bar /foo /foo/bar/ /baz /baz/bar/ /foo/ /foo / N/A (request path does not match route path) /foo/ /foo/ / / /foo/ /foo/bar / /bar 25.1.9. Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... Tip You can alternatively apply the following YAML to configure the route admission policy: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed 25.1.10. Creating a route through an Ingress object Some ecosystem components have an integration with Ingress resources but not with route resources. To cover this case, OpenShift Container Platform automatically creates managed route objects when an Ingress object is created. These route objects are deleted when the corresponding Ingress objects are deleted. 
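You can see this relationship on the generated route, which carries an ownerReference that points back at the Ingress object. A sketch for listing routes together with their owning Ingress: oc get routes -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name Routes that were created directly, rather than generated from an Ingress object, show <none> in the OWNER column.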
Procedure Define an Ingress object in the OpenShift Container Platform console or by entering the oc create command: YAML Definition of an Ingress apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate 1 The route.openshift.io/termination annotation can be used to configure the spec.tls.termination field of the Route as Ingress has no field for this. The accepted values are edge , passthrough and reencrypt . All other values are silently ignored. When the annotation value is unset, edge is the default route. The TLS certificate details must be defined in the template file to implement the default edge route. 3 When working with an Ingress object, you must specify an explicit hostname, unlike when working with routes. You can use the <host_name>.<cluster_ingress_domain> syntax, for example apps.openshiftdemos.com , to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. Otherwise, you must ensure that there is a DNS record for the chosen hostname. If you specify the passthrough value in the route.openshift.io/termination annotation, set path to '' and pathType to ImplementationSpecific in the spec: spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443 USD oc apply -f ingress.yaml 2 The route.openshift.io/destination-ca-certificate-secret can be used on an Ingress object to define a route with a custom destination certificate (CA). The annotation references a kubernetes secret, secret-ca-cert that will be inserted into the generated route. To specify a route object with a destination CA from an ingress object, you must create a kubernetes.io/tls or Opaque type secret with a certificate in PEM-encoded format in the data.tls.crt specifier of the secret. List your routes: USD oc get routes The result includes an autogenerated route whose name starts with frontend- : NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None If you inspect this route, it looks this: YAML Definition of an autogenerated route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend 25.1.11. Creating a route using the default certificate through an Ingress object If you create an Ingress object without specifying any TLS configuration, OpenShift Container Platform generates an insecure route. To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration as follows. Prerequisites You have a service that you want to expose. 
You have access to the OpenShift CLI ( oc ). Procedure Create a YAML file for the Ingress object. In this example, the file is called example-ingress.yaml : YAML definition of an Ingress object apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend ... spec: rules: ... tls: - {} 1 1 Use this exact syntax to specify TLS without specifying a custom certificate. Create the Ingress object by running the following command: USD oc create -f example-ingress.yaml Verification Verify that OpenShift Container Platform has created the expected route for the Ingress object by running the following command: USD oc get routes -o yaml Example output apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 ... spec: ... tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3 ... 1 The name of the route includes the name of the Ingress object followed by a random suffix. 2 In order to use the default certificate, the route should not specify spec.certificate . 3 The route should specify the edge termination policy. 25.1.12. Creating a route using the destination CA certificate in the Ingress annotation The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination CA certificate. Prerequisites You may have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Procedure Create a secret for the destination CA certificate by entering the following command: USD oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path> For example: USD oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt Example output secret/dest-ca-cert created Add the route.openshift.io/destination-ca-certificate-secret to the Ingress annotations: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1 ... 1 The annotation references a kubernetes secret. The secret referenced in this annotation will be inserted into the generated route. Example output apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: ... tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- ... 25.1.13. Configuring the OpenShift Container Platform Ingress Controller for dual-stack networking If your OpenShift Container Platform cluster is configured for IPv4 and IPv6 dual-stack networking, your cluster is externally reachable by OpenShift Container Platform routes. The Ingress Controller automatically serves services that have both IPv4 and IPv6 endpoints, but you can configure the Ingress Controller for single-stack or dual-stack services. Prerequisites You deployed an OpenShift Container Platform cluster on bare metal. You installed the OpenShift CLI ( oc ). 
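Before you change any services, you can confirm that the cluster network itself is configured for dual stack. A minimal check that reads the cluster network configuration: oc get network.config/cluster -o jsonpath='{.spec.clusterNetwork[*].cidr}' A dual-stack cluster reports both an IPv4 and an IPv6 CIDR.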
Procedure To have the Ingress Controller serve traffic over IPv4/IPv6 to a workload, you can create a service YAML file or modify an existing service YAML file by setting the ipFamilies and ipFamilyPolicy fields. For example: Sample service YAML file apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: "<resource_version_number>" selfLink: "/api/v1/namespaces/<namespace_name>/services/<service_name>" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {} 1 In a dual-stack instance, there are two different clusterIPs provided. 2 For a single-stack instance, enter IPv4 or IPv6 . For a dual-stack instance, enter both IPv4 and IPv6 . 3 For a single-stack instance, enter SingleStack . For a dual-stack instance, enter RequireDualStack . These resources generate corresponding endpoints . The Ingress Controller now watches endpointslices . To view endpoints , enter the following command: USD oc get endpoints To view endpointslices , enter the following command: USD oc get endpointslices Additional resources Specifying an alternative cluster domain using the appsDomain option 25.2. Secured routes Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. The following sections describe how to create re-encrypt, edge, and passthrough routes with custom certificates. Important If you create routes in Microsoft Azure through public endpoints, the resource names are subject to restriction. You cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 25.2.1. Creating a re-encrypt route with a custom certificate You can configure a secure route using reencrypt TLS termination with a custom certificate by using the oc create route command. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You must also specify a destination CA certificate to enable the Ingress Controller to trust the service's certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , cacert.crt , and (optionally) ca.crt . Substitute the name of the Service resource that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . 
Create a secure Route resource using reencrypt TLS termination and a custom certificate: USD oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route reencrypt --help for more options. 25.2.2. Creating an edge route with a custom certificate You can configure a secure route using edge TLS termination with a custom certificate by using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , and (optionally) ca.crt . Substitute the name of the service that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . Create a secure Route resource using edge TLS termination and a custom certificate. USD oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route edge --help for more options. 25.2.3. Creating a passthrough route You can configure a secure route using passthrough termination by using the oc create route command. With passthrough termination, encrypted traffic is sent straight to the destination without the router providing TLS termination. Therefore no key or certificate is required on the route. Prerequisites You must have a service that you want to expose. 
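Because the router does not terminate TLS for passthrough routes, the pods behind the service must serve the certificate themselves. You can spot-check this from inside the cluster before creating the route. A sketch, assuming the service is named frontend in the current project, serves TLS on port 8080 as in the following procedure, and that the ubi9/ubi image is available to pull: oc run tls-check --rm -it --restart=Never --image=registry.access.redhat.com/ubi9/ubi -- curl -kvs https://frontend:8080 If the TLS handshake fails, fix the workload before exposing it with a passthrough route.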
Procedure Create a Route resource: USD oc create route passthrough route-passthrough-secured --service=frontend --port=8080 If you examine the resulting Route resource, it should look similar to the following: A Secured Route Using Passthrough Termination apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend 1 The name of the object, which is limited to 63 characters. 2 The termination field is set to passthrough . This is the only required tls field. 3 Optional insecureEdgeTerminationPolicy . The only valid values are None , Redirect , or empty for disabled. The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication. | [
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"oc expose svc hello-openshift",
"oc get routes -o yaml <name of resource> 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift",
"oc get ingresses.config/cluster -o jsonpath={.spec.domain}",
"oc new-project hello-openshift",
"oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json",
"oc expose pod/hello-openshift",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift",
"oc -n hello-openshift create -f hello-openshift-route.yaml",
"oc -n hello-openshift get routes/hello-openshift-edge -o yaml",
"apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3",
"oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1",
"oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000;\\ 1 includeSubDomains;preload\"",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"",
"oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0",
"oc annotate route --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: routename HSTS: max-age=0",
"oc edit ingresses.config.openshift.io/cluster",
"apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: 7 - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains",
"oc annotate route --all --all-namespaces --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc annotate route --all -n my-namespace --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"",
"oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{\"\\n\"}{end}'",
"oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'",
"Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains",
"tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1",
"tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789",
"oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"",
"oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"",
"ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')",
"curl USDROUTE_NAME -k -c /tmp/cookie_jar",
"curl USDROUTE_NAME -k -b /tmp/cookie_jar",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24",
"metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8",
"apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1",
"oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge",
"spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate",
"spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443",
"oc apply -f ingress.yaml",
"oc get routes",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1",
"oc create -f example-ingress.yaml",
"oc get routes -o yaml",
"apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3",
"oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path>",
"oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt",
"secret/dest-ca-cert created",
"apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: \"<resource_version_number>\" selfLink: \"/api/v1/namespaces/<namespace_name>/services/<service_name>\" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {}",
"oc get endpoints",
"oc get endpointslices",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"openssl rsa -in password_protected_tls.key -out tls.key",
"oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----",
"oc create route passthrough route-passthrough-secured --service=frontend --port=8080",
"apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/configuring-routes |
Providing feedback on Red Hat build of Quarkus documentation | Providing feedback on Red Hat build of Quarkus documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html/configuring_your_red_hat_build_of_quarkus_applications_by_using_a_properties_file/proc_providing-feedback-on-red-hat-documentation_quarkus-configuration-guide |
9.3. JSON Representation of a Data Center | 9.3. JSON Representation of a Data Center Example 9.2. A JSON representation of a data center | [
"{ \"data_center\" : [ { \"local\" : \"false\", \"storage_format\" : \"v3\", \"version\" : { \"major\" : \"4\", \"minor\" : \"0\" }, \"supported_versions\" : { \"version\" : [ { \"major\" : \"4\", \"minor\" : \"0\" } ] }, \"status\" : { \"state\" : \"up\" }, \"mac_pool\": { \"href\": \"/ovirt-engine/api/macpools/00000000-0000-0000-0000-000000000000\", \"id\": \"00000000-0000-0000-0000-000000000000\" }, \"name\" : \"Default\", \"description\" : \"The default Data Center\", \"href\" : \"/ovirt-engine/api/datacenters/00000002-0002-0002-0002-000000000255\", \"id\" : \"00000002-0002-0002-0002-000000000255\", \"link\" : [ { \"href\" : \"/ovirt-engine/api/datacenters/00000002-0002-0002-0002-000000000255/storagedomains\", \"rel\" : \"storagedomains\" }, { \"href\" : \"/ovirt-engine/api/datacenters/00000002-0002-0002-0002-000000000255/clusters\", \"rel\" : \"clusters\" }, { \"href\" : \"/ovirt-engine/api/datacenters/00000002-0002-0002-0002-000000000255/networks\", \"rel\" : \"networks\" }, { \"href\" : \"/ovirt-engine/api/datacenters/00000002-0002-0002-0002-000000000255/permissions\", \"rel\" : \"permissions\" }, { \"href\" : \"/ovirt-engine/api/datacenters/00000002-0002-0002-0002-000000000255/quotas\", \"rel\" : \"quotas\" }, { \"href\" : \"/ovirt-engine/api/datacenters/00000002-0002-0002-0002-000000000255/iscsibonds\", \"rel\" : \"iscsibonds\" }, { \"href\" : \"/ovirt-engine/api/datacenters/00000002-0002-0002-0002-000000000255/qoss\", \"rel\" : \"qoss\" } ] } ] }"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/json_representation_of_a_data_center |
Chapter 17. Troubleshooting Data Grid Server deployments | Chapter 17. Troubleshooting Data Grid Server deployments Gather diagnostic information about Data Grid Server deployments and perform troubleshooting steps to resolve issues. 17.1. Getting diagnostic reports from Data Grid Server Data Grid Server provides aggregated reports in tar.gz archives that contain diagnostic information about server instances and host systems. The report provides details about CPU, memory, open files, network sockets and routing, threads, in addition to configuration and log files. Procedure Create a CLI connection to Data Grid Server. Use the server report command to download a tar.gz archive: The command responds with the name of the report, as in the following example: Move the tar.gz file to a suitable location on your filesystem. Extract the tar.gz file with any archiving tool. 17.2. Changing Data Grid Server logging configuration at runtime Modify the logging configuration for Data Grid Server at runtime to temporarily adjust logging to troubleshoot issues and perform root cause analysis. Modifying the logging configuration through the CLI is a runtime-only operation, which means that changes: Are not saved to the log4j2.xml file. Restarting server nodes or the entire cluster resets the logging configuration to the default properties in the log4j2.xml file. Apply only to the nodes in the cluster when you invoke the CLI. Nodes that join the cluster after you change the logging configuration use the default properties. Procedure Create a CLI connection to Data Grid Server. Use the logging to make the required adjustments. List all appenders defined on the server: The command provides a JSON response such as the following: { "STDOUT" : { "name" : "STDOUT" }, "JSON-FILE" : { "name" : "JSON-FILE" }, "HR-ACCESS-FILE" : { "name" : "HR-ACCESS-FILE" }, "FILE" : { "name" : "FILE" }, "REST-ACCESS-FILE" : { "name" : "REST-ACCESS-FILE" } } List all logger configurations defined on the server: The command provides a JSON response such as the following: [ { "name" : "", "level" : "INFO", "appenders" : [ "STDOUT", "FILE" ] }, { "name" : "org.infinispan.HOTROD_ACCESS_LOG", "level" : "INFO", "appenders" : [ "HR-ACCESS-FILE" ] }, { "name" : "com.arjuna", "level" : "WARN", "appenders" : [ ] }, { "name" : "org.infinispan.REST_ACCESS_LOG", "level" : "INFO", "appenders" : [ "REST-ACCESS-FILE" ] } ] Add and modify logger configurations with the set subcommand For example, the following command sets the logging level for the org.infinispan package to DEBUG : Remove existing logger configurations with the remove subcommand. For example, the following command removes the org.infinispan logger configuration, which means the root configuration is used instead: 17.3. Gathering resource statistics from the CLI You can inspect server-collected statistics for some Data Grid Server resources with the stats command. 
Use the stats command either from the context of a resource that provides statistics (containers, caches) or with a path to such a resource: { "statistics_enabled" : true, "number_of_entries" : 0, "hit_ratio" : 0.0, "read_write_ratio" : 0.0, "time_since_start" : 0, "time_since_reset" : 49, "current_number_of_entries" : 0, "current_number_of_entries_in_memory" : 0, "off_heap_memory_used" : 0, "data_memory_used" : 0, "stores" : 0, "retrievals" : 0, "hits" : 0, "misses" : 0, "remove_hits" : 0, "remove_misses" : 0, "evictions" : 0, "average_read_time" : 0, "average_read_time_nanos" : 0, "average_write_time" : 0, "average_write_time_nanos" : 0, "average_remove_time" : 0, "average_remove_time_nanos" : 0, "required_minimum_number_of_nodes" : -1 } { "time_since_start" : -1, "time_since_reset" : -1, "current_number_of_entries" : -1, "current_number_of_entries_in_memory" : -1, "off_heap_memory_used" : -1, "data_memory_used" : -1, "stores" : -1, "retrievals" : -1, "hits" : -1, "misses" : -1, "remove_hits" : -1, "remove_misses" : -1, "evictions" : -1, "average_read_time" : -1, "average_read_time_nanos" : -1, "average_write_time" : -1, "average_write_time_nanos" : -1, "average_remove_time" : -1, "average_remove_time_nanos" : -1, "required_minimum_number_of_nodes" : -1 } 17.4. Accessing cluster health via REST Get Data Grid cluster health via the REST API. Procedure Invoke a GET request to retrieve cluster health. Data Grid responds with a JSON document such as the following: { "cluster_health":{ "cluster_name":"ISPN", "health_status":"HEALTHY", "number_of_nodes":2, "node_names":[ "NodeA-36229", "NodeB-28703" ] }, "cache_health":[ { "status":"HEALTHY", "cache_name":"___protobuf_metadata" }, { "status":"HEALTHY", "cache_name":"cache2" }, { "status":"HEALTHY", "cache_name":"mycache" }, { "status":"HEALTHY", "cache_name":"cache1" } ] } Tip Get Cache Manager status as follows: Reference See the REST v2 (version 2) API documentation for more information. 17.5. Accessing cluster health via JMX Retrieve Data Grid cluster health statistics via JMX. Procedure Connect to Data Grid server using any JMX capable tool such as JConsole and navigate to the following object: Select available MBeans to retrieve cluster health statistics. | [
"server report Downloaded report 'infinispan-<hostname>-<timestamp>-report.tar.gz'",
"Downloaded report 'infinispan-<hostname>-<timestamp>-report.tar.gz'",
"logging list-appenders",
"{ \"STDOUT\" : { \"name\" : \"STDOUT\" }, \"JSON-FILE\" : { \"name\" : \"JSON-FILE\" }, \"HR-ACCESS-FILE\" : { \"name\" : \"HR-ACCESS-FILE\" }, \"FILE\" : { \"name\" : \"FILE\" }, \"REST-ACCESS-FILE\" : { \"name\" : \"REST-ACCESS-FILE\" } }",
"logging list-loggers",
"[ { \"name\" : \"\", \"level\" : \"INFO\", \"appenders\" : [ \"STDOUT\", \"FILE\" ] }, { \"name\" : \"org.infinispan.HOTROD_ACCESS_LOG\", \"level\" : \"INFO\", \"appenders\" : [ \"HR-ACCESS-FILE\" ] }, { \"name\" : \"com.arjuna\", \"level\" : \"WARN\", \"appenders\" : [ ] }, { \"name\" : \"org.infinispan.REST_ACCESS_LOG\", \"level\" : \"INFO\", \"appenders\" : [ \"REST-ACCESS-FILE\" ] } ]",
"logging set --level=DEBUG org.infinispan",
"logging remove org.infinispan",
"stats",
"{ \"statistics_enabled\" : true, \"number_of_entries\" : 0, \"hit_ratio\" : 0.0, \"read_write_ratio\" : 0.0, \"time_since_start\" : 0, \"time_since_reset\" : 49, \"current_number_of_entries\" : 0, \"current_number_of_entries_in_memory\" : 0, \"off_heap_memory_used\" : 0, \"data_memory_used\" : 0, \"stores\" : 0, \"retrievals\" : 0, \"hits\" : 0, \"misses\" : 0, \"remove_hits\" : 0, \"remove_misses\" : 0, \"evictions\" : 0, \"average_read_time\" : 0, \"average_read_time_nanos\" : 0, \"average_write_time\" : 0, \"average_write_time_nanos\" : 0, \"average_remove_time\" : 0, \"average_remove_time_nanos\" : 0, \"required_minimum_number_of_nodes\" : -1 }",
"stats /containers/default/caches/mycache",
"{ \"time_since_start\" : -1, \"time_since_reset\" : -1, \"current_number_of_entries\" : -1, \"current_number_of_entries_in_memory\" : -1, \"off_heap_memory_used\" : -1, \"data_memory_used\" : -1, \"stores\" : -1, \"retrievals\" : -1, \"hits\" : -1, \"misses\" : -1, \"remove_hits\" : -1, \"remove_misses\" : -1, \"evictions\" : -1, \"average_read_time\" : -1, \"average_read_time_nanos\" : -1, \"average_write_time\" : -1, \"average_write_time_nanos\" : -1, \"average_remove_time\" : -1, \"average_remove_time_nanos\" : -1, \"required_minimum_number_of_nodes\" : -1 }",
"GET /rest/v2/container/health",
"{ \"cluster_health\":{ \"cluster_name\":\"ISPN\", \"health_status\":\"HEALTHY\", \"number_of_nodes\":2, \"node_names\":[ \"NodeA-36229\", \"NodeB-28703\" ] }, \"cache_health\":[ { \"status\":\"HEALTHY\", \"cache_name\":\"___protobuf_metadata\" }, { \"status\":\"HEALTHY\", \"cache_name\":\"cache2\" }, { \"status\":\"HEALTHY\", \"cache_name\":\"mycache\" }, { \"status\":\"HEALTHY\", \"cache_name\":\"cache1\" } ] }",
"GET /rest/v2/container/health/status",
"org.infinispan:type=CacheManager,name=\"default\",component=CacheContainerHealth"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_server_guide/tshoot_server |
Chapter 16. Troubleshooting Network Observability | Chapter 16. Troubleshooting Network Observability To assist in troubleshooting Network Observability issues, you can perform some troubleshooting actions. 16.1. Using the must-gather tool You can use the must-gather tool to collect information about the Network Observability Operator resources and cluster-wide resources, such as pod logs, FlowCollector , and webhook configurations. Procedure Navigate to the directory where you want to store the must-gather data. Run the following command to collect cluster-wide must-gather resources: USD oc adm must-gather --image-stream=openshift/must-gather \ --image=quay.io/netobserv/must-gather 16.2. Configuring network traffic menu entry in the OpenShift Container Platform console Manually configure the network traffic menu entry in the OpenShift Container Platform console when the network traffic menu entry is not listed in Observe menu in the OpenShift Container Platform console. Prerequisites You have installed OpenShift Container Platform version 4.10 or newer. Procedure Check if the spec.consolePlugin.register field is set to true by running the following command: USD oc -n netobserv get flowcollector cluster -o yaml Example output Optional: Add the netobserv-plugin plugin by manually editing the Console Operator config: USD oc edit console.operator.openshift.io cluster Example output Optional: Set the spec.consolePlugin.register field to true by running the following command: USD oc -n netobserv edit flowcollector cluster -o yaml Example output Ensure the status of console pods is running by running the following command: USD oc get pods -n openshift-console -l app=console Restart the console pods by running the following command: USD oc delete pods -n openshift-console -l app=console Clear your browser cache and history. Check the status of Network Observability plugin pods by running the following command: USD oc get pods -n netobserv -l app=netobserv-plugin Example output Check the logs of the Network Observability plugin pods by running the following command: USD oc logs -n netobserv -l app=netobserv-plugin Example output time="2022-12-13T12:06:49Z" level=info msg="Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info" module=main time="2022-12-13T12:06:49Z" level=info msg="listening on https://:9001" module=server 16.3. Flowlogs-Pipeline does not consume network flows after installing Kafka If you deployed the flow collector first with deploymentModel: KAFKA and then deployed Kafka, the flow collector might not connect correctly to Kafka. Manually restart the flow-pipeline pods where Flowlogs-pipeline does not consume network flows from Kafka. Procedure Delete the flow-pipeline pods to restart them by running the following command: USD oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer 16.4. Failing to see network flows from both br-int and br-ex interfaces br-ex` and br-int are virtual bridge devices operated at OSI layer 2. The eBPF agent works at the IP and TCP levels, layers 3 and 4 respectively. You can expect that the eBPF agent captures the network traffic passing through br-ex and br-int , when the network traffic is processed by other interfaces such as physical host or virtual pod interfaces. If you restrict the eBPF agent network interfaces to attach only to br-ex and br-int , you do not see any network flow. 
Manually remove the part in the interfaces or excludeInterfaces that restricts the network interfaces to br-int and br-ex . Procedure Remove the interfaces: [ 'br-int', 'br-ex' ] field. This allows the agent to fetch information from all the interfaces. Alternatively, you can specify the Layer-3 interface for example, eth0 . Run the following command: USD oc edit -n netobserv flowcollector.yaml -o yaml Example output 1 Specifies the network interfaces. 16.5. Network Observability controller manager pod runs out of memory You can increase memory limits for the Network Observability operator by editing the spec.config.resources.limits.memory specification in the Subscription object. Procedure In the web console, navigate to Operators Installed Operators Click Network Observability and then select Subscription . From the Actions menu, click Edit Subscription . Alternatively, you can use the CLI to open the YAML configuration for the Subscription object by running the following command: USD oc edit subscription netobserv-operator -n openshift-netobserv-operator Edit the Subscription object to add the config.resources.limits.memory specification and set the value to account for your memory requirements. See the Additional resources for more information about resource considerations: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: netobserv-operator namespace: openshift-netobserv-operator spec: channel: stable config: resources: limits: memory: 800Mi 1 requests: cpu: 100m memory: 100Mi installPlanApproval: Automatic name: netobserv-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: <network_observability_operator_latest_version> 2 1 For example, you can increase the memory limit to 800Mi . 2 This value should not be edited, but note that it changes depending on the most current release of the Operator. 16.6. Running custom queries to Loki For troubleshooting, can run custom queries to Loki. There are two examples of ways to do this, which you can adapt according to your needs by replacing the <api_token> with your own. Note These examples use the netobserv namespace for the Network Observability Operator and Loki deployments. Additionally, the examples assume that the LokiStack is named loki . You can optionally use a different namespace and naming by adapting the examples, specifically the -n netobserv or the loki-gateway URL. Prerequisites Installed Loki Operator for use with Network Observability Operator Procedure To get all available labels, run the following: USD oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/labels | jq To get all flows from the source namespace, my-namespace , run the following: USD oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query --data-urlencode 'query={SrcK8S_Namespace="my-namespace"}' | jq Additional resources Resource considerations 16.7. Troubleshooting Loki ResourceExhausted error Loki may return a ResourceExhausted error when network flow data sent by Network Observability exceeds the configured maximum message size. If you are using the Red Hat Loki Operator, this maximum message size is configured to 100 MiB. 
Procedure Navigate to Operators Installed Operators , viewing All projects from the Project drop-down menu. In the Provided APIs list, select the Network Observability Operator. Click the Flow Collector then the YAML view tab. If you are using the Loki Operator, check that the spec.loki.batchSize value does not exceed 98 MiB. If you are using a Loki installation method that is different from the Red Hat Loki Operator, such as Grafana Loki, verify that the grpc_server_max_recv_msg_size Grafana Loki server setting is higher than the FlowCollector resource spec.loki.batchSize value. If it is not, you must either increase the grpc_server_max_recv_msg_size value, or decrease the spec.loki.batchSize value so that it is lower than the limit. Click Save if you edited the FlowCollector . 16.8. Loki empty ring error The Loki "empty ring" error results in flows not being stored in Loki and not showing up in the web console. This error might happen in various situations. A single workaround to address them all does not exist. There are some actions you can take to investigate the logs in your Loki pods, and verify that the LokiStack is healthy and ready. Some of the situations where this error is observed are as follows: After a LokiStack is uninstalled and reinstalled in the same namespace, old PVCs are not removed, which can cause this error. Action : You can try removing the LokiStack again, removing the PVC, then reinstalling the LokiStack . After a certificate rotation, this error can prevent communication with the flowlogs-pipeline and console-plugin pods. Action : You can restart the pods to restore the connectivity. 16.9. Resource troubleshooting 16.10. LokiStack rate limit errors A rate-limit placed on the Loki tenant can result in potential temporary loss of data and a 429 error: Per stream rate limit exceeded (limit:xMB/sec) while attempting to ingest for stream . You might consider having an alert set to notify you of this error. For more information, see "Creating Loki rate limit alerts for the NetObserv dashboard" in the Additional resources of this section. You can update the LokiStack CRD with the perStreamRateLimit and perStreamRateLimitBurst specifications, as shown in the following procedure. Procedure Navigate to Operators Installed Operators , viewing All projects from the Project dropdown. Look for Loki Operator , and select the LokiStack tab. Create or edit an existing LokiStack instance using the YAML view to add the perStreamRateLimit and perStreamRateLimitBurst specifications: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: limits: global: ingestion: perStreamRateLimit: 6 1 perStreamRateLimitBurst: 30 2 tenants: mode: openshift-network managementState: Managed 1 The default value for perStreamRateLimit is 3 . 2 The default value for perStreamRateLimitBurst is 15 . Click Save . Verification Once you update the perStreamRateLimit and perStreamRateLimitBurst specifications, the pods in your cluster restart and the 429 rate-limit error no longer occurs. 16.11. Running a large query results in Loki errors When running large queries for a long time, Loki errors can occur, such as a timeout or too many outstanding requests . There is no complete corrective for this issue, but there are several ways to mitigate it: Adapt your query to add an indexed filter With Loki queries, you can query on both indexed and non-indexed fields or labels. Queries that contain filters on labels perform better. 
For example, if you query for a particular Pod, which is not an indexed field, you can add its Namespace to the query. The list of indexed fields can be found in the "Network flows format reference", in the Loki label column. Consider querying Prometheus rather than Loki Prometheus is a better fit than Loki to query on large time ranges. However, whether or not you can use Prometheus instead of Loki depends on the use case. For example, queries on Prometheus are much faster than on Loki, and large time ranges do not impact performance. But Prometheus metrics do not contain as much information as flow logs in Loki. The Network Observability OpenShift web console automatically favors Prometheus over Loki if the query is compatible; otherwise, it defaults to Loki. If your query does not run against Prometheus, you can change some filters or aggregations to make the switch. In the OpenShift web console, you can force the use of Prometheus. An error message is displayed when incompatible queries fail, which can help you figure out which labels to change to make the query compatible. For example, changing a filter or an aggregation from Resource or Pods to Owner . Consider using the FlowMetrics API to create your own metric If the data that you need isn't available as a Prometheus metric, you can use the FlowMetrics API to create your own metric. For more information, see "FlowMetrics API Reference" and "Configuring custom metrics by using FlowMetric API". Configure Loki to improve the query performance If the problem persists, you can consider configuring Loki to improve the query performance. Some options depend on the installation mode you used for Loki, such as using the Operator and LokiStack , or Monolithic mode, or Microservices mode. In LokiStack or Microservices modes, try increasing the number of querier replicas . Increase the query timeout . You must also increase the Network Observability read timeout to Loki in the FlowCollector spec.loki.readTimeout . Additional resources Network flows format reference FlowMetric API reference Configuring custom metrics by using FlowMetric API | [
"oc adm must-gather --image-stream=openshift/must-gather --image=quay.io/netobserv/must-gather",
"oc -n netobserv get flowcollector cluster -o yaml",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: false",
"oc edit console.operator.openshift.io cluster",
"spec: plugins: - netobserv-plugin",
"oc -n netobserv edit flowcollector cluster -o yaml",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: true",
"oc get pods -n openshift-console -l app=console",
"oc delete pods -n openshift-console -l app=console",
"oc get pods -n netobserv -l app=netobserv-plugin",
"NAME READY STATUS RESTARTS AGE netobserv-plugin-68c7bbb9bb-b69q6 1/1 Running 0 21s",
"oc logs -n netobserv -l app=netobserv-plugin",
"time=\"2022-12-13T12:06:49Z\" level=info msg=\"Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info\" module=main time=\"2022-12-13T12:06:49Z\" level=info msg=\"listening on https://:9001\" module=server",
"oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer",
"oc edit -n netobserv flowcollector.yaml -o yaml",
"apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: agent: type: EBPF ebpf: interfaces: [ 'br-int', 'br-ex' ] 1",
"oc edit subscription netobserv-operator -n openshift-netobserv-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: netobserv-operator namespace: openshift-netobserv-operator spec: channel: stable config: resources: limits: memory: 800Mi 1 requests: cpu: 100m memory: 100Mi installPlanApproval: Automatic name: netobserv-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: <network_observability_operator_latest_version> 2",
"oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/labels | jq",
"oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query --data-urlencode 'query={SrcK8S_Namespace=\"my-namespace\"}' | jq",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: limits: global: ingestion: perStreamRateLimit: 6 1 perStreamRateLimitBurst: 30 2 tenants: mode: openshift-network managementState: Managed"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/network_observability/installing-troubleshooting |
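For the custom Loki queries in section 16.6 above, one way to fill in the <api_token> placeholder is to reuse the token of the user currently logged in to oc, as in the following sketch. It assumes that user is authorized to read network flow logs; apart from that, it only repeats the namespace query from the command list above.

    # Assumes the logged-in user may read network flow logs.
    API_TOKEN=$(oc whoami -t)
    oc exec deployment/netobserv-plugin -n netobserv -- \
      curl -G -s -H 'X-Scope-OrgID:network' -H "Authorization: Bearer ${API_TOKEN}" \
      -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query \
      --data-urlencode 'query={SrcK8S_Namespace="my-namespace"}' | jq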
Configure data sources | Configure data sources Red Hat build of Quarkus 3.8 Red Hat Customer Content Services | [
"quarkus.datasource.db-kind=postgresql 1 quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test quarkus.datasource.jdbc.max-size=16",
"quarkus.datasource.db-kind=postgresql 1 quarkus.datasource.username=<your username> quarkus.datasource.password=<your password> quarkus.datasource.reactive.url=postgresql:///your_database quarkus.datasource.reactive.max-size=20",
"quarkus.datasource.db-kind=h2",
"quarkus.datasource.username=<your username> quarkus.datasource.password=<your password>",
"./mvnw quarkus:add-extension -Dextensions=\"jdbc-postgresql\"",
"./mvnw quarkus:add-extension -Dextensions=\"agroal\"",
"quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/hibernate_orm_test",
"quarkus.datasource.jdbc.driver=io.opentracing.contrib.jdbc.TracingDriver",
"quarkus.datasource.db-kind=other quarkus.datasource.jdbc.driver=oracle.jdbc.driver.OracleDriver quarkus.datasource.jdbc.url=jdbc:oracle:thin:@192.168.1.12:1521/ORCL_SVC quarkus.datasource.username=scott quarkus.datasource.password=tiger",
"@Inject AgroalDataSource defaultDataSource;",
"quarkus.datasource.reactive.url=postgresql:///your_database quarkus.datasource.reactive.max-size=20",
"quarkus.datasource.jdbc=false",
"quarkus.datasource.reactive=false",
"quarkus.datasource.db-kind=h2 quarkus.datasource.username=username-default quarkus.datasource.jdbc.url=jdbc:h2:mem:default quarkus.datasource.jdbc.max-size=13 quarkus.datasource.users.db-kind=h2 quarkus.datasource.users.username=username1 quarkus.datasource.users.jdbc.url=jdbc:h2:mem:users quarkus.datasource.users.jdbc.max-size=11 quarkus.datasource.inventory.db-kind=h2 quarkus.datasource.inventory.username=username2 quarkus.datasource.inventory.jdbc.url=jdbc:h2:mem:inventory quarkus.datasource.inventory.jdbc.max-size=12",
"@Inject AgroalDataSource defaultDataSource; @Inject @DataSource(\"users\") AgroalDataSource usersDataSource; @Inject @DataSource(\"inventory\") AgroalDataSource inventoryDataSource;",
"quarkus.datasource.\"pg\".db-kind=postgres quarkus.datasource.\"pg\".active=false quarkus.datasource.\"pg\".jdbc.url=jdbc:postgresql:///your_database quarkus.datasource.\"oracle\".db-kind=oracle quarkus.datasource.\"oracle\".active=false quarkus.datasource.\"oracle\".jdbc.url=jdbc:oracle:///your_database",
"%pg.quarkus.hibernate-orm.\"pg\".active=true %pg.quarkus.datasource.\"pg\".active=true Add any pg-related runtime configuration here, prefixed with \"%pg.\" %oracle.quarkus.hibernate-orm.\"oracle\".active=true %oracle.quarkus.datasource.\"oracle\".active=true Add any pg-related runtime configuration here, prefixed with \"%pg.\"",
"public class MyProducer { @Inject DataSourceSupport dataSourceSupport; @Inject @DataSource(\"pg\") AgroalDataSource pgDataSourceBean; @Inject @DataSource(\"oracle\") AgroalDataSource oracleDataSourceBean; @Produces @ApplicationScoped public AgroalDataSource dataSource() { if (dataSourceSupport.getInactiveNames().contains(\"pg\")) { return oracleDataSourceBean; } else { return pgDataSourceBean; } } }",
"quarkus.datasource.\"datasource-name\".health-exclude=true",
"enable tracing quarkus.datasource.jdbc.telemetry=true"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.8/html-single/configure_data_sources/index |
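To show how a named datasource from the snippets above is typically consumed, here is a minimal sketch assuming the "users" H2 datasource configured earlier. The class name and the users table are illustrative assumptions, not part of the Quarkus API; only the @Inject @DataSource("users") AgroalDataSource injection mirrors the examples above.

    import io.agroal.api.AgroalDataSource;
    import io.quarkus.agroal.DataSource;
    import jakarta.enterprise.context.ApplicationScoped;
    import jakarta.inject.Inject;

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    @ApplicationScoped
    public class UserCounter {

        // Named "users" datasource configured in application.properties.
        @Inject
        @DataSource("users")
        AgroalDataSource usersDataSource;

        public int countUsers() {
            // Plain JDBC against the injected pool; "users" is a hypothetical table.
            try (Connection connection = usersDataSource.getConnection();
                 Statement statement = connection.createStatement();
                 ResultSet resultSet = statement.executeQuery("SELECT COUNT(*) FROM users")) {
                return resultSet.next() ? resultSet.getInt(1) : 0;
            } catch (SQLException e) {
                throw new IllegalStateException("Query against the 'users' datasource failed", e);
            }
        }
    }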
Chapter 6. EgressFirewall [k8s.ovn.org/v1] | Chapter 6. EgressFirewall [k8s.ovn.org/v1] Description EgressFirewall describes the current egress firewall for a Namespace. Traffic from a pod to an IP address outside the cluster will be checked against each EgressFirewallRule in the pod's namespace's EgressFirewall, in order. If no rule matches (or no EgressFirewall is present) then the traffic will be allowed by default. Type object Required spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object Specification of the desired behavior of EgressFirewall. status object Observed status of EgressFirewall 6.1.1. .spec Description Specification of the desired behavior of EgressFirewall. Type object Required egress Property Type Description egress array a collection of egress firewall rule objects egress[] object EgressFirewallRule is a single egressfirewall rule object 6.1.2. .spec.egress Description a collection of egress firewall rule objects Type array 6.1.3. .spec.egress[] Description EgressFirewallRule is a single egressfirewall rule object Type object Required to type Property Type Description ports array ports specify what ports and protocols the rule applies to ports[] object EgressFirewallPort specifies the port to allow or deny traffic to to object to is the target that traffic is allowed/denied to type string type marks this as an "Allow" or "Deny" rule 6.1.4. .spec.egress[].ports Description ports specify what ports and protocols the rule applies to Type array 6.1.5. .spec.egress[].ports[] Description EgressFirewallPort specifies the port to allow or deny traffic to Type object Required port protocol Property Type Description port integer port that the traffic must match protocol string protocol (tcp, udp, sctp) that the traffic must match. 6.1.6. .spec.egress[].to Description to is the target that traffic is allowed/denied to Type object Property Type Description cidrSelector string cidrSelector is the CIDR range to allow/deny traffic to. If this is set, dnsName and nodeSelector must be unset. dnsName string dnsName is the domain name to allow/deny traffic to. If this is set, cidrSelector and nodeSelector must be unset. For a wildcard DNS name, the ' ' will match only one label. Additionally, only a single ' ' can be used at the beginning of the wildcard DNS name. For example, '*.example.com' will match 'sub1.example.com' but won't match 'sub2.sub1.example.com'. nodeSelector object nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. 6.1.7. .spec.egress[].to.nodeSelector Description nodeSelector will allow/deny traffic to the Kubernetes node IP of selected nodes. If this is set, cidrSelector and DNSName must be unset. 
Type object Property Type Description matchExpressions array matchExpressions is a list of label selector requirements. The requirements are ANDed. matchExpressions[] object A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. matchLabels object (string) matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 6.1.8. .spec.egress[].to.nodeSelector.matchExpressions Description matchExpressions is a list of label selector requirements. The requirements are ANDed. Type array 6.1.9. .spec.egress[].to.nodeSelector.matchExpressions[] Description A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. Type object Required key operator Property Type Description key string key is the label key that the selector applies to. operator string operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. values array (string) values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 6.1.10. .status Description Observed status of EgressFirewall Type object Property Type Description messages array (string) status string 6.2. API endpoints The following API endpoints are available: /apis/k8s.ovn.org/v1/egressfirewalls GET : list objects of kind EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls DELETE : delete collection of EgressFirewall GET : list objects of kind EgressFirewall POST : create an EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name} DELETE : delete an EgressFirewall GET : read the specified EgressFirewall PATCH : partially update the specified EgressFirewall PUT : replace the specified EgressFirewall /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name}/status GET : read status of the specified EgressFirewall PATCH : partially update status of the specified EgressFirewall PUT : replace status of the specified EgressFirewall 6.2.1. /apis/k8s.ovn.org/v1/egressfirewalls HTTP method GET Description list objects of kind EgressFirewall Table 6.1. HTTP responses HTTP code Reponse body 200 - OK EgressFirewallList schema 401 - Unauthorized Empty 6.2.2. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls HTTP method DELETE Description delete collection of EgressFirewall Table 6.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind EgressFirewall Table 6.3. HTTP responses HTTP code Reponse body 200 - OK EgressFirewallList schema 401 - Unauthorized Empty HTTP method POST Description create an EgressFirewall Table 6.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.5. Body parameters Parameter Type Description body EgressFirewall schema Table 6.6. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 202 - Accepted EgressFirewall schema 401 - Unauthorized Empty 6.2.3. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name} Table 6.7. Global path parameters Parameter Type Description name string name of the EgressFirewall HTTP method DELETE Description delete an EgressFirewall Table 6.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified EgressFirewall Table 6.10. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified EgressFirewall Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.12. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified EgressFirewall Table 6.13. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.14. Body parameters Parameter Type Description body EgressFirewall schema Table 6.15. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 401 - Unauthorized Empty 6.2.4. /apis/k8s.ovn.org/v1/namespaces/{namespace}/egressfirewalls/{name}/status Table 6.16. Global path parameters Parameter Type Description name string name of the EgressFirewall HTTP method GET Description read status of the specified EgressFirewall Table 6.17. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified EgressFirewall Table 6.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.19. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified EgressFirewall Table 6.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.21. Body parameters Parameter Type Description body EgressFirewall schema Table 6.22. HTTP responses HTTP code Reponse body 200 - OK EgressFirewall schema 201 - Created EgressFirewall schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/egressfirewall-k8s-ovn-org-v1 |
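To make the EgressFirewall schema above concrete, the following is a minimal sketch built only from the fields documented in this chapter; the namespace, CIDR range, and DNS name are illustrative assumptions, and the object name should follow whatever naming your cluster version requires for EgressFirewall resources.

    apiVersion: k8s.ovn.org/v1
    kind: EgressFirewall
    metadata:
      name: default            # illustrative name
      namespace: my-namespace  # rules apply only to pods in this namespace
    spec:
      egress:
        - type: Allow
          to:
            dnsName: example.com      # allow traffic to this domain
          ports:
            - port: 443
              protocol: TCP
        - type: Allow
          to:
            cidrSelector: 10.0.0.0/16 # allow traffic to this CIDR range
        - type: Deny
          to:
            cidrSelector: 0.0.0.0/0   # deny all other external traffic

Rules are checked in order, so the final Deny entry only affects traffic that did not match the Allow entries above it.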
User and group APIs | User and group APIs OpenShift Container Platform 4.15 Reference guide for user and group APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/user_and_group_apis/index |
13.9. Hot Rod Java Client | 13.9. Hot Rod Java Client Hot Rod is a binary, language neutral protocol. A Java client is able to interact with a server via the Hot Rod protocol using the Hot Rod Java Client API. Report a bug 13.9.1. Hot Rod Java Client Download Use the following steps to download the JBoss Data Grid Hot Rod Java Client: Procedure 13.2. Download Hot Rod Java Client Log into the Customer Portal at https://access.redhat.com . Click the Downloads button near the top of the page. In the Product Downloads page, click Red Hat JBoss Data Grid . Select the appropriate JBoss Data Grid version from the Version: drop down menu. Locate the Red Hat JBoss Data Grid USD{VERSION} Hot Rod Java Client entry and click the corresponding Download link. Report a bug 13.9.2. Hot Rod Java Client Configuration The Hot Rod Java client is configured both programmatically and externally using a configuration file or a properties file. The following example illustrate creation of a client instance using the available Java fluent API: Example 13.1. Client Instance Creation Configuring the Hot Rod Java client using a properties file To configure the Hot Rod Java client, edit the hotrod-client.properties file on the classpath. The following example shows the possible content of the hotrod-client.properties file. Example 13.2. Configuration Note The TCP KEEPALIVE configuration is enabled/disabled on the Hot Rod Java client either through a config property as seen in the example ( infinispan.client.hotrod.tcp_keep_alive = true/false or programmatically through the org.infinispan.client.hotrod.ConfigurationBuilder.tcpKeepAlive() method. Either of the following two constructors must be used in order for the properties file to be consumed by Red Hat JBoss Data Grid: new RemoteCacheManager(boolean start) new RemoteCacheManager() Report a bug 13.9.3. Hot Rod Java Client Basic API The following code shows how the client API can be used to store or retrieve information from a Hot Rod server using the Hot Rod Java client. This example assumes that a Hot Rod server has been started bound to the default location, localhost:11222 . Example 13.3. Basic API The RemoteCacheManager corresponds to DefaultCacheManager , and both implement BasicCacheContainer . This API facilitates migration from local calls to remote calls via Hot Rod. This can be done by switching between DefaultCacheManager and RemoteCacheManager , which is simplified by the common BasicCacheContainer interface. All keys can be retrieved from the remote cache using the keySet() method. If the remote cache is a distributed cache, the server will start a Map/Reduce job to retrieve all keys from clustered nodes and return all keys to the client. Use this method with caution if there are a large number of keys. Report a bug 13.9.4. Hot Rod Java Client Versioned API To ensure data consistency, Hot Rod stores a version number that uniquely identifies each modification. Using getVersioned , clients can retrieve the value associated with the key as well as the current version. When using the Hot Rod Java client, a RemoteCacheManager provides instances of the RemoteCache interface that accesses the named or default cache on the remote cluster. This extends the Cache interface to which it adds new methods, including the versioned API. Example 13.4. Using Versioned Methods Example 13.5. Using Replace Report a bug | [
"org.infinispan.client.hotrod.configuration.ConfigurationBuilder cb = new org.infinispan.client.hotrod.configuration.ConfigurationBuilder(); cb.tcpNoDelay(true) .connectionPool() .numTestsPerEvictionRun(3) .testOnBorrow(false) .testOnReturn(false) .testWhileIdle(true) .addServer() .host(\"localhost\") .port(11222); RemoteCacheManager rmc = new RemoteCacheManager(cb.build());",
"infinispan.client.hotrod.transport_factory = org.infinispan.client.hotrod.impl.transport.tcp.TcpTransportFactory infinispan.client.hotrod.server_list = 127.0.0.1:11222 infinispan.client.hotrod.marshaller = org.infinispan.commons.marshall.jboss.GenericJBossMarshaller infinispan.client.hotrod.async_executor_factory = org.infinispan.client.hotrod.impl.async.DefaultAsyncExecutorFactory infinispan.client.hotrod.default_executor_factory.pool_size = 1 infinispan.client.hotrod.default_executor_factory.queue_size = 10000 infinispan.client.hotrod.hash_function_impl.1 = org.infinispan.client.hotrod.impl.consistenthash.ConsistentHashV1 infinispan.client.hotrod.tcp_no_delay = true infinispan.client.hotrod.ping_on_startup = true infinispan.client.hotrod.request_balancing_strategy = org.infinispan.client.hotrod.impl.transport.tcp.RoundRobinBalancingStrategy infinispan.client.hotrod.key_size_estimate = 64 infinispan.client.hotrod.value_size_estimate = 512 infinispan.client.hotrod.force_return_values = false infinispan.client.hotrod.tcp_keep_alive = true ## below is connection pooling config maxActive=-1 maxTotal = -1 maxIdle = -1 whenExhaustedAction = 1 timeBetweenEvictionRunsMillis=120000 minEvictableIdleTimeMillis=300000 testWhileIdle = true minIdle = 1",
"//API entry point, by default it connects to localhost:11222 BasicCacheContainer cacheContainer = new RemoteCacheManager(); //obtain a handle to the remote default cache BasicCache<String, String> cache = cacheContainer.getCache(); //now add something to the cache and ensure it is there cache.put(\"car\", \"ferrari\"); assert cache.get(\"car\").equals(\"ferrari\"); //remove the data cache.remove(\"car\"); assert !cache.containsKey(\"car\") : \"Value must have been removed!\";",
"Set keys = remoteCache.keySet();",
"// To use the versioned API, remote classes are specifically needed RemoteCacheManager remoteCacheManager = new RemoteCacheManager(); RemoteCache<String, String> remoteCache = remoteCacheManager.getCache(); remoteCache.put(\"car\", \"ferrari\"); VersionedValue valueBinary = remoteCache.getVersioned(\"car\"); // removal only takes place only if the version has not been changed // in between. (a new version is associated with 'car' key on each change) assert remoteCache.removeWithVersion(\"car\", valueBinary.getVersion()); assert !remoteCache.containsKey(\"car\");",
"remoteCache.put(\"car\", \"ferrari\"); VersionedValue valueBinary = remoteCache.getVersioned(\"car\"); assert remoteCache.replaceWithVersion(\"car\", \"lamborghini\", valueBinary.getVersion());"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-hot_rod_java_client |
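Tying the hotrod-client.properties file above to the client API, the following is a minimal sketch that uses one of the two constructors noted earlier to pick the properties file up from the classpath; the cache name namedCache is an illustrative assumption and must already exist on the server.

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;

    public class PropertiesBasedClient {
        public static void main(String[] args) {
            // Reads hotrod-client.properties from the classpath and starts the manager.
            RemoteCacheManager cacheManager = new RemoteCacheManager(true);
            try {
                // "namedCache" is a hypothetical cache name defined on the server side.
                RemoteCache<String, String> cache = cacheManager.getCache("namedCache");
                cache.put("key", "value");
                assert "value".equals(cache.get("key"));
            } finally {
                cacheManager.stop();
            }
        }
    }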
4.27. cifs-utils | 4.27. cifs-utils 4.27.1. RHBA-2011:1585 - cifs-utils bug fix update An updated cifs-utils package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The cifs-utils package contains utilities for mounting and managing CIFS shares. Bug Fixes BZ# 676439 Prior to this update, mount.cifs dropped the CAP_DAC_READ_SEARCH flag together with most of the other capability flags before it performed a mount. As a result, mounting onto a directory without execute permissions failed if mount.cifs was installed as a setuid program and the user mount was configured in the /etc/fstab file. This update reinstates the CAP_DAC_READ_SEARCH flag before calling mount. Now, mounting no longer fails. BZ# 719363 Prior to this update, several mount options were missing from the mount.cifs(8) man page. With this update, the man page documents all mount options. All users of cifs-utils are advised to upgrade to this updated cifs-utils package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/cifs-utils |
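To illustrate the setup behind the first fix (BZ# 676439), the following is an example /etc/fstab entry for a user-mountable CIFS share; the server, share, mount point, and credentials file are illustrative assumptions, not values from the erratum.

    //server.example.com/share  /mnt/share  cifs  user,noauto,credentials=/home/jdoe/.smbcredentials  0 0

With an entry like this and a setuid mount.cifs, an unprivileged user can run mount /mnt/share, which is the scenario the reinstated CAP_DAC_READ_SEARCH capability addresses.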
Chapter 5. Gatekeeper operator overview | Chapter 5. Gatekeeper operator overview The Gatekeeper operator installs Gatekeeper, which is a validating webhook with auditing capabilities. Install the Gatekeeper operator on a Red Hat OpenShift Container Platform cluster from the Operator Lifecycle Manager operator catalog. With Red Hat Advanced Cluster Management for Kubernetes, you can install Gatekeeper on your hub cluster by using the Gatekeeper operator policy. After you install Gatekeeper, use it for the following benefits: Deploy and check Gatekeeper ConstraintTemplates and constraints on managed clusters by using the Red Hat Advanced Cluster Management policy integration. Enforce Kubernetes custom resource definition-based policies that run with your Open Policy Agent (OPA). Evaluate Kubernetes resource compliance requests for the Kubernetes API by using the Gatekeeper constraints. Use OPA as the policy engine and use Rego as the policy language. Prerequisite: You need a Red Hat Advanced Cluster Management for Kubernetes or Red Hat OpenShift Container Platform Plus subscription to install Gatekeeper and apply Gatekeeper policies to your cluster. To learn more about using the Gatekeeper operator, see the following resources: General support Operator channels Configuring the Gatekeeper operator Managing the Gatekeeper operator installation policies Integrating Gatekeeper constraints and constraint templates 5.1. General support To understand the support you receive from the Gatekeeper operator, see the following list: Supports current version of the Gatekeeper operator, preceding versions, and all z-stream releases of those versions. Receive maintenance support and relevant security vulnerability fixes for preceding and current versions. Support for all Red Hat OpenShift Container Platform versions that receive standard support. Note : The Gatekeeper operator is not supported on end-of-life OpenShift Container Platform versions or versions that receive extended support. To view the release notes for the Gatekeeper operator, see gatekeeper-operator-bundle . 5.2. Operator channels With the Gatekeeper operator, you have access to two types of channels to help you make upgrades. These channels are the stable channel and the y-stream version channel. With the stable channel, you can access the latest available version, whether it is an x-stream , y-stream , or z-stream . The stable channel includes the latest version of the latest y-stream channel. With the y-stream version channel, you can access all the z-stream versions for a particular y-stream . 5.3. Configuring the Gatekeeper operator Install the Gatekeeper operator from the Operator Lifecycle Manager catalog to install Gatekeeper on your cluster. With Red Hat Advanced Cluster Management you can use a policy to install the Gatekeeper operator by using the governance framework. After you install the Gatekeeper operator, configure the Gatekeeper operator custom resource to install Gatekeeper. 5.3.1. Prerequisites Required access : Cluster administrator. Understand how to use the Operator Lifecycle Manager (OLM) and the OperatorHub by completing the Adding Operators to a cluster and the Additional resources section in the OpenShift Container Platform documentation . 5.3.2. Gatekeeper custom resource sample The Gatekeeper operator custom resource tells the Gatekeeper operator to start the Gatekeeper installation on the cluster. 
To install Gatekeeper, use the following sample YAML, which includes sample and default values: apiVersion: operator.gatekeeper.sh/v1alpha1 kind: Gatekeeper metadata: name: gatekeeper spec: audit: replicas: 1 auditEventsInvolvedNamespace: Enabled 1 logLevel: DEBUG auditInterval: 10s constraintViolationLimit: 55 auditFromCache: Enabled auditChunkSize: 66 emitAuditEvents: Enabled containerArguments: 2 - name: "" value: "" resources: limits: cpu: 500m memory: 150Mi requests: cpu: 500m memory: 130Mi validatingWebhook: Enabled mutatingWebhook: Enabled webhook: replicas: 3 emitAdmissionEvents: Enabled admissionEventsInvolvedNamespace: Enabled 3 disabledBuiltins: - http.send operations: 4 - "CREATE" - "UPDATE" - "CONNECT" failurePolicy: Fail containerArguments: 5 - name: "" value: "" resources: limits: cpu: 480m memory: 140Mi requests: cpu: 400m memory: 120Mi nodeSelector: region: "EMEA" affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: auditKey: "auditValue" topologyKey: topology.kubernetes.io/zone tolerations: - key: "Example" operator: "Exists" effect: "NoSchedule" podAnnotations: some-annotation: "this is a test" other-annotation: "another test" config: 6 matches: - excludedNamespaces: ["test-*", "my-namespace"] processes: ["*"] disableDefaultMatches: false 7 1 For version 3.14 and later, enable the auditEventsInvolvedNamespace parameter to manage the namespace audit event you want to create. When you enable this parameter, the Gatekeeper controller deployment runs with the following argument: --audit-events-involved-namespace=true . 3 For version 3.14 and later, enable the admissionEventsInvolvedNamespace parameter to manage the namespace admission event you want to create. When you enable this parameter, the Gatekeeper controller deployment runs with the following argument: --admission-events-involved-namespace=true . 4 For version 3.14 and later, to manage your webhook operations, use the following values for the operations parameter, "CREATE" , "UPDATE" , "CONNECT" , and "DELETE" . 2 5 For version 3.17 and later, specify containerArguments by providing a list of argument names and values to pass to the container. Omit leading dashes from the argument name. An omitted value is treated as true . Arguments that you provide are ignored if the argument is set previously by the operator or configurations from other fields. See the following list of flags that are deny-listed and are not currently supported: port prometheus-port health-addr validating-webhook-configuration-name mutating-webhook-configuration-name disable-cert-rotation client-cert-name tls-min-version 6 Use the config section to exclude namespaces from certain processes for all constraints on your hub cluster. 7 The disableDefaultMatches parameter is a boolean parameter that disables appending the default exempt namespaces provided by the Gatekeeper operator. The default exempt namespaces are OpenShift Container Platform or Kubernetes system namespaces. By default, this parameter is set to false to allow the default namespaces to be appended. 5.3.3. Configuring auditFromCache for sync details For versions 3.14 or later, the Gatekeeper operator exposes a setting in the Gatekeeper operator custom resource for the audit configuration with the auditFromCache parameter, which is disabled by default. Configure the auditFromCache parameter to collect resources from constraints. 
When you set the auditFromCache parameter to Automatic , the Gatekeeper operator collects resources from constraints and inserts those resources into your Gatekeeper Config resource. If the resource does not exist, the Gatekeeper operator creates the Config resource. If you set the auditFromCache parameter to Enabled , you need to manually set the Gatekeeper Config resource with the objects to sync to the cache. For more information, see Configuring Audit in the Gatekeeper documentation. To configure the auditFromCache parameter for resource collection from constraints, complete the following steps: Set auditFromCache to Automatic in the Gatekeeper resource. See the following example: apiVersion: operator.gatekeeper.sh/v1alpha1 kind: Gatekeeper metadata: name: gatekeeper spec: audit: replicas: 2 logLevel: DEBUG auditFromCache: Automatic To verify that the resources are added to your Config resource, view that the syncOnly parameter section is added. Run the following command: oc get configs.config.gatekeeper.sh config -n openshift-gatekeeper-system Your Config resource might resemble the following example: apiVersion: config.gatekeeper.sh/v1alpha1 kind: Config metadata: name: config namespace: "openshift-gatekeeper-system" spec: sync: syncOnly: - group: "" version: "v1" kind: "Namespace" - group: "" version: "v1" kind: "Pod" Optional: You can view the explanation of the auditFromCache setting from the description of the Gatekeeper operator custom resource by running the following command: oc explain gatekeeper.spec.audit.auditFromCache 5.3.4. Additional resources For more information, see Configuring Audit in the Gatekeeper documentation. 5.4. Managing the Gatekeeper operator installation policies Use the Red Hat Advanced Cluster Management policy to install the Gatekeeper operator and Gatekeeper on a managed cluster. Required access : Cluster administrator To create, view, and update your Gatekeeper operator installation policies, complete the following sections: Installing Gatekeeper using a Gatekeeper operator policy Creating a Gatekeeper policy from the console Upgrading Gatekeeper and the Gatekeeper operator Disabling Gatekeeper operator policy Deleting Gatekeeper operator policy Uninstalling Gatekeeper constraints, Gatekeeper instance, and Gatekeeper operator policy 5.4.1. Installing Gatekeeper using a Gatekeeper operator policy To install the Gatekeeper operator policy, use the configuration policy controller. During the install, the operator group and subscription pull the Gatekeeper operator to install it on your managed cluster. Then, the policy creates a Gatekeeper custom resource to configure Gatekeeper. The Red Hat Advanced Cluster Management configuration policy controller checks the Gatekeeper operator policy and supports the enforce remediation action. When you set the controller to enforce it automatically creates the Gatekeeper operator objects on the managed cluster. 5.4.2. Creating a Gatekeeper policy from the console When you create a Gatekeeper policy from the console, you must set your remediation enforce to install Gatekeeper. 5.4.2.1. Viewing the Gatekeeper operator policy To view your Gatekeeper operator policy and its status from the console, complete the following steps: Select the policy-gatekeeper-operator policy to view more details. Select the Clusters tab to view the policy violations. 5.4.3. Upgrading Gatekeeper and the Gatekeeper operator You can upgrade the versions for Gatekeeper and the Gatekeeper operator. 
When you install the Gatekeeper operator with the Gatekeeper operator policy, notice the value for upgradeApproval . The operator upgrades automatically when you set upgradeApproval to Automatic . If you set upgradeApproval to Manual , you must manually approve the upgrade for each cluster where the Gatekeeper operator is installed. 5.4.4. Disabling Gatekeeper operator policy To disable your policy-gatekeeper-operator policy, select the Disable option from the Actions menu in the console, or set spec.disabled: true from the CLI. 5.4.5. Deleting Gatekeeper operator policy To delete your Gatekeeper operator policy from your CLI, complete the following steps: Delete Gatekeeper operator policy by running the following command: oc delete policies.policy.open-cluster-management.io <policy-gatekeeper-operator-name> -n <namespace> Verify that you deleted your policy by running the following command: oc get policies.policy.open-cluster-management.io <policy-gatekeeper-operator-name> -n <namespace> To delete your Gatekeeper operator policy from the console, click the Actions icon for the policy-gatekeeper-operator policy and select Delete . 5.4.6. Uninstalling Gatekeeper constraints, Gatekeeper instance, and Gatekeeper operator policy To uninstall Gatekeeper policy, complete the steps in the following sections: Removing Gatekeeper constraints Removing Gatekeeper instance Removing Gatekeeper operator 5.4.6.1. Removing Gatekeeper constraints To remove the Gatekeeper constraint and ConstraintTemplate from your managed cluster, complete the following steps: Edit your Gatekeeper constraint or ConstraintTemplate policy. Locate the template that you used to create the Gatekeeper Constraint and ConstraintTemplate . Delete the entries from the list of templates. (Or delete the policy if they're the only templates.) Save and apply the policy. Note: The constraint and ConstraintTemplate are provided directly in the policy-templates instead of within a ConfigurationPolicy . 5.4.6.2. Removing Gatekeeper instance To remove the Gatekeeper instance from your managed cluster, complete the following steps: Edit your Gatekeeper operator policy. Locate the ConfigurationPolicy template that you used to create the Gatekeeper operator custom resource. Change the value for complianceType of the ConfigurationPolicy template to mustnothave . Changing the value deletes the Gatekeeper operator custom resource, signaling to the Gatekeeper operator to clean up the Gatekeeper deployment. 5.4.6.3. Removing Gatekeeper operator To remove the Gatekeeper operator from your managed cluster, complete the following steps: Edit your Gatekeeper operator policy. Locate the OperatorPolicy template that you used to create the Subscription CR. Change the value for complianceType of the OperatorPolicy template to mustnothave . 5.4.7. Additional resources For more details, see the following resources: Integrating Gatekeeper constraints and constraint templates . Policy Gatekeeper . For an explanation of the optional parameters that can be used for the Gatekeeper operator policy, see Gatekeeper Helm Chart . 5.5. Integrating Gatekeeper constraints and constraint templates To create Gatekeeper policies, use ConstraintTemplates and constraints. Add templates and constraints to the policy-templates of a Policy resource. 
View the following YAML examples that use Gatekeeper constraints in Red Hat Advanced Cluster Management policies: ConstraintTemplates and constraints: Use the Gatekeeper integration feature by using Red Hat Advanced Cluster Management policies for multicluster distribution of Gatekeeper constraints and Gatekeeper audit results aggregation on the hub cluster. The following example defines a Gatekeeper ConstraintTemplate and constraint ( K8sRequiredLabels ) to ensure the gatekeeper label is set on all namespaces: apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: require-gatekeeper-labels-on-ns spec: remediationAction: inform 1 disabled: false policy-templates: - objectDefinition: apiVersion: templates.gatekeeper.sh/v1beta1 kind: ConstraintTemplate metadata: name: k8srequiredlabels annotations: policy.open-cluster-management.io/severity: low 2 spec: crd: spec: names: kind: K8sRequiredLabels validation: openAPIV3Schema: properties: labels: type: array items: string targets: - target: admission.k8s.gatekeeper.sh rego: | package k8srequiredlabels violation[{"msg": msg, "details": {"missing_labels": missing}}] { provided := {label | input.review.object.metadata.labels[label]} required := {label | label := input.parameters.labels[_]} missing := required - provided count(missing) > 0 msg := sprintf("you must provide labels: %v", [missing]) } - objectDefinition: apiVersion: constraints.gatekeeper.sh/v1beta1 kind: K8sRequiredLabels metadata: name: ns-must-have-gk annotations: policy.open-cluster-management.io/severity: low 3 spec: enforcementAction: dryrun match: kinds: - apiGroups: [""] kinds: ["Namespace"] parameters: labels: ["gatekeeper"] 1 Since the remediationAction is set to inform , the enforcementAction field of the Gatekeeper constraint is overridden to warn . This means that Gatekeeper detects and warns you about creating or updating a namespace that is missing the gatekeeper label. If the policy remediationAction is set to enforce , the Gatekeeper constraint enforcementAction field is overridden to deny . In this context, this configuration prevents any user from creating or updating a namespace that is missing the gatekeeper label. 2 3 Optional: Set a severity value for the policy.open-cluster-management.io/severity annotation for each Gatekeeper constraint or constraint template. Valid values are the same as for other Red Hat Advanced Cluster Management policy types: low , medium , high , or critical . With the policy, you might receive the following policy status message: warn - you must provide labels: {"gatekeeper"} (on Namespace default); warn - you must provide labels: {"gatekeeper"} (on Namespace gatekeeper-system) . When you delete Gatekeeper constraints or ConstraintTemplates from a policy, the constraints and ConstraintTemplates are also deleted from your managed cluster. To view the Gatekeeper audit results for a specific managed cluster from the console, go to to the policy template Results page. If search is enabled, view the YAML of the Kubernetes objects that failed the audit. Notes: The Related resources section is only available when Gatekeeper generates audit results. The Gatekeeper audit runs every minute by default. Audit results are sent back to the hub cluster to be viewed in the Red Hat Advanced Cluster Management policy status of the managed cluster. 
policy-gatekeeper-admission : Use the policy-gatekeeper-admission configuration policy within a Red Hat Advanced Cluster Management policy to check for Kubernetes API requests denied by the Gatekeeper admission webhook. View the following example: apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-gatekeeper-admission spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Event metadata: namespace: openshift-gatekeeper-system 2 annotations: constraint_action: deny constraint_kind: K8sRequiredLabels constraint_name: ns-must-have-gk event_type: violation 1 The ConfigurationPolicy remediationAction parameter is overwritten by remediationAction in the parent policy. 2 Set to the actual namespace where Gatekeeper is running if it is different. 5.5.1. Additional resources For more details, see the following resources: policy-gatekeeper-operator.yaml What is OPA Gatekeeper? Creating configuration policies Governance | [
"apiVersion: operator.gatekeeper.sh/v1alpha1 kind: Gatekeeper metadata: name: gatekeeper spec: audit: replicas: 1 auditEventsInvolvedNamespace: Enabled 1 logLevel: DEBUG auditInterval: 10s constraintViolationLimit: 55 auditFromCache: Enabled auditChunkSize: 66 emitAuditEvents: Enabled containerArguments: 2 - name: \"\" value: \"\" resources: limits: cpu: 500m memory: 150Mi requests: cpu: 500m memory: 130Mi validatingWebhook: Enabled mutatingWebhook: Enabled webhook: replicas: 3 emitAdmissionEvents: Enabled admissionEventsInvolvedNamespace: Enabled 3 disabledBuiltins: - http.send operations: 4 - \"CREATE\" - \"UPDATE\" - \"CONNECT\" failurePolicy: Fail containerArguments: 5 - name: \"\" value: \"\" resources: limits: cpu: 480m memory: 140Mi requests: cpu: 400m memory: 120Mi nodeSelector: region: \"EMEA\" affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchLabels: auditKey: \"auditValue\" topologyKey: topology.kubernetes.io/zone tolerations: - key: \"Example\" operator: \"Exists\" effect: \"NoSchedule\" podAnnotations: some-annotation: \"this is a test\" other-annotation: \"another test\" config: 6 matches: - excludedNamespaces: [\"test-*\", \"my-namespace\"] processes: [\"*\"] disableDefaultMatches: false 7",
"apiVersion: operator.gatekeeper.sh/v1alpha1 kind: Gatekeeper metadata: name: gatekeeper spec: audit: replicas: 2 logLevel: DEBUG auditFromCache: Automatic",
"get configs.config.gatekeeper.sh config -n openshift-gatekeeper-system",
"apiVersion: config.gatekeeper.sh/v1alpha1 kind: Config metadata: name: config namespace: \"openshift-gatekeeper-system\" spec: sync: syncOnly: - group: \"\" version: \"v1\" kind: \"Namespace\" - group: \"\" version: \"v1\" kind: \"Pod\"",
"explain gatekeeper.spec.audit.auditFromCache",
"delete policies.policy.open-cluster-management.io <policy-gatekeeper-operator-name> -n <namespace>",
"get policies.policy.open-cluster-management.io <policy-gatekeeper-operator-name> -n <namespace>",
"apiVersion: policy.open-cluster-management.io/v1 kind: Policy metadata: name: require-gatekeeper-labels-on-ns spec: remediationAction: inform 1 disabled: false policy-templates: - objectDefinition: apiVersion: templates.gatekeeper.sh/v1beta1 kind: ConstraintTemplate metadata: name: k8srequiredlabels annotations: policy.open-cluster-management.io/severity: low 2 spec: crd: spec: names: kind: K8sRequiredLabels validation: openAPIV3Schema: properties: labels: type: array items: string targets: - target: admission.k8s.gatekeeper.sh rego: | package k8srequiredlabels violation[{\"msg\": msg, \"details\": {\"missing_labels\": missing}}] { provided := {label | input.review.object.metadata.labels[label]} required := {label | label := input.parameters.labels[_]} missing := required - provided count(missing) > 0 msg := sprintf(\"you must provide labels: %v\", [missing]) } - objectDefinition: apiVersion: constraints.gatekeeper.sh/v1beta1 kind: K8sRequiredLabels metadata: name: ns-must-have-gk annotations: policy.open-cluster-management.io/severity: low 3 spec: enforcementAction: dryrun match: kinds: - apiGroups: [\"\"] kinds: [\"Namespace\"] parameters: labels: [\"gatekeeper\"]",
"apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-gatekeeper-admission spec: remediationAction: inform 1 severity: low object-templates: - complianceType: mustnothave objectDefinition: apiVersion: v1 kind: Event metadata: namespace: openshift-gatekeeper-system 2 annotations: constraint_action: deny constraint_kind: K8sRequiredLabels constraint_name: ns-must-have-gk event_type: violation"
] | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/governance/gk-operator-overview |
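A quick way to spot-check the integration described above is to query the distributed constraint directly on a managed cluster, where Gatekeeper records audit violations under the constraint's status field, and to check the aggregated policy status on the hub cluster. The constraint name ns-must-have-gk and the policy namespace are taken from the example; substitute your own values if they differ:
oc get k8srequiredlabels ns-must-have-gk -o yaml
oc get policies.policy.open-cluster-management.io -n <namespace>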
Chapter 5. RoleBinding [authorization.openshift.io/v1] | Chapter 5. RoleBinding [authorization.openshift.io/v1] Description RoleBinding references a Role, but not contain it. It can reference any Role in the same namespace or in the global namespace. It adds who information via (Users and Groups) OR Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace (excepting the master namespace which has power in all namespaces). Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required subjects roleRef 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources groupNames array (string) GroupNames holds all the groups directly bound to the role. This field should only be specified when supporting legacy clients and servers. See Subjects for further details. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata roleRef ObjectReference RoleRef can only reference the current namespace and the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error. Since Policy is a singleton, this is sufficient knowledge to locate a role. subjects array (ObjectReference) Subjects hold object references to authorize with this rule. This field is ignored if UserNames or GroupNames are specified to support legacy clients and servers. Thus newer clients that do not need to support backwards compatibility should send only fully qualified Subjects and should omit the UserNames and GroupNames fields. Clients that need to support backwards compatibility can use this field to build the UserNames and GroupNames. userNames array (string) UserNames holds all the usernames directly bound to the role. This field should only be specified when supporting legacy clients and servers. See Subjects for further details. 5.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/rolebindings GET : list objects of kind RoleBinding /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings GET : list objects of kind RoleBinding POST : create a RoleBinding /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings/{name} DELETE : delete a RoleBinding GET : read the specified RoleBinding PATCH : partially update the specified RoleBinding PUT : replace the specified RoleBinding 5.2.1. /apis/authorization.openshift.io/v1/rolebindings HTTP method GET Description list objects of kind RoleBinding Table 5.1. HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty 5.2.2. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings HTTP method GET Description list objects of kind RoleBinding Table 5.2. 
HTTP responses HTTP code Reponse body 200 - OK RoleBindingList schema 401 - Unauthorized Empty HTTP method POST Description create a RoleBinding Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body RoleBinding schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 202 - Accepted RoleBinding schema 401 - Unauthorized Empty 5.2.3. /apis/authorization.openshift.io/v1/namespaces/{namespace}/rolebindings/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the RoleBinding HTTP method DELETE Description delete a RoleBinding Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified RoleBinding Table 5.9. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified RoleBinding Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified RoleBinding Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body RoleBinding schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK RoleBinding schema 201 - Created RoleBinding schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/role_apis/rolebinding-authorization-openshift-io-v1 |
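The endpoints listed above can also be exercised directly with an HTTP client. The following is a minimal sketch that lists the RoleBinding objects in one namespace; the bearer token, API server address, and namespace are placeholders rather than defaults:
curl -k -H "Authorization: Bearer <token>" https://<api-server>/apis/authorization.openshift.io/v1/namespaces/<namespace>/rolebindings
The other verbs described in the tables above follow the same path pattern.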
7.158. nss-pam-ldapd | 7.158. nss-pam-ldapd 7.158.1. RHBA-2013:0413 - nss-pam-ldapd bug fix update Updated nss-pam-ldapd packages that fix three bugs are now available for Red Hat Enterprise Linux 6. The nss-pam-ldapd packages provides the nss-pam-ldapd daemon (nslcd), which uses a directory server to look up name service information on behalf of a lightweight nsswitch module. Bug Fixes BZ#747281 Prior to this update, the disconnect logic contained a misprint and a failure return value was missing. This update corrects the misprint and adds the missing return value. BZ#769289 Prior to this update, the nslcd daemon performed the idle time expiration check for the LDAP connection before starting an LDAP search operation. On a lossy network or if the LDAP server was under a heavy load, the connection could time out after the successful check and the search operation then failed. With this update, the idle time expiration test is now performed during the LDAP search operation so that the connection now no longer expires under these circumstances. BZ#791042 Prior to this update, when the nslcd daemon requested access to a large group, a buffer provided by the glibc library could not contain such a group and retried again with a larger buffer to process the operation successfully. As a consequence, redundant error messages were logged in the /var/log/message file. This update makes sure that even when glibc provides a buffer that is too small on first attempt in the described scenario, no redundant error messages are returned. All users of nss-pam-ldapd are advised to upgrade to these updated packages, which fix these bugs. 7.158.2. RHSA-2013:0590 - Important: nss-pam-ldapd security update Updated nss-pam-ldapd packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. The nss-pam-ldapd packages provide the nss-pam-ldapd daemon (nslcd), which uses a directory server to lookup name service information on behalf of a lightweight nsswitch module. Security Fix CVE-2013-0288 An array index error, leading to a stack-based buffer overflow flaw, was found in the way nss-pam-ldapd managed open file descriptors. An attacker able to make a process have a large number of open file descriptors and perform name lookups could use this flaw to cause the process to crash or, potentially, execute arbitrary code with the privileges of the user running the process. Red Hat would like to thank Garth Mollett for reporting this issue. All users of nss-pam-ldapd are advised to upgrade to these updated packages, which contain a backported patch to fix this issue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/nss-pam-ldapd |
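To pick up the fixes described in these advisories, a typical sequence on an affected Red Hat Enterprise Linux 6 system is to update the package and then restart the daemon so that the running instance uses the patched code; the service name nslcd is assumed to match the default init script:
yum update nss-pam-ldapd
service nslcd restart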
Chapter 2. What is deployed with AMQ Streams | Chapter 2. What is deployed with AMQ Streams Apache Kafka components are provided for deployment to OpenShift with the AMQ Streams distribution. The Kafka components are generally run as clusters for availability. A typical deployment incorporating Kafka components might include: Kafka cluster of broker nodes ZooKeeper cluster of replicated ZooKeeper instances Kafka Connect cluster for external data connections Kafka MirrorMaker cluster to mirror the Kafka cluster in a secondary cluster Kafka Exporter to extract additional Kafka metrics data for monitoring Kafka Bridge to make HTTP-based requests to the Kafka cluster Not all of these components are mandatory, though you need Kafka and ZooKeeper as a minimum. Some components can be deployed without Kafka, such as MirrorMaker or Kafka Connect. 2.1. Order of deployment The required order of deployment to an OpenShift cluster is as follows: Deploy the Cluster operator to manage your Kafka cluster Deploy the Kafka cluster with the ZooKeeper cluster, and include the Topic Operator and User Operator in the deployment Optionally deploy: The Topic Operator and User Operator standalone if you did not deploy them with the Kafka cluster Kafka Connect Kafka MirrorMaker Kafka Bridge Components for the monitoring of metrics 2.2. Additional deployment configuration options The deployment procedures in this guide describe a deployment using the example installation YAML files provided with AMQ Streams. The procedures highlight any important configuration considerations, but they do not describe all the configuration options available. You can use custom resources to refine your deployment. You may wish to review the configuration options available for Kafka components before you deploy AMQ Streams. For more information on the configuration through custom resources, see Deployment configuration in the Using AMQ Streams on OpenShift guide. 2.2.1. Securing Kafka On deployment, the Cluster Operator automatically sets up TLS certificates for data encryption and authentication within your cluster. AMQ Streams provides additional configuration options for encryption , authentication and authorization , which are described in the Using AMQ Streams on OpenShift guide: Secure data exchange between the Kafka cluster and clients by configuration of Kafka resources . Configure your deployment to use an authorization server to provide OAuth 2.0 authentication and OAuth 2.0 authorization . Secure Kafka using your own certificates . 2.2.2. Monitoring your deployment AMQ Streams supports additional deployment options to monitor your deployment. Extract metrics and monitor Kafka components by deploying Prometheus and Grafana with your Kafka cluster . Extract additional metrics, particularly related to monitoring consumer lag, by deploying Kafka Exporter with your Kafka cluster . Track messages end-to-end by setting up distributed tracing , as described in the Using AMQ Streams on OpenShift guide. | null | https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_and_upgrading_amq_streams_on_openshift/deploy-options_str |
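For reference, the first step in the deployment order above is usually performed by applying the example installation files shipped with the AMQ Streams distribution. The directory install/cluster-operator and the namespace my-kafka-project below are assumptions based on those example files, not fixed values:
oc apply -f install/cluster-operator -n my-kafka-project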
23.2. The Text Mode Installation Program User Interface | 23.2. The Text Mode Installation Program User Interface While text mode installations are not explicitly documented, those using the text mode installation program can easily follow the GUI installation instructions. However, because text mode presents you with a simpler, more streamlined installation process, certain options that are available in graphical mode are not also available in text mode. These differences are noted in the description of the installation process in this guide, and include: Interactively activating FCP LUNs configuring advanced storage methods such as LVM, RAID, FCoE, zFCP, and iSCSI. customizing the partition layout customizing the bootloader layout selecting packages during installation configuring the installed system with firstboot | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch23s02 |
2.6. DML Clauses | 2.6. DML Clauses 2.6.1. DML Clauses DML clauses are used in various SQL commands (see Section 2.5.1, "DML Commands" ) to specify particular relations and how to present them. Nearly all these features follow standard SQL syntax and functionality, so any SQL reference can be used for more information. 2.6.2. WITH Clause JBoss Data Virtualization supports non-recursive common table expressions via the WITH clause. WITH clause items may be referenced as tables in subsequent WITH clause items and in the main query. The WITH clause can be thought of as providing query-scoped temporary tables. Usage: Syntax Rules: All of the projected column names must be unique. If they are not unique, then the column name list must be provided. If the columns of the WITH clause item are declared, then they must match the number of columns projected by the query expression. Each WITH clause item must have a unique name. Note The WITH clause is also subject to optimization and its entries may not be processed if they are not needed in the subsequent query. 2.6.3. Recursive Common Table Expressions A recursive common table expression is a special form of a common table expression that is allowed to refer to itself to build the full common table result in a recursive or iterative fashion. The recursive query expression is allowed to refer to the common table by name. Processing flows with the anchor query expression executed first. The results are added to the common table and are referenced for the execution of the recursive query expression. The process is repeated against the new results until there are no more intermediate results. Important A non-terminating recursive common table expression can lead to excessive processing. To prevent runaway processing of a recursive common table expression, processing is by default limited to 10000 iterations. Recursive common table expressions that are pushed down are not subject to this limit, but may be subject to other source-specific limits. The limit can be modified by setting the session variable teiid.maxRecusion to a larger integer value. Once the maximum has been exceeded, an exception is thrown. This fails because the recursion limit is reached before processing completes: 2.6.4. SELECT Clause SQL queries start with the SELECT keyword and are often referred to as "SELECT statements". JBoss Data Virtualization supports most of the standard SQL query constructs. Usage: Syntax Rules: Aliased expressions are only used as the output column names and in the ORDER BY clause. They cannot be used in other clauses of the query. DISTINCT may only be specified if the SELECT symbols are comparable. 2.6.5. FROM Clause The FROM clause specifies the target table(s) for SELECT, UPDATE, and DELETE statements. Example Syntax: FROM table [[AS] alias] FROM table1 [INNER|LEFT OUTER|RIGHT OUTER|FULL OUTER] JOIN table2 ON join-criteria FROM table1 CROSS JOIN table2 FROM (subquery) [AS] alias FROM TABLE(subquery) [AS] alias Note See Section 2.6.7, "Nested Tables" . FROM table1 JOIN /*+ MAKEDEP */ table2 ON join-criteria FROM table1 JOIN /*+ MAKENOTDEP */ table2 ON join-criteria FROM /*+ MAKEIND */ table1 JOIN table2 ON join-criteria FROM /*+ NO_UNNEST */ vw1 JOIN table2 ON join-criteria FROM table1 left outer join /*+ optional */ table2 ON join-criteria Note See Section 2.5.10, "Subqueries" . FROM TEXTTABLE... Note See Section 2.6.8, "Nested Tables: TEXTTABLE" . FROM XMLTABLE... Note See Section 2.6.9, "Nested Tables: XMLTABLE" . FROM ARRAYTABLE... 
Note See Section 2.6.10, "Nested Tables: ARRAYTABLE" . FROM OBJECTTABLE... Note See Section 2.6.11, "Nested Tables: OBJECTTABLE" . FROM (SELECT ...) Note See Section 2.5.10, "Subqueries" . 2.6.6. FROM Clause Hints From clause hints are typically specified in a comment block. If multiple hints apply, they should be placed in the same comment block. For example: Dependent Joins Hints MAKEIND, MAKEDEP, and MAKENOTDEP are hints used to control dependent join behavior (see Section 13.7.3, "Dependent Joins" ). They should only be used in situations where the optimizer does not choose the most optimal plan based upon query structure, metadata, and costing information. The hints may appear in a comment following the FROM keyword. The hints can be specified against any FROM clause, not just a named table. NO_UNNEST NO_UNNEST can be specified against a FROM clause or view to instruct the planner not to merge the nested SQL in the surrounding query - also known as view flattening. This hint only applies to JBoss Data Virtualization planning and is not passed to source queries. NO_UNNEST may appear in a comment following the FROM keyword. PRESERVE The PRESERVE hint can be used against an ANSI join tree to preserve the structure of the join rather than allowing the JBoss Data Virtualization optimizer to reorder the join. This is similar in function to the Oracle ORDERED or MySQL STRAIGHT_JOIN hints. 2.6.7. Nested Tables Nested tables may appear in the FROM clause with the TABLE keyword. They are an alternative to using a view with normal join semantics. The columns projected from the command contained in the nested table may be used just as any of the other FROM clause projected columns in join criteria, the where clause, etc. A nested table may have correlated references to preceding FROM clause column references as long as INNER and LEFT OUTER joins are used. This is especially useful in cases where the nested expression is a procedure or function call. Valid example: select * from t1, TABLE(call proc(t1.x)) t2 Invalid example, since t1 appears after the nested table in the FROM clause: select * from TABLE(call proc(t1.x)) t2, t1 Note The usage of a correlated nested table may result in multiple executions of the table expression - once for each correlated row. 2.6.8. Nested Tables: TEXTTABLE The TEXTTABLE function processes character input to produce tabular output. It supports both fixed and delimited file format parsing. The function itself defines what columns it projects. The TEXTTABLE function is implicitly a nested table and may be used within FROM clauses. Parameters expression is the text content to process, which should be convertible to CLOB. SELECTOR specifies that delimited lines should only match if the line begins with the selector string followed by a delimiter. The selector value is a valid column value. If a TEXTTABLE SELECTOR is specified, a SELECTOR may also be specified for column values. A column SELECTOR argument will select the nearest preceding text line with the given SELECTOR prefix and select the value at the given 1-based integer position (which includes the selector itself). If no such text line or position with a given line exists, a null value will be produced. NO ROW DELIMITER indicates that fixed parsing should not assume the presence of newline row delimiters. DELIMITER sets the field delimiter character to use. Defaults to ','. QUOTE sets the quote, or qualifier, character used to wrap field values. Defaults to '"'. 
ESCAPE sets the escape character to use if no quoting character is in use. This is used in situations where the delimiter or new line characters are escaped with a preceding character, e.g. \, HEADER specifies the text line number (counting every new line) on which the column names occur. All lines prior to the header will be skipped. If HEADER is specified, then the header line will be used to determine the TEXTTABLE column position by case-insensitive name matching. This is especially useful in situations where only a subset of the columns are needed. If the HEADER value is not specified, it defaults to 1. If HEADER is not specified, then columns are expected to match positionally with the text contents. SKIP specifies the number of text lines (counting every new line) to skip before parsing the contents. You can still specify a HEADER with SKIP. A FOR ORDINALITY column is typed as integer and will return the 1-based item number as its value. WIDTH indicates the fixed-width length of a column in characters - not bytes. The CR NL newline value counts as a single character. NO TRIM specifies that the text value should not be trimmed of all leading and trailing white space. Syntax Rules: If width is specified for one column it must be specified for all columns and be a non-negative integer. If width is specified, then fixed width parsing is used and ESCAPE, QUOTE, and HEADER should not be specified. If width is not specified, then NO ROW DELIMITER cannot be used. The column names must not contain duplicates. Examples Use of the HEADER parameter, returns 1 row ['b']: SELECT * FROM TEXTTABLE(UNESCAPE('col1,col2,col3\na,b,c') COLUMNS col2 string HEADER) x Use of fixed width, returns 2 rows ['a', 'b', 'c'], ['d', 'e', 'f']: SELECT * FROM TEXTTABLE(UNESCAPE('abc\ndef') COLUMNS col1 string width 1, col2 string width 1, col3 string width 1) x Use of fixed width without a row delimiter, returns 3 rows ['a'], ['b'], ['c']: SELECT * FROM TEXTTABLE('abc' COLUMNS col1 string width 1 NO ROW DELIMITER) x Use of ESCAPE parameter, returns 1 row ['a,', 'b']: SELECT * FROM TEXTTABLE('a:,,b' COLUMNS col1 string, col2 string ESCAPE ':') x As a nested table: SELECT x.* FROM t, TEXTTABLE(t.clobcolumn COLUMNS first string, second date SKIP 1) x Use of SELECTOR, returns 2 rows ['c', 'd', 'b'], ['c', 'f', 'b']: 2.6.9. Nested Tables: XMLTABLE The XMLTABLE function uses XQuery to produce tabular output. The XMLTABLE function is implicitly a nested table and may be used within FROM clauses. XMLTABLE is part of the SQL/XML 2006 specification. Usage: See XMLELEMENT for the definition of NSP - XMLNAMESPACES. See XMLQUERY for the definition of PASSING. Note See also XQuery Optimization. Parameters The optional XMLNAMESPACES clause specifies the namespaces for use in the XQuery and COLUMN path expressions. The xquery-expression must be a valid XQuery. Each sequence item returned by the xquery will be used to create a row of values as defined by the COLUMNS clause. If COLUMNS is not specified, then that is the same as having the COLUMNS clause: "COLUMNS OBJECT_VALUE XML PATH '.'", which returns the entire item as an XML value. A FOR ORDINALITY column is typed as integer and will return the one-based item number as its value. Each non-ordinality column specifies a type and optionally a PATH and a DEFAULT expression. If PATH is not specified, then the path will be the same as the column name. Syntax Rules: Only 1 FOR ORDINALITY column may be specified. The columns names must not contain duplicates. 
The blob data type is supported, but there is only built-in support for xs:hexBinary values. For xs:base64Binary, use a workaround of a PATH that uses the explicit value constructor "xs:base64Binary(<path>)". Examples Use of passing, returns 1 row [1]: select * from xmltable('/a' PASSING xmlparse(document '<a id="1"/>') COLUMNS id integer PATH '@id') x As a nested table: select x.* from t, xmltable('/x/y' PASSING t.doc COLUMNS first string, second FOR ORDINALITY) x 2.6.10. Nested Tables: ARRAYTABLE The ARRAYTABLE function processes an array input to produce tabular output. The function itself defines what columns it projects. The ARRAYTABLE function is implicitly a nested table and may be used within FROM clauses. Usage: ARRAYTABLE(expression COLUMNS <COLUMN>, ...) AS name COLUMN := name datatype Parameters expression - the array to process, which should be a java.sql.Array or java array value. Syntax Rules: The columns names must not contain duplicates. Examples As a nested table: ARRAYTABLE is effectively a shortcut for using the array_get function (see Section 2.4.19, "Miscellaneous Functions" ) in a nested table. For example: is the same as 2.6.11. Nested Tables: OBJECTTABLE The OBJECTTABLE function processes an object input to produce tabular output. The function itself defines what columns it projects. The OBJECTTABLE function is implicitly a nested table and may be correlated to preceding FROM clause entries. Usage: Parameters lang - an optional string literal that is the case sensitive language name of the scripts to be processed. The script engine must be available via a JSR-223 ScriptEngineManager lookup. In some instances this may mean making additional modules available to your VDB, which can be done via the same process as adding modules/libraries for UDFs (see Non-Pushdown Support for User-Defined Functions in the Development Guide: Server Development ). If a LANGUAGE is not specified, the default of 'teiid_script' (see below) will be used. name - an identifier that will bind the val expression value into the script context. rowScript is a string literal specifying the script to create the row values. For each non-null item the Iterator produces the columns will be evaluated. colName/colType are the id/data type of the column, which can optionally be defaulted with the DEFAULT clause expression defaultExpr. colScript is a string literal specifying the script that evaluates to the column value. Syntax Rules: The column names must be not contain duplicates. JBoss Data Virtualization will place several special variables in the script execution context. The CommandContext is available as teiid_context. Additionally the colScripts may access teiid_row and teiid_row_number. teiid_row is the current row object produced by the row script. teiid_row_number is the current 1-based row number. rowScript is evaluated to an Iterator. If the results is already an Iterator, it is used directly. If the evaluation result is an Iteratable, then an Iterator will be obtained. Any other Object will be treated as an Iterator of a single item). In all cases null row values will be skipped. Note While there is no restriction what can be used as a PASSING variable names you should choose names that can be referenced as identifiers in the target language. Examples Accessing special variables: The result would be a row with two columns containing the user name and 1 respectively. 
Note Due to their mostly unrestricted access to Java functionality, usage of languages other than teiid_script is restricted by default. A VDB must declare all allowable languages by name in the allowed-languages VDB property (see Section 6.1, "VDB Definition" ) using a comma separated list. The names are case sensitive names and should be separated without whitespace. Without this property it is not possible to use OBJECTTABLE even from within view definitions that are not subject to normal permission checks. Data Roles are also secured with User Query Permissions. teiid_script teiid_script is a simple scripting expression language that allows access to passing and special variables as well as any non-void 0-argument methods on objects. A teiid_script expression begins by referencing the passing or special variable. Then any number of .method accessors may be chained to evaluate the expression to a different value. Methods may be accessed by their property names, for example foo rather than getFoo. If the object both a getFoo() and foo() method, then the accessor foo references foo() and getFoo should be used to call the getter. teiid_script is effectively dynamically typed as typing is performed at runtime. If a accessor does not exist on the object or if the method is not accessible, then an exception will be raised. Examples To get the VDB description string: 2.6.12. WHERE Clause The WHERE clause defines the criteria to limit the records affected by SELECT, UPDATE, and DELETE statements. Usage: See Also: Section 2.3.11, "Criteria" 2.6.13. GROUP BY Clause The GROUP BY clause denotes that rows should be grouped according to the specified expression values. One row will be returned for each group, after optionally filtering those aggregate rows based on a HAVING clause. Usage: Syntax Rules: Column references in the GROUP BY clause must be unaliased output columns. Expressions used in the GROUP BY clause must appear in the SELECT clause. Column references and expressions in the SELECT clause that are not used in the GROUP BY clause must appear in aggregate functions. If an aggregate function is used in the SELECT clause and no GROUP BY is specified, an implicit GROUP BY will be performed with the entire result set as a single group. In this case, every column in the SELECT must be an aggregate function as no other column value will be fixed across the entire group. The group by columns must be of a comparable type. Just like normal grouping, rollup processing logically occurs before the HAVING clause is processed. A ROLLUP of expressions will produce the same output as a regular grouping with the addition of aggregate values computed at higher aggregation levels. For N expressions in the ROLLUP , aggregates will be provided over (), (expr1), (expr1, expr2) and so on, up to (expr1, ... exprN-1) with the other grouping expressions in the output as null values. Here is an example using the normal aggregation query: This is what is returned: Table 2.10. Returned Data Country City Amount US St Louis 10000 US Raleigh 150000 US Denver 20000 UK Birmingham 50000 UK London 75000 In contrast, here is the rollup query: This is what it returns: Table 2.11. Returned Data from Rollup Country City Amount US St Louis 10000 US Raleigh 150000 US Denver 20000 US Null 180000 UK Birmingham 50000 UK London 75000 UK Null 125000 Note Not all sources support ROLLUPs and some optimizations compared to normal aggregate processing may be inhibited by the use of a ROLLUP. 
Note Support for ROLLUPs in Red Hat JBoss Data Virtualization is currently limited, compared to the SQL specification. 2.6.14. HAVING Clause The HAVING clause operates exactly as a WHERE clause although it operates on the output of a GROUP BY. It supports the same syntax as the WHERE clause. Syntax Rules: Expressions used in the GROUP BY clause must either contain an aggregate function: COUNT, AVG, SUM, MIN, MAX. or be one of the grouping expressions. 2.6.15. ORDER BY Clause The ORDER BY clause specifies how records should be sorted. The options are ASC (ascending) and DESC (descending). Usage: Syntax Rules: Sort columns may be specified positionally by a 1-based positional integer, by SELECT clause alias name, by SELECT clause expression, or by an unrelated expression. Column references may appear in the SELECT clause as the expression for an aliased column or may reference columns from tables in the FROM clause. If the column reference is not in the SELECT clause the query must not be a set operation, specify SELECT DISTINCT, or contain a GROUP BY clause. Unrelated expressions, expressions not appearing as an aliased expression in the SELECT clause, are allowed in the ORDER BY clause of a non-set QUERY. The columns referenced in the expression must come from the FROM clause table references. The column references cannot be to alias names or positional. The ORDER BY columns must be of a comparable type. If an ORDER BY is used in an inline view or view definition without a LIMIT clause, it will be removed by the JBoss Data Virtualization optimizer. If NULLS FIRST/LAST is specified, then nulls are guaranteed to be sorted either first or last. If the null ordering is not specified, then results will typically be sorted with nulls as low values, which is the JBoss Data Virtualization internal default sorting behavior. However not all sources return results with nulls sorted as low values by default, and JBoss Data Virtualization may return results with different null orderings. Warning The use of positional ordering is no longer supported by the ANSI SQL standard and is a deprecated feature in JBoss Data Virtualization. It is preferable to use alias names in the ORDER BY clause. 2.6.16. LIMIT Clause The LIMIT clause specifies a limit on the number of records returned from the SELECT command. An optional offset (the number of rows to skip) can be specified. The LIMIT clause can also be specified using the SQL 2008 OFFSET/FETCH FIRST clauses. If an ORDER BY is also specified, it will be applied before the OFFSET/LIMIT are applied. If an ORDER BY is not specified there is generally no guarantee what subset of rows will be returned. Usage: Syntax Rules: The limit/offset expressions must be a non-negative integer or a parameter reference (?). An offset of 0 is ignored. A limit of 0 will return no rows. The terms FIRST/ are interchangeable as well as ROW/ROWS. The LIMIT clause may take an optional preceding NON_STRICT hint to indicate that push operations should not be inhibited even if the results will not be consistent with the logical application of the limit. The hint is only needed on unordered limits, e.g. "SELECT * FROM VW /*+ NON_STRICT */ LIMIT 2". Examples: LIMIT 100 - returns the first 100 records (rows 1-100) LIMIT 500, 100 - skips 500 records and returns the 100 records (rows 501-600) OFFSET 500 ROWS - skips 500 records OFFSET 500 ROWS FETCH 100 ROWS ONLY - skips 500 records and returns the 100 records (rows 501-600) FETCH FIRST ROW ONLY - returns only the first record 2.6.17. 
INTO Clause Warning Usage of the INTO Clause for inserting into a table has been deprecated. An INSERT with a query command should be used instead. Refer to Section 2.5.3, "INSERT Command" . 2.6.18. OPTION Clause The OPTION keyword denotes options the user can pass in with the command. These options are specific to JBoss Data Virtualization and not covered by any SQL specification. Usage: Supported options: MAKEDEP table [(,table)*] - specifies source tables that will be made dependent in the join MAKENOTDEP table [(,table)*] - prevents a dependent join from being used NOCACHE [table (,table)*] - prevents cache from being used for all tables or for the given tables Examples: OPTION MAKEDEP table1 OPTION NOCACHE All tables specified in the OPTION clause should be fully qualified; however, the name may match either an alias name or the fully qualified name. Note Previous versions of JBoss Data Virtualization accepted the PLANONLY, DEBUG, and SHOWPLAN option arguments. These are no longer accepted in the OPTION clause. See Red Hat JBoss Data Virtualization Development Guide: Client Development for replacements to those options. | [
"WITH name [(column, ...)] AS (query expression)",
"WITH name [(column, ...)] AS (anchor query expression UNION [ALL] recursive query expression)",
"SELECT teiid_session_set('teiid.maxRecursion', 25); WITH n (x) AS (values('a') UNION select chr(ascii(x)+1) from n where x < 'z') select * from n",
"SELECT [DISTINCT|ALL] ((expression [[AS] name])|(group identifier.STAR))*|STAR",
"FROM /*+ MAKEDEP PRESERVE */ (tbl1 inner join tbl2 inner join tbl3 on tbl2.col1 = tbl3.col1 on tbl1.col1 = tbl2.col1), tbl3 WHERE tbl1.col1 = tbl2.col1",
"FROM /*+ PRESERVE */ (tbl1 inner join tbl2 inner join tbl3 on tbl2.col1 = tbl3.col1 on tbl1.col1 = tbl2.col1)",
"select * from t1, TABLE(call proc(t1.x)) t2",
"select * from TABLE(call proc(t1.x)) t2, t1",
"TEXTTABLE(expression [SELECTOR string] COLUMNS <COLUMN>, ... [NO ROW DELIMITER] [DELIMITER char] [(QUOTE|ESCAPE) char] [HEADER [integer]] [SKIP integer]) AS name",
"COLUMN := name (FOR ORDINALITY | ([HEADER string] datatype [WIDTH integer [NO TRIM]] [SELECTOR string integer]))",
"SELECT * FROM TEXTTABLE(UNESCAPE('col1,col2,col3\\na,b,c') COLUMNS col2 string HEADER) x",
"SELECT * FROM TEXTTABLE(UNESCAPE('abc\\ndef') COLUMNS col1 string width 1, col2 string width 1, col3 string width 1) x",
"SELECT * FROM TEXTTABLE('abc' COLUMNS col1 string width 1 NO ROW DELIMITER) x",
"SELECT * FROM TEXTTABLE('a:,,b' COLUMNS col1 string, col2 string ESCAPE ':') x",
"SELECT x.* FROM t, TEXTTABLE(t.clobcolumn COLUMNS first string, second date SKIP 1) x",
"SELECT * FROM TEXTTABLE('a,b\\nc,d\\nc,f' SELECTOR 'c' COLUMNS col1 string, col2 string col3 string SELECTOR 'a' 2) x",
"XMLTABLE([<NSP>,] xquery-expression [<PASSING>] [COLUMNS <COLUMN>, ... )] AS name",
"COLUMN := name (FOR ORDINALITY | (datatype [DEFAULT expression] [PATH string]))",
"select * from xmltable('/a' PASSING xmlparse(document '<a id=\"1\"/>') COLUMNS id integer PATH '@id') x",
"select x.* from t, xmltable('/x/y' PASSING t.doc COLUMNS first string, second FOR ORDINALITY) x",
"select x.* from (call source.invokeMDX('some query')) r, arraytable(r.tuple COLUMNS first string, second bigdecimal) x",
"ARRAYTABLE(val COLUMNS col1 string, col2 integer) AS X",
"TABLE(SELECT cast(array_get(val, 1) AS string) AS col1, cast(array_get(val, 2) AS integer) AS col2) AS X",
"OBJECTTABLE([LANGUAGE lang] rowScript [PASSING val AS name ...] COLUMNS colName colType colScript [DEFAULT defaultExpr] ...) AS id",
"SELECT x.* FROM OBJECTTABLE('teiid_context' COLUMNS \"user\" string 'teiid_row.userName', row_number integer 'teiid_row_number') AS x",
"teiid_context.session.vdb.description",
"WHERE criteria",
"GROUP BY expression (,expression)*",
"SELECT country, city, sum(amount) from sales group by country, city",
"SELECT country, city, sum(amount) from sales group by rollup(country, city)",
"ORDER BY expression [ASC|DESC] [NULLS (FIRST|LAST)],",
"LIMIT [offset,] limit",
"[OFFSET offset ROW|ROWS] [FETCH FIRST|NEXT [limit] ROW|ROWS ONLY",
"OPTION option, (option)*"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_3_reference_material/sect-dml_clauses |
Appendix B. Revision History | Appendix B. Revision History Revision History Revision 1.0-5.400 2013-10-31 Rudiger Landmann Rebuild with publican 4.0.0 Revision 1.0-5 2012-07-18 Anthony Towns Rebuild for Publican 3.0 Revision 1.0-0 Fri Apr 24 2009 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/appe-Publican-Revision_History |
Chapter 9. Storage | Chapter 9. Storage LVM support for (non-clustered) thinly-provisioned snapshots A new implementation of LVM copy-on-write (cow) snapshots is available in Red Hat Enterprise Linux 6.3 as a Technology Preview. The main advantage of this implementation, compared to the implementation of snapshots, is that it allows many virtual devices to be stored on the same data volume. This implementation also provides support for arbitrary depth of recursive snapshots (snapshots of snapshots of snapshots ...). This feature is for use on a single-system. It is not available for multi-system access in cluster environments. For more information, refer to documentation of the -s/--snapshot option in the lvcreate man page. LVM support for (non-clustered) thinly-provisioned LVs Logical Volumes (LVs) can now be thinly provisioned to manage a storage pool of free space to be allocated to an arbitrary number of devices when needed by applications. This allows creation of devices that can be bound to a thinly provisioned pool for late allocation when an application actually writes to the LV. The thinly-provisioned pool can be expanded dynamically if and when needed for cost-effective allocation of storage space. In Red Hat Enterprise Linux 6.3, this feature is introduced as a Technology Preview. You must have the device-mapper-persistent-data package installed to try out this feature. For more information, refer to the lvcreate man page. Dynamic aggregation of LVM metadata via lvmetad Most LVM commands require an accurate view of the LVM metadata stored on the disk devices on the system. With the current LVM design, if this information is not available, LVM must scan all the physical disk devices in the system. This requires a significant amount of I/O operations in systems that have a large number of disks. The purpose of the lvmetad daemon is to eliminate the need for this scanning by dynamically aggregating metadata information each time the status of a device changes. These events are signaled to lvmetad by udev rules. If lvmetad is not running, LVM performs a scan as it normally would. This feature is provided as a Technology Preview and is disabled by default in Red Hat Enterprise Linux 6.3. To enable it, refer to the use_lvmetad parameter in the /etc/lvm/lvm.conf file, and enable the lvmetad daemon by configuring the lvm2-lvmetad init script. Fiber Channel over Ethernet (FCoE) target mode fully supported Fiber Channel over Ethernet (FCoE) target mode is fully supported in Red Hat Enterprise Linux 6.3. This kernel feature is configurable via the targetcli utility, supplied by the fcoe-target-utils package. FCoE is designed to be used on a network supporting Data Center Bridging (DCB). Further details are available in the dcbtool(8) and targetcli(8) man pages (provided by the lldpad and fcoe-target-utils packages, respectively). LVM RAID fully supported with the exception of RAID logical volumes in HA-LVM The expanded RAID support in LVM is now fully supported in Red Hat Enterprise Linux 6.3. LVM is now capable of creating RAID 4/5/6 logical volumes and supports mirroring of these logical volumes. The MD (software RAID) modules provide the backend support for these new features. Activating volumes in read-only mode A new LVM configuration file parameter, activation/read_only_volume_list , makes it possible to always activate particular volumes in read-only mode, regardless of the actual permissions on the volumes concerned. 
This parameter overrides the --permission rw option stored in the metadata. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_release_notes/storage |
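As a minimal sketch of the thinly-provisioned logical volume preview described above, and assuming an existing volume group named vg00, the first command creates a 10 GB thin pool and the second creates a thin volume whose virtual size may exceed the physical size of the pool:
lvcreate -L 10G -T vg00/pool0
lvcreate -V 50G -T vg00/pool0 -n thinvol
Refer to the lvcreate man page mentioned above for the full option descriptions.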
Chapter 40. MapStruct | Chapter 40. MapStruct Since Camel 3.19 Only producer is supported . The camel-mapstruct component is used for converting POJOs using . 40.1. URI format Where className is the fully qualified class name of the POJO to convert to. 40.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 40.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 40.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 40.3. Component Options The MapStruct component supports 4 options, which are listed below. Name Description Default Type lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean mapperPackageName (producer) Required Package name(s) where Camel should discover Mapstruct mapping classes. Multiple package names can be separated by comma. String autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean mapStructConverter (advanced) Autowired To use a custom MapStructConverter such as adapting to a special runtime. MapStructMapperFinder 40.4. Endpoint Options The MapStruct endpoint is configured using URI syntax: with the following path and query parameters: 40.4.1. 
Path Parameters (1 parameters) Name Description Default Type className (producer) Required The fully qualified class name of the POJO that mapstruct should convert to (target). String 40.4.2. Query Parameters (2 parameters) Name Description Default Type mandatory (producer) Whether there must exist a mapstruct converter to convert to the POJO. true boolean lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean 40.5. Setting up MapStruct The camel-mapstruct component must be configured with one or more package names, for classpath scanning MapStruct Mapper classes. This is needed because the Mapper classes are to be used for converting POJOs with MapStruct. For example, to set up two packages you can do as following: MapstructComponent mc = context.getComponent("mapstruct", MapstructComponent.class); mc.setMapperPackageName("com.foo.mapper,com.bar.mapper"); This can also be configured in application.properties : camel.component.mapstruct.mapper-package-name = com.foo.mapper,com.bar.mapper Camel will on startup scan these packages for classes which names ends with Mapper . These classes are then introspected to discover the mapping methods. These mapping methods are then registered into the Camel registry. This means that you can also use type converter to convert the POJOs with MapStruct, such as: from("direct:foo") .convertBodyTo(MyFooDto.class); Where MyFooDto is a POJO that MapStruct is able to convert to/from. 40.6. Spring Boot Auto-Configuration When using mapstruct with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mapstruct-starter</artifactId> </dependency> The component supports 5 options, which are listed below. Name Description Default Type camel.component.mapstruct.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mapstruct.enabled Whether to enable auto configuration of the mapstruct component. This is enabled by default. Boolean camel.component.mapstruct.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.mapstruct.map-struct-converter To use a custom MapStructConverter such as adapting to a special runtime. The option is a org.apache.camel.component.mapstruct.MapStructMapperFinder type. MapStructMapperFinder camel.component.mapstruct.mapper-package-name Package name(s) where Camel should discover Mapstruct mapping classes. Multiple package names can be separated by comma. String | [
"mapstruct:className[?options]",
"mapstruct:className",
"MapstructComponent mc = context.getComponent(\"mapstruct\", MapstructComponent.class); mc.setMapperPackageName(\"com.foo.mapper,com.bar.mapper\");",
"camel.component.mapstruct.mapper-package-name = com.foo.mapper,com.bar.mapper",
"from(\"direct:foo\") .convertBodyTo(MyFooDto.class);",
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mapstruct-starter</artifactId> </dependency>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-mapstruct-component-starter |
Chapter 1. Red Hat Quay builds overview | Chapter 1. Red Hat Quay builds overview Red Hat Quay builds , or just builds , are a feature that enables the automation of container image builds. The builds feature uses worker nodes to build images from Dockerfiles or other build specifications. These builds can be triggered manually or automatically via webhooks from repositories like GitHub, allowing users to integrate continuous integration (CI) and continuous delivery (CD) pipelines into their workflow. The builds feature is supported on Red Hat Quay on OpenShift Container Platform and Kubernetes clusters. For Operator-based deployments and Kubernetes clusters, builds are created by using a build manager that coordinates and handles the build jobs. Builds support building Dockerfiles on both bare metal platforms and on virtualized platforms with virtual builders . This versatility allows organizations to adapt to existing infrastructure while leveraging Red Hat Quay's container image build capabilities. The key features of the Red Hat Quay builds feature include: Automated builds triggered by code commits or version control events Support for Docker and Podman container images Fine-grained control over build environments and resources Integration with Kubernetes and OpenShift Container Platform for scalable builds Compatibility with bare metal and virtualized infrastructure Note Running builds directly in a container on bare metal platforms does not have the same isolation as when using virtual machines; however, it still provides good protection. Builds are highly complex, and administrators are encouraged to review the Build automation architecture guide before continuing. 1.1. Building container images Building container images involves creating a blueprint for a containerized application. Blueprints rely on base images from other public repositories that define how the application should be installed and configured. Red Hat Quay supports the ability to build Docker and Podman container images. This functionality is valuable for developers and organizations who rely on containers and container orchestration. 1.1.1. Build contexts When building an image with Docker or Podman, a directory is specified to become the build context . This is true for both manual Builds and Build triggers, because the Build that is created by Red Hat Quay is no different from running docker build or podman build on your local machine. Build contexts are always specified in the subdirectory from the Build setup, and fall back to the root of the Build source if a directory is not specified. When a build is triggered, Build workers clone the Git repository to the worker machine, and then enter the Build context before conducting a Build. For Builds based on .tar archives, Build workers extract the archive and enter the Build context. For example: Extracted Build archive example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile Imagine that the Extracted Build archive is the directory structure of a GitHub repository called example. If no subdirectory is specified in the Build trigger setup, or when manually starting the Build, the Build operates in the example directory. If a subdirectory is specified in the Build trigger setup, for example, subdir , only the Dockerfile within it is visible to the Build. This means that you cannot use the ADD command in the Dockerfile to add file , because it is outside of the Build context. 
Unlike Docker Hub, the Dockerfile is part of the Build context on Red Hat Quay. As a result, it must not appear in the .dockerignore file. | [
"example ├── .git ├── Dockerfile ├── file └── subdir └── Dockerfile"
] | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/builders_and_image_automation/builds-overview |
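To make the build-context behavior described in the Red Hat Quay section above easier to try out, the following is a minimal local sketch that uses podman directly. The directory layout, image tags, and file names mirror the hypothetical example repository shown above; none of this comes from the Red Hat Quay documentation itself.

# Assumed layout, matching the example above:
#   example/
#   ├── Dockerfile
#   ├── file
#   └── subdir/
#       └── Dockerfile
cd example

# Build context = repository root: "file" is visible to ADD/COPY in the top-level Dockerfile.
podman build -t example-root -f Dockerfile .

# Build context = subdir: only subdir/Dockerfile and its contents are visible, so an
# "ADD file /file" instruction inside subdir/Dockerfile fails because "file" is outside the context.
podman build -t example-subdir subdir

The same visibility rule applies when Red Hat Quay build workers enter a configured subdirectory before running a build.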
Chapter 4. Adding Fuse users to JBoss EAP | Chapter 4. Adding Fuse users to JBoss EAP Run the JBoss EAP add-user script to add Fuse users to JBoss EAP. Prerequisites JBoss EAP is running. Procedure Navigate to EAP_HOME/bin . Run the add-user script. For example: ./add-user.sh Respond to the prompts to create a new user: Management User is a Fuse administrative user on JBoss EAP. Application User is a Fuse non-administrative user on JBoss EAP. | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/installing_on_jboss_eap/adding-users-to-jboss-eap |
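If you prefer to script user creation rather than answer the interactive prompts, the add-user script also accepts command-line arguments. The user names, passwords, and group names below are placeholders, and supported options can vary between JBoss EAP versions, so treat this as a sketch and confirm the flags with ./add-user.sh --help before relying on it.

cd EAP_HOME/bin    # replace EAP_HOME with your JBoss EAP installation directory

# Management user (Fuse administrative user on JBoss EAP), placeholder credentials.
./add-user.sh -u fuseadmin -p 'ChangeMe#2024' -g admin

# Application user (Fuse non-administrative user); the -a option targets the application realm.
./add-user.sh -a -u fuseuser -p 'ChangeMe#2024' -g user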
Configuring Red Hat build of OpenJDK 17 on RHEL with FIPS | Configuring Red Hat build of OpenJDK 17 on RHEL with FIPS Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/configuring_red_hat_build_of_openjdk_17_on_rhel_with_fips/index |
10.13. Debug Logging for Distributed Lock Manager (DLM) Needs to be Enabled | 10.13. Debug Logging for Distributed Lock Manager (DLM) Needs to be Enabled There are two debug options for the Distributed Lock Manager (DLM) that you can enable, if necessary: DLM kernel debugging, and POSIX lock debugging. To enable DLM debugging, edit the /etc/cluster/cluster.conf file to add configuration options to the dlm tag. The log_debug option enables DLM kernel debugging messages, and the plock_debug option enables POSIX lock debugging messages. The following example section of a /etc/cluster/cluster.conf file shows the dlm tag that enables both DLM debug options: After editing the /etc/cluster/cluster.conf file, run the cman_tool version -r command to propagate the configuration to the rest of the cluster nodes. | [
"<cluster config_version=\"42\" name=\"cluster1\"> <dlm log_debug=\"1\" plock_debug=\"1\"/> </cluster>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-dlm-debug-ca |
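As a rough outline of the DLM debug workflow above (the editor, the config_version bump, and the verification step are illustrative rather than prescribed), the change is made on one node and then propagated:

# 1. Edit the cluster configuration on one node: increment config_version and add the dlm tag, for example:
#    <cluster config_version="43" name="cluster1">
#      <dlm log_debug="1" plock_debug="1"/>
#      ...
#    </cluster>
vi /etc/cluster/cluster.conf

# 2. Propagate the updated configuration to the rest of the cluster nodes.
cman_tool version -r

# 3. Optionally confirm that the new configuration version is active.
cman_tool version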
Chapter 2. Installing .NET 6.0 | Chapter 2. Installing .NET 6.0 .NET 6.0 is included in the AppStream repositories for RHEL 9. The AppStream repositories are enabled by default on RHEL 9 systems. You can install the .NET 6.0 runtime with the latest 6.0 Software Development Kit (SDK). When a newer SDK becomes available for .NET 6.0, you can install it by running sudo yum upgrade . Prerequisites Installed and registered RHEL 9 with attached subscriptions. For more information, see Performing a standard RHEL installation . Procedure Install .NET 6.0 and all of its dependencies: Verification steps Verify the installation: The output returns the relevant information about the .NET installation and the environment. | [
"sudo yum install dotnet-sdk-6.0 -y",
"dotnet --info"
] | https://docs.redhat.com/en/documentation/net/6.0/html/getting_started_with_.net_on_rhel_9/installing-dotnet_getting-started-with-dotnet-on-rhel-9 |
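In addition to dotnet --info, a quick end-to-end check is to scaffold and run a throwaway console application. The project name below is arbitrary and the step is optional.

# Create, run, and remove a temporary console project.
dotnet new console -o hello-dotnet
cd hello-dotnet
dotnet run          # prints "Hello, World!" when the SDK and runtime are working
cd .. && rm -rf hello-dotnet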
Chapter 28. Kubernetes NMState | Chapter 28. Kubernetes NMState 28.1. About the Kubernetes NMState Operator The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster's nodes with NMState. The Kubernetes NMState Operator provides users with functionality to configure various network interface types, DNS, and routing on cluster nodes. Additionally, the daemons on the cluster nodes periodically report on the state of each node's network interfaces to the API server. Important Red Hat supports the Kubernetes NMState Operator in production environments on bare-metal, IBM Power(R), IBM Z(R), IBM(R) LinuxONE, VMware vSphere, and OpenStack installations. Before you can use NMState with OpenShift Container Platform, you must install the Kubernetes NMState Operator. Note The Kubernetes NMState Operator updates the network configuration of a secondary NIC. It cannot update the network configuration of the primary NIC or the br-ex bridge. OpenShift Container Platform uses nmstate to report on and configure the state of the node network. This makes it possible to modify the network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster. Node networking is monitored and updated by the following objects: NodeNetworkState Reports the state of the network on that node. NodeNetworkConfigurationPolicy Describes the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a NodeNetworkConfigurationPolicy manifest to the cluster. NodeNetworkConfigurationEnactment Reports the network policies enacted upon each node. 28.1.1. Installing the Kubernetes NMState Operator You can install the Kubernetes NMState Operator by using the web console or the CLI. 28.1.1.1. Installing the Kubernetes NMState Operator by using the web console You can install the Kubernetes NMState Operator by using the web console. After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. Prerequisites You are logged in as a user with cluster-admin privileges. Procedure Select Operators OperatorHub . In the search field below All Items , enter nmstate and click Enter to search for the Kubernetes NMState Operator. Click on the Kubernetes NMState Operator search result. Click on Install to open the Install Operator window. Click Install to install the Operator. After the Operator finishes installing, click View Operator . Under Provided APIs , click Create Instance to open the dialog box for creating an instance of kubernetes-nmstate . In the Name field of the dialog box, ensure the name of the instance is nmstate. Note The name restriction is a known issue. The instance is a singleton for the entire cluster. Accept the default settings and click Create to create the instance. Summary Once complete, the Operator has deployed the NMState State Controller as a daemon set across all of the cluster nodes. 28.1.1.2. Installing the Kubernetes NMState Operator by using the CLI You can install the Kubernetes NMState Operator by using the OpenShift CLI ( oc) . After it is installed, the Operator can deploy the NMState State Controller as a daemon set across all of the cluster nodes. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. 
Procedure Create the nmstate Operator namespace: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate spec: finalizers: - kubernetes EOF Create the OperatorGroup : USD cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate EOF Subscribe to the nmstate Operator: USD cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: stable installPlanApproval: Automatic name: kubernetes-nmstate-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF Confirm the ClusterServiceVersion (CSV) status for the nmstate Operator deployment equals Succeeded : USD oc get clusterserviceversion -n openshift-nmstate \ -o custom-columns=Name:.metadata.name,Phase:.status.phase Example output Name Phase kubernetes-nmstate-operator.4.14.0-202210210157 Succeeded Create an instance of the nmstate Operator: USD cat << EOF | oc apply -f - apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate EOF Verify that all pods for the NMState Operator are in a Running state: USD oc get pod -n openshift-nmstate Example output Name Ready Status Restarts Age pod/nmstate-handler-wn55p 1/1 Running 0 77s pod/nmstate-operator-f6bb869b6-v5m92 1/1 Running 0 4m51s ... 28.1.2. Uninstalling the Kubernetes NMState Operator You can use the Operator Lifecycle Manager (OLM) to uninstall the Kubernetes NMState Operator, but by design OLM does not delete any associated custom resource definitions (CRDs), custom resources (CRs), or API Services. Before you uninstall the Kubernetes NMState Operator from the Subscription resource used by OLM, identify what Kubernetes NMState Operator resources to delete. This identification ensures that you can delete resources without impacting your running cluster. If you need to reinstall the Kubernetes NMState Operator, see "Installing the Kubernetes NMState Operator by using the CLI" or "Installing the Kubernetes NMState Operator by using the web console". Prerequisites You have installed the OpenShift CLI ( oc ). You have installed the jq CLI tool. You are logged in as a user with cluster-admin privileges. Procedure Unsubscribe the Kubernetes NMState Operator from the Subscription resource by running the following command: USD oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator Find the ClusterServiceVersion (CSV) resource that associates with the Kubernetes NMState Operator: USD oc get --namespace openshift-nmstate clusterserviceversion Example output that lists a CSV resource NAME DISPLAY VERSION REPLACES PHASE kubernetes-nmstate-operator.v4.18.0 Kubernetes NMState Operator 4.18.0 Succeeded Delete the CSV resource. After you delete the CSV resource, OLM deletes certain resources, such as RBAC , that it created for the Operator. USD oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0 Delete the nmstate CR and any associated Deployment resources by running the following commands: USD oc -n openshift-nmstate delete nmstate nmstate USD oc delete --all deployments --namespace=openshift-nmstate After you delete the nmstate CR, remove the nmstate-console-plugin console plugin name from the console.operator.openshift.io/cluster CR.
Store the position of the nmstate-console-plugin entry that exists among the list of enabled plugins by running the following command. The following command uses the jq CLI tool to store the index of the entry in an environment variable named INDEX : INDEX=USD(oc get console.operator.openshift.io cluster -o json | jq -r '.spec.plugins | to_entries[] | select(.value == "nmstate-console-plugin") | .key') Remove the nmstate-console-plugin entry from the console.operator.openshift.io/cluster CR by running the following patch command: USD oc patch console.operator.openshift.io cluster --type=json -p "[{\"op\": \"remove\", \"path\": \"/spec/plugins/USDINDEX\"}]" 1 1 INDEX is an auxiliary variable. You can specify a different name for this variable. Delete all the custom resource definitions (CRDs), such as nmstates.nmstate.io , by running the following commands: USD oc delete crd nmstates.nmstate.io USD oc delete crd nodenetworkconfigurationenactments.nmstate.io USD oc delete crd nodenetworkstates.nmstate.io USD oc delete crd nodenetworkconfigurationpolicies.nmstate.io Delete the namespace: USD oc delete namespace openshift-nmstate 28.2. Observing and updating the node network state and configuration 28.2.1. Viewing the network state of a node by using the CLI Node network state is the network configuration for all nodes in the cluster. A NodeNetworkState object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node. Procedure List all the NodeNetworkState objects in the cluster: USD oc get nns Inspect a NodeNetworkState object to view the network on that node. The output in this example has been redacted for clarity: USD oc get nns node01 -o yaml Example output apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: # ... interfaces: # ... route-rules: # ... routes: # ... lastSuccessfulUpdateTime: "2020-01-31T12:14:00Z" 3 1 The name of the NodeNetworkState object is taken from the node. 2 The currentState contains the complete network configuration for the node, including DNS, interfaces, and routes. 3 Timestamp of the last successful update. This is updated periodically as long as the node is reachable and can be used to evaluate the freshness of the report. 28.2.2. Viewing the network state of a node from the web console As an administrator, you can use the OpenShift Container Platform web console to observe NodeNetworkState resources and network interfaces, and access network details. Procedure Navigate to Networking NodeNetworkState . In the NodeNetworkState page, you can view the list of NodeNetworkState resources and the corresponding interfaces that are created on the nodes. You can use Filter based on Interface state , Interface type , and IP , or the search bar based on criteria Name or Label , to narrow down the displayed NodeNetworkState resources. To access the detailed information about a NodeNetworkState resource, click the NodeNetworkState resource name listed in the Name column . To expand and view the Network Details section for the NodeNetworkState resource, click the > icon . Alternatively, you can click on each interface type under the Network interface column to view the network details. 28.2.3. The NodeNetworkConfigurationPolicy manifest file A NodeNetworkConfigurationPolicy (NNCP) manifest file defines policies that the Kubernetes NMState Operator uses to configure networking for nodes that exist in an OpenShift Container Platform cluster.
After you apply a node network policy to a node, the Kubernetes NMState Operator creates an interface on the node. A node network policy includes your requested network configuration and the status of execution for the policy on the cluster as a whole. You can create an NNCP by using either the OpenShift CLI ( oc ) or the OpenShift Container Platform web console. As a postinstallation task you can create an NNCP or edit an existing NNCP. Note Before you create an NNCP, ensure that you read the "Example policy configurations for different interfaces" document. If you want to delete an NNCP, you can use the oc delete nncp command to complete this action. However, this command does not delete any created objects, such as a bridge interface. Deleting the node network policy that added an interface to a node does not change the configuration of the policy on the node. Similarly, removing an interface does not delete the policy, because the Kubernetes NMState Operator recreates the removed interface whenever a pod or a node is restarted. To effectively delete the NNCP, the node network policy, and any created interfaces would typically require the following actions: Edit the NNCP and remove interface details from the file. Ensure that you do not remove name , state , and type parameters from the file. Add state: absent under the interfaces.state section of the NNCP. Run oc apply -f <nncp_file_name> . After the Kubernetes NMState Operator applies the node network policy to each node in your cluster, the interface that was previously created on each node is now marked absent . Run oc delete nncp to delete the NNCP. Additional resources Example policy configurations for different interfaces Removing an interface from nodes 28.2.4. Managing policy from the web console You can update the node network configuration, such as adding or removing interfaces from nodes, by applying NodeNetworkConfigurationPolicy manifests to the cluster. Manage the policy from the web console by accessing the list of created policies in the NodeNetworkConfigurationPolicy page under the Networking menu. This page enables you to create, update, monitor, and delete the policies. 28.2.4.1. Monitoring the policy status You can monitor the policy status from the NodeNetworkConfigurationPolicy page. This page displays all the policies created in the cluster in a tabular format, with the following columns: Name The name of the policy created. Matched nodes The count of nodes where the policies are applied. This could be either a subset of nodes based on the node selector or all the nodes on the cluster. Node network state The enactment state of the matched nodes. You can click on the enactment state and view detailed information on the status. To find the desired policy, you can filter the list either based on enactment state by using the Filter option, or by using the search option. 28.2.4.2. Creating a policy You can create a policy by using either a form or YAML in the web console. Procedure Navigate to Networking NodeNetworkConfigurationPolicy . In the NodeNetworkConfigurationPolicy page, click Create , and select From Form option. In case there are no existing policies, you can alternatively click Create NodeNetworkConfigurationPolicy to createa policy using form. Note To create policy using YAML, click Create , and select With YAML option. The following steps are applicable to create a policy only by using form. 
Optional: Check the Apply this NodeNetworkConfigurationPolicy only to specific subsets of nodes using the node selector checkbox to specify the nodes where the policy must be applied. Enter the policy name in the Policy name field. Optional: Enter the description of the policy in the Description field. Optional: In the Policy Interface(s) section, a bridge interface is added by default with preset values in editable fields. Edit the values by executing the following steps: Enter the name of the interface in Interface name field. Select the network state from Network state dropdown. The default selected value is Up . Select the type of interface from Type dropdown. The available values are Bridge , Bonding , and Ethernet . The default selected value is Bridge . Note Addition of a VLAN interface by using the form is not supported. To add a VLAN interface, you must use YAML to create the policy. Once added, you cannot edit the policy by using form. Optional: In the IP configuration section, check IPv4 checkbox to assign an IPv4 address to the interface, and configure the IP address assignment details: Click IP address to configure the interface with a static IP address, or DHCP to auto-assign an IP address. If you have selected IP address option, enter the IPv4 address in IPV4 address field, and enter the prefix length in Prefix length field. If you have selected DHCP option, uncheck the options that you want to disable. The available options are Auto-DNS , Auto-routes , and Auto-gateway . All the options are selected by default. Optional: Enter the port number in Port field. Optional: Check the checkbox Enable STP to enable STP. Optional: To add an interface to the policy, click Add another interface to the policy . Optional: To remove an interface from the policy, click icon to the interface. Note Alternatively, you can click Edit YAML on the top of the page to continue editing the form using YAML. Click Create to complete policy creation. 28.2.4.3. Updating the policy 28.2.4.3.1. Updating the policy by using form Procedure Navigate to Networking NodeNetworkConfigurationPolicy . In the NodeNetworkConfigurationPolicy page, click the icon placed to the policy you want to edit, and click Edit . Edit the fields that you want to update. Click Save . Note Addition of a VLAN interface using the form is not supported. To add a VLAN interface, you must use YAML to create the policy. Once added, you cannot edit the policy using form. 28.2.4.3.2. Updating the policy by using YAML Procedure Navigate to Networking NodeNetworkConfigurationPolicy . In the NodeNetworkConfigurationPolicy page, click the policy name under the Name column for the policy you want to edit. Click the YAML tab, and edit the YAML. Click Save . 28.2.4.4. Deleting the policy Procedure Navigate to Networking NodeNetworkConfigurationPolicy . In the NodeNetworkConfigurationPolicy page, click the icon placed to the policy you want to delete, and click Delete . In the pop-up window, enter the policy name to confirm deletion, and click Delete . 28.2.5. Managing policy by using the CLI 28.2.5.1. Creating an interface on nodes Create an interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The manifest details the requested configuration for the interface. By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the spec: nodeSelector parameter and the appropriate <key>:<value> for your node selector. 
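As a brief sketch of the nodeSelector scoping described above (the policy name, node label, and NIC name are placeholders, and the full Linux bridge example follows below), a policy can be limited to a single labeled node like this:

cat << EOF | oc apply -f -
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: eth1-node01-only            # placeholder policy name
spec:
  nodeSelector:
    kubernetes.io/hostname: node01  # placeholder <key>: <value>; only matching nodes are configured
  desiredState:
    interfaces:
    - name: eth1                    # placeholder NIC name
      type: ethernet
      state: up
      ipv4:
        dhcp: true
        enabled: true
EOF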
You can configure multiple nmstate-enabled nodes concurrently. The configuration applies to 50% of the nodes in parallel. This strategy prevents the entire cluster from being unavailable if the network connection fails. To apply the policy configuration in parallel to a specific portion of the cluster, use the maxUnavailable field. Procedure Create the NodeNetworkConfigurationPolicy manifest. The following example configures a Linux bridge on all worker nodes and configures the DNS resolver: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 4 Optional: Specifies the maximum number of nmstate-enabled nodes that the policy configuration can be applied to concurrently. This parameter can be set to either a percentage value (string), for example, "10%" , or an absolute value (number), such as 3 . 5 Optional: Human-readable description for the interface. 6 Optional: Specifies the search and server settings for the DNS server. Create the node network policy: USD oc apply -f br1-eth1-policy.yaml 1 1 File name of the node network configuration policy manifest. Additional resources Example for creating multiple interfaces in the same policy Examples of different IP management methods in policies 28.2.5.2. Confirming node network policy updates on nodes When you apply a node network policy, a NodeNetworkConfigurationEnactment object is created for every node in the cluster. The node network configuration enactment is a read-only object that represents the status of execution of the policy on that node. If the policy fails to be applied on the node, the enactment for that node includes a traceback for troubleshooting. Procedure To confirm that a policy has been applied to the cluster, list the policies and their status: USD oc get nncp Optional: If a policy is taking longer than expected to successfully configure, you can inspect the requested state and status conditions of a particular policy: USD oc get nncp <policy> -o yaml Optional: If a policy is taking longer than expected to successfully configure on all nodes, you can list the status of the enactments on the cluster: USD oc get nnce Optional: To view the configuration of a particular enactment, including any error reporting for a failed configuration: USD oc get nnce <node>.<policy> -o yaml 28.2.5.3. Removing an interface from nodes You can remove an interface from one or more nodes in the cluster by editing the NodeNetworkConfigurationPolicy object and setting the state of the interface to absent . Removing an interface from a node does not automatically restore the node network configuration to a state. If you want to restore the state, you will need to define that node network configuration in the policy. 
If you remove a bridge or bonding interface, any node NICs in the cluster that were previously attached or subordinate to that bridge or bonding interface are placed in a down state and become unreachable. To avoid losing connectivity, configure the node NIC in the same policy so that it has a status of up and either DHCP or a static IP address. Note Deleting the node network policy that added an interface does not change the configuration of the policy on the node. Although a NodeNetworkConfigurationPolicy is an object in the cluster, the object only represents the requested configuration. Similarly, removing an interface does not delete the policy. Procedure Update the NodeNetworkConfigurationPolicy manifest used to create the interface. The following example removes a Linux bridge and configures the eth1 NIC with DHCP to avoid losing connectivity: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 4 Changing the state to absent removes the interface. 5 The name of the interface that is to be unattached from the bridge interface. 6 The type of interface. This example creates an Ethernet networking interface. 7 The requested state for the interface. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. Update the policy on the node and remove the interface: USD oc apply -f <br1-eth1-policy.yaml> 1 1 File name of the policy manifest. 28.2.6. Example policy configurations for different interfaces Before you read the different example NodeNetworkConfigurationPolicy (NNCP) manifest configurations, consider the following factors when you apply a policy to nodes so that your cluster runs under its best performance conditions: When you need to apply a policy to more than one node, create a NodeNetworkConfigurationPolicy manifest for each target node. The Kubernetes NMState Operator applies the policy to each node with a defined NNCP in an unspecified order. Scoping a policy with this approach reduces the length of time for policy application but risks a cluster-wide outage if an error exists in the cluster's configuration. To avoid this type of error, initially apply an NNCP to some nodes, confirm the NNCP is configured correctly for these nodes, and then proceed with applying the policy to the remaining nodes. When you need to apply a policy to many nodes but you only want to create a single NNCP for all the nodes, the Kubernetes NMState Operator applies the policy to each node in sequence. You can set the speed and coverage of policy application for target nodes with the maxUnavailable parameter in the cluster's configuration file. By setting a lower percentage value for the parameter, you can reduce the risk of a cluster-wide outage if the outage impacts the small percentage of nodes that are receiving the policy application. Consider specifying all related network configurations in a single policy. 
When a node restarts, the Kubernetes NMState Operator cannot control the order to which it applies policies to nodes. The Kubernetes NMState Operator might apply interdependent policies in a sequence that results in a degraded network object. 28.2.6.1. Example: Linux bridge interface node network configuration policy Create a Linux bridge interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. It includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a bridge. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 10 Disables stp in this example. 11 The node NIC to which the bridge attaches. 28.2.6.2. Example: VLAN interface node network configuration policy Create a VLAN interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. Note Define all related configurations for the VLAN interface of a node in a single NodeNetworkConfigurationPolicy manifest. For example, define the VLAN interface for a node and the related routes for the VLAN interface in the same NodeNetworkConfigurationPolicy manifest. When a node restarts, the Kubernetes NMState Operator cannot control the order in which policies are applied. Therefore, if you use separate policies for related network configurations, the Kubernetes NMState Operator might apply these policies in a sequence that results in a degraded network object. The following YAML file is an example of a manifest for a VLAN interface. It includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. When deploying on bare metal, only the <interface_name>.<vlan_number> VLAN format is supported. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a VLAN. 7 The requested state for the interface after creation. 8 The node NIC to which the VLAN is attached. 9 The VLAN tag. 28.2.6.3. 
Example: Node network configuration policy for virtual functions (Technology Preview) Update host network settings for Single Root I/O Virtualization (SR-IOV) network virtual functions (VF) in an existing cluster by applying a NodeNetworkConfigurationPolicy manifest. Important Updating host network settings for SR-IOV network VFs is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can apply a NodeNetworkConfigurationPolicy manifest to an existing cluster to complete the following tasks: Configure QoS or MTU host network settings for VFs to optimize performance. Add, remove, or update VFs for a network interface. Manage VF bonding configurations. Note To update host network settings for SR-IOV VFs by using NMState on physical functions that are also managed through the SR-IOV Network Operator, you must set the externallyManaged parameter in the relevant SriovNetworkNodePolicy resource to true . For more information, see the Additional resources section. The following YAML file is an example of a manifest that defines QoS policies for a VF. This file includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: qos 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 desiredState: interfaces: - name: ens1f0 4 description: Change QOS on VF0 5 type: ethernet 6 state: up 7 ethernet: sr-iov: total-vfs: 3 8 vfs: - id: 0 9 max-tx-rate: 200 10 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example applies to all nodes with the worker role. 4 Name of the physical function (PF) network interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. 7 The requested state for the interface after configuration. 8 The total number of VFs. 9 Identifies the VF with an ID of 0 . 10 Sets a maximum transmission rate, in Mbps, for the VF. This sample value sets a rate of 200 Mbps. The following YAML file is an example of a manifest that creates a VLAN interface on top of a VF and adds it to a bonded network interface. It includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: addvf 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" 3 maxUnavailable: 3 desiredState: interfaces: - name: ens1f0v1 4 type: ethernet state: up - name: ens1f0v1.477 5 type: vlan state: up vlan: base-iface: ens1f0v1 6 id: 477 - name: bond0 7 description: Add vf 8 type: bond 9 state: up 10 link-aggregation: mode: active-backup 11 options: primary: ens1f1v0.477 12 port: 13 - ens1f1v0.477 - ens1f0v0.477 - ens1f0v1.477 14 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example applies to all nodes with the worker role. 4 Name of the VF network interface. 5 Name of the VLAN network interface. 6 The VF network interface to which the VLAN interface is attached. 
7 Name of the bonding network interface. 8 Optional: Human-readable description of the interface. 9 The type of interface. 10 The requested state for the interface after configuration. 11 The bonding policy for the bond. 12 The primary attached bonding port. 13 The ports for the bonded network interface. 14 In this example, this VLAN network interface is added as an additional interface to the bonded network interface. Additional resources Configuring an SR-IOV network device 28.2.6.4. Example: Bond interface node network configuration policy Create a bond interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. Note OpenShift Container Platform only supports the following bond modes: mode=1 active-backup mode=2 balance-xor mode=4 802.3ad Other bond modes are not supported. The following YAML file is an example of a manifest for a bond interface. It includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates a bond. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 10 The driver mode for the bond. This example uses an active backup mode. 11 Optional: This example uses miimon to inspect the bond link every 140ms. 12 The subordinate node NICs in the bond. 13 Optional: The maximum transmission unit (MTU) for the bond. If not specified, this value is set to 1500 by default. 28.2.6.5. Example: Ethernet interface node network configuration policy Configure an Ethernet interface on nodes in the cluster by applying a NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses a hostname node selector. 4 Name of the interface. 5 Optional: Human-readable description of the interface. 6 The type of interface. This example creates an Ethernet networking interface. 7 The requested state for the interface after creation. 8 Optional: If you do not use dhcp , you can either set a static IP or leave the interface without an IP address. 9 Enables ipv4 in this example. 28.2.6.6. 
Example: Multiple interfaces in the same node network configuration policy You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest. The following example YAML file creates a bond that is named bond10 across two NICs and VLAN that is named bond10.103 that connects to the bond. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond-vlan 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond10 4 description: Bonding eth2 and eth3 5 type: bond 6 state: up 7 link-aggregation: mode: balance-xor 8 options: miimon: '140' 9 port: 10 - eth2 - eth3 - name: bond10.103 11 description: vlan using bond10 12 type: vlan 13 state: up 14 vlan: base-iface: bond10 15 id: 103 16 ipv4: dhcp: true 17 enabled: true 18 1 Name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. 3 This example uses hostname node selector. 4 11 Name of the interface. 5 12 Optional: Human-readable description of the interface. 6 13 The type of interface. 7 14 The requested state for the interface after creation. 8 The driver mode for the bond. 9 Optional: This example uses miimon to inspect the bond link every 140ms. 10 The subordinate node NICs in the bond. 15 The node NIC to which the VLAN is attached. 16 The VLAN tag. 17 Optional: If you do not use dhcp, you can either set a static IP or leave the interface without an IP address. 18 Enables ipv4 in this example. 28.2.6.7. Example: Network interface with a VRF instance node network configuration policy Associate a Virtual Routing and Forwarding (VRF) instance with a network interface by applying a NodeNetworkConfigurationPolicy custom resource (CR). Important Associating a VRF instance with a network interface is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By associating a VRF instance with a network interface, you can support traffic isolation, independent routing decisions, and the logical separation of network resources. In a bare-metal environment, you can announce load balancer services through interfaces belonging to a VRF instance by using MetalLB. For more information, see the Additional resources section. The following YAML file is an example of associating a VRF instance to a network interface. It includes samples values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: "true" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6 1 The name of the policy. 2 This example applies the policy to all nodes with the label vrf:true . 3 The name of the interface. 4 The type of interface. This example creates a VRF instance. 5 The node interface to which the VRF attaches. 6 The name of the route table ID for the VRF. 
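The following spot checks are an optional sketch rather than part of the documented procedure. They assume the vrfpolicy example above was applied, that its enactments report SuccessfullyConfigured, and that you can open a debug shell on a node; the node name worker-0 is a placeholder.

# Confirm the policy and its per-node enactments.
oc get nncp vrfpolicy
oc get nnce | grep vrfpolicy

# Inspect the VRF and its routing table directly on a node.
oc debug node/worker-0 -- chroot /host ip vrf show
oc debug node/worker-0 -- chroot /host ip route show table 2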
Additional resources About virtual routing and forwarding Exposing a service through a network VRF 28.2.7. Capturing the static IP of a NIC attached to a bridge Important Capturing the static IP of a NIC is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 28.2.7.1. Example: Linux bridge interface node network configuration policy to inherit static IP address from the NIC attached to the bridge Create a Linux bridge interface on nodes in the cluster and transfer the static IP configuration of the NIC to the bridge by applying a single NodeNetworkConfigurationPolicy manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information. apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: "" capture: eth1-nic: interfaces.name=="eth1" 3 eth1-routes: routes.running.-hop-interface=="eth1" br1-routes: capture.eth1-routes | routes.running.-hop-interface := "br1" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: "{{ capture.eth1-nic.interfaces.0.ipv4 }}" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: "{{ capture.br1-routes.routes.running }}" 1 The name of the policy. 2 Optional: If you do not include the nodeSelector parameter, the policy applies to all nodes in the cluster. This example uses the node-role.kubernetes.io/worker: "" node selector to select all worker nodes in the cluster. 3 The reference to the node NIC to which the bridge attaches. 4 The type of interface. This example creates a bridge. 5 The IP address of the bridge interface. This value matches the IP address of the NIC which is referenced by the spec.capture.eth1-nic entry. 6 The node NIC to which the bridge attaches. Additional resources The NMPolicy project - Policy syntax 28.2.8. Examples: IP management The following example configuration snippets show different methods of IP management. These examples use the ethernet interface type to simplify the example while showing the related context in the policy configuration. These IP management examples can be used with the other interface types. 28.2.8.1. Static The following snippet statically configures an IP address on the Ethernet interface: # ... interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true # ... 1 Replace this value with the static IP address for the interface. 28.2.8.2. No IP address The following snippet ensures that the interface has no IP address: # ... interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false # ... Important Always set the state parameter to up when you set both the ipv4.enabled and the ipv6.enabled parameter to false to disable an interface. 
If you set state: down with this configuration, the interface receives a DHCP IP address because of automatic DHCP assignment. 28.2.8.3. Dynamic host configuration The following snippet configures an Ethernet interface that uses a dynamic IP address, gateway address, and DNS: # ... interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true # ... The following snippet configures an Ethernet interface that uses a dynamic IP address but does not use a dynamic gateway address or DNS: # ... interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true # ... 28.2.8.4. DNS By default, the nmstate API stores DNS values globally as against storing them in a network interface. For certain situations, you must configure a network interface to store DNS values. Tip Setting a DNS configuration is comparable to modifying the /etc/resolv.conf file. To define a DNS configuration for a network interface, you must initially specify the dns-resolver section in the network interface's YAML configuration file. To apply an NNCP configuration to your network interface, you need to run the oc apply -f <nncp_file_name> command. Important You cannot use the br-ex bridge, an OVN-Kubernetes-managed Open vSwitch bridge, as the interface when configuring DNS resolvers unless you manually configured a customized br-ex bridge. For more information, see "Creating a manifest object that includes a customized br-ex bridge" in the Deploying installer-provisioned clusters on bare metal document or the Installing a user-provisioned cluster on bare metal document. The following example shows a default situation that stores DNS values globally: Configure a static DNS without a network interface. Note that when updating the /etc/resolv.conf file on a host node, you do not need to specify an interface, IPv4 or IPv6, in the NodeNetworkConfigurationPolicy (NNCP) manifest. Example of a DNS configuration for a network interface that globally stores DNS values apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-dns-testing spec: nodeSelector: kubernetes.io/hostname: <target_node> desiredState: dns-resolver: config: search: - example.com - example.org server: - 2001:db8:f::1 - 192.0.2.251 # ... Important You can specify DNS options under the dns-resolver.config section of your NNCP file as demonstrated in the following example: # ... desiredState: dns-resolver: config: search: options: - timeout:2 - attempts:3 # ... If you want to remove the DNS options from your network interface, apply the following configuration to your NNCP and then run the oc apply -f <nncp_file_name> command: # ... dns-resolver: config: {} interfaces: [] # ... The following examples show situations that require configuring a network interface to store DNS values: If you want to rank a static DNS name server over a dynamic DNS name server, define the interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration ( autoconf ) mechanism in the network interface YAML configuration file. Example configuration that adds 192.0.2.1 to DNS name servers retrieved from the DHCPv4 network protocol # ... dns-resolver: config: server: - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true # ... 
If you need to configure a network interface to store DNS values instead of adopting the default method, which uses the nmstate API to store DNS values globally, you can set static DNS values and static IP addresses in the network interface YAML file. Important Storing DNS values at the network interface level might cause name resolution issues after you attach the interface to network components, such as an Open vSwitch (OVS) bridge, a Linux bridge, or a bond. Example configuration that stores DNS values at the interface level # ... dns-resolver: config: search: - example.com - example.org server: - 2001:db8:1::d1 - 2001:db8:1::d2 - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 prefix-length: 24 dhcp: false enabled: true ipv6: address: - ip: 2001:db8:1::1 prefix-length: 64 dhcp: false enabled: true autoconf: false # ... If you want to set static DNS search domains and dynamic DNS name servers for your network interface, define the dynamic interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration ( autoconf ) mechanism in the network interface YAML configuration file. Example configuration that sets example.com and example.org static DNS search domains along with dynamic DNS name server settings # ... dns-resolver: config: search: - example.com - example.org server: [] interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true ipv6: enabled: true dhcp: true autoconf: true auto-dns: true # ... 28.2.8.5. Static routing The following snippet configures a static route and a static IP on interface eth1 . dns-resolver: config: # ... interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false enabled: true address: - ip: 192.0.2.251 1 prefix-length: 24 routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254 # ... 1 The static IP address for the Ethernet interface. 2 The next hop address for the node traffic. This must be in the same subnet as the IP address set for the Ethernet interface. Important You cannot use the OVN-Kubernetes br-ex bridge as the next hop interface when configuring a static route unless you manually configured a customized br-ex bridge. For more information, see "Creating a manifest object that includes a customized br-ex bridge" in the Deploying installer-provisioned clusters on bare metal document or the Installing a user-provisioned cluster on bare metal document. 28.3. Troubleshooting node network configuration If the node network configuration encounters an issue, the policy is automatically rolled back and the enactments report failure. This includes issues such as: The configuration fails to be applied on the host. The host loses connection to the default gateway. The host loses connection to the API server. 28.3.1. Troubleshooting an incorrect node network configuration policy configuration You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy. If you applied an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy. The example attempts to apply a Linux bridge policy to a cluster that has three control plane nodes and three compute nodes. The policy is not applied because the policy references the wrong interface. To find an error, you need to investigate the available NMState resources.
You can then update the policy with the correct configuration. Prerequisites You ensured that an ens01 interface does not exist on your Linux system. Procedure Create a policy on your cluster. The following example creates a simple bridge, br1 that has ens01 as its member: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01 # ... Apply the policy to your network interface: USD oc apply -f ens01-bridge-testfail.yaml Example output nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created Verify the status of the policy by running the following command: USD oc get nncp The output shows that the policy failed: Example output NAME STATUS ens01-bridge-testfail FailedToConfigure The policy status alone does not indicate if it failed on all nodes or a subset of nodes. List the node network configuration enactments to see if the policy was successful on any of the nodes. If the policy failed for only a subset of nodes, the output suggests that the problem is with a specific node configuration. If the policy failed on all nodes, the output suggests that the problem is with the policy. USD oc get nnce The output shows that the policy failed on all nodes: Example output NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure View one of the failed enactments. The following command uses the output tool jsonpath to filter the output: USD oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}' Example output [2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37 NmstateError: InvalidArgument: Controller interface br1 is holding unknown port ens01 The example shows the output from an InvalidArgument error that indicates that the ens01 is an unknown port. For this example, you might need to change the port configuration in the policy configuration file. To ensure that the policy is configured properly, view the network configuration for one or all of the nodes by requesting the NodeNetworkState object. The following command returns the network configuration for the control-plane-1 node: USD oc get nns control-plane-1 -o yaml The output shows that the interface name on the nodes is ens1 but the failed policy incorrectly uses ens01 : Example output - ipv4: # ... name: ens1 state: up type: ethernet Correct the error by editing the existing policy: USD oc edit nncp ens01-bridge-testfail # ... port: - name: ens1 Save the policy to apply the correction. Check the status of the policy to ensure it updated successfully: USD oc get nncp Example output NAME STATUS ens01-bridge-testfail SuccessfullyConfigured The updated policy is successfully configured on all nodes in the cluster. 28.3.2. Troubleshooting DNS connectivity issues in a disconnected environment If you experience DNS connectivity issues when configuring nmstate in a disconnected environment, you can configure the DNS server to resolve the list of name servers for the domain root-servers.net . 
Important Ensure that the DNS server includes a name server (NS) entry for the root-servers.net zone. The DNS server does not need to forward a query to an upstream resolver, but the server must return a correct answer for the NS query. 28.3.2.1. Configuring the bind9 DNS named server For a cluster configured to query a bind9 DNS server, you can add the root-servers.net zone to a configuration file that contains at least one NS record. For example you can use the /var/named/named.localhost as a zone file that already matches this criteria. Procedure Add the root-servers.net zone at the end of the /etc/named.conf configuration file by running the following command: USD cat >> /etc/named.conf <<EOF zone "root-servers.net" IN { type master; file "named.localhost"; }; EOF Restart the named service by running the following command: USD systemctl restart named Confirm that the root-servers.net zone is present by running the following command: USD journalctl -u named|grep root-servers.net Example output Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0 Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0 Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command: USD host -t NS root-servers.net. 127.0.0.1 Example output Using domain server: Name: 127.0.0.1 Address: 127.0.0.53 Aliases: root-servers.net name server root-servers.net. 28.3.2.2. Configuring the dnsmasq DNS server If you are using dnsmasq as the DNS server, you can delegate resolution of the root-servers.net domain to another DNS server, for example, by creating a new configuration file that resolves root-servers.net using a DNS server that you specify. Create a configuration file that delegates the domain root-servers.net to another DNS server by running the following command: USD echo 'server=/root-servers.net/<DNS_server_IP>'> /etc/dnsmasq.d/delegate-root-servers.net.conf Restart the dnsmasq service by running the following command: USD systemctl restart dnsmasq Confirm that the root-servers.net domain is delegated to another DNS server by running the following command: USD journalctl -u dnsmasq|grep root-servers.net Example output Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net Verify that the DNS server can resolve the NS record for the root-servers.net domain by running the following command: USD host -t NS root-servers.net. 127.0.0.1 Example output Using domain server: Name: 127.0.0.1 Address: 127.0.0.1#53 Aliases: root-servers.net name server root-servers.net. | [
"cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: openshift-nmstate spec: finalizers: - kubernetes EOF",
"cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-nmstate namespace: openshift-nmstate spec: targetNamespaces: - openshift-nmstate EOF",
"cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: kubernetes-nmstate-operator namespace: openshift-nmstate spec: channel: stable installPlanApproval: Automatic name: kubernetes-nmstate-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF",
"oc get clusterserviceversion -n openshift-nmstate -o custom-columns=Name:.metadata.name,Phase:.status.phase",
"Name Phase kubernetes-nmstate-operator.4.14.0-202210210157 Succeeded",
"cat << EOF | oc apply -f - apiVersion: nmstate.io/v1 kind: NMState metadata: name: nmstate EOF",
"oc get pod -n openshift-nmstate",
"Name Ready Status Restarts Age pod/nmstate-handler-wn55p 1/1 Running 0 77s pod/nmstate-operator-f6bb869b6-v5m92 1/1 Running 0 4m51s",
"oc delete --namespace openshift-nmstate subscription kubernetes-nmstate-operator",
"oc get --namespace openshift-nmstate clusterserviceversion",
"NAME DISPLAY VERSION REPLACES PHASE kubernetes-nmstate-operator.v4.18.0 Kubernetes NMState Operator 4.18.0 Succeeded",
"oc delete --namespace openshift-nmstate clusterserviceversion kubernetes-nmstate-operator.v4.18.0",
"oc -n openshift-nmstate delete nmstate nmstate",
"oc delete --all deployments --namespace=openshift-nmstate",
"INDEX=USD(oc get console.operator.openshift.io cluster -o json | jq -r '.spec.plugins | to_entries[] | select(.value == \"nmstate-console-plugin\") | .key')",
"oc patch console.operator.openshift.io cluster --type=json -p \"[{\\\"op\\\": \\\"remove\\\", \\\"path\\\": \\\"/spec/plugins/USDINDEX\\\"}]\" 1",
"oc delete crd nmstates.nmstate.io",
"oc delete crd nodenetworkconfigurationenactments.nmstate.io",
"oc delete crd nodenetworkstates.nmstate.io",
"oc delete crd nodenetworkconfigurationpolicies.nmstate.io",
"oc delete namespace kubernetes-nmstate",
"oc get nns",
"oc get nns node01 -o yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 maxUnavailable: 3 4 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 5 type: linux-bridge state: up ipv4: dhcp: true enabled: true auto-dns: false bridge: options: stp: enabled: false port: - name: eth1 dns-resolver: 6 config: search: - example.com - example.org server: - 8.8.8.8",
"oc apply -f br1-eth1-policy.yaml 1",
"oc get nncp",
"oc get nncp <policy> -o yaml",
"oc get nnce",
"oc get nnce <node>.<policy> -o yaml",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"oc apply -f <br1-eth1-policy.yaml> 1",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: qos 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: ens1f0 4 description: Change QOS on VF0 5 type: ethernet 6 state: up 7 ethernet: sr-iov: total-vfs: 3 8 vfs: - id: 0 9 max-tx-rate: 200 10",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: addvf 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 maxUnavailable: 3 desiredState: interfaces: - name: ens1f0v1 4 type: ethernet state: up - name: ens1f0v1.477 5 type: vlan state: up vlan: base-iface: ens1f0v1 6 id: 477 - name: bond0 7 description: Add vf 8 type: bond 9 state: up 10 link-aggregation: mode: active-backup 11 options: primary: ens1f1v0.477 12 port: 13 - ens1f1v0.477 - ens1f0v0.477 - ens1f0v1.477 14",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: bond-vlan 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond10 4 description: Bonding eth2 and eth3 5 type: bond 6 state: up 7 link-aggregation: mode: balance-xor 8 options: miimon: '140' 9 port: 10 - eth2 - eth3 - name: bond10.103 11 description: vlan using bond10 12 type: vlan 13 state: up 14 vlan: base-iface: bond10 15 id: 103 16 ipv4: dhcp: true 17 enabled: true 18",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: \"true\" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-copy-ipv4-policy 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" capture: eth1-nic: interfaces.name==\"eth1\" 3 eth1-routes: routes.running.next-hop-interface==\"eth1\" br1-routes: capture.eth1-routes | routes.running.next-hop-interface := \"br1\" desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port type: linux-bridge 4 state: up ipv4: \"{{ capture.eth1-nic.interfaces.0.ipv4 }}\" 5 bridge: options: stp: enabled: false port: - name: eth1 6 routes: config: \"{{ capture.br1-routes.routes.running }}\"",
"interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true",
"interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false",
"interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true",
"interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: worker-0-dns-testing spec: nodeSelector: kubernetes.io/hostname: <target_node> desiredState: dns-resolver: config: search: - example.com - example.org server: - 2001:db8:f::1 - 192.0.2.251",
"desiredState: dns-resolver: config: search: options: - timeout:2 - attempts:3",
"dns-resolver: config: {} interfaces: []",
"dns-resolver: config: server: - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true",
"dns-resolver: config: search: - example.com - example.org server: - 2001:db8:1::d1 - 2001:db8:1::d2 - 192.0.2.1 interfaces: - name: eth1 type: ethernet state: up ipv4: address: - ip: 192.0.2.251 prefix-length: 24 dhcp: false enabled: true ipv6: address: - ip: 2001:db8:1::1 prefix-length: 64 dhcp: false enabled: true autoconf: false",
"dns-resolver: config: search: - example.com - example.org server: [] interfaces: - name: eth1 type: ethernet state: up ipv4: enabled: true dhcp: true auto-dns: true ipv6: enabled: true dhcp: true autoconf: true auto-dns: true",
"dns-resolver: config: interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false enabled: true address: - ip: 192.0.2.251 1 prefix-length: 24 routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01",
"oc apply -f ens01-bridge-testfail.yaml",
"nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail FailedToConfigure",
"oc get nnce",
"NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure",
"oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'",
"[2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37 NmstateError: InvalidArgument: Controller interface br1 is holding unknown port ens01",
"oc get nns control-plane-1 -o yaml",
"- ipv4: name: ens1 state: up type: ethernet",
"oc edit nncp ens01-bridge-testfail",
"port: - name: ens1",
"oc get nncp",
"NAME STATUS ens01-bridge-testfail SuccessfullyConfigured",
"cat >> /etc/named.conf <<EOF zone \"root-servers.net\" IN { type master; file \"named.localhost\"; }; EOF",
"systemctl restart named",
"journalctl -u named|grep root-servers.net",
"Jul 03 15:16:26 rhel-8-10 bash[xxxx]: zone root-servers.net/IN: loaded serial 0 Jul 03 15:16:26 rhel-8-10 named[xxxx]: zone root-servers.net/IN: loaded serial 0",
"host -t NS root-servers.net. 127.0.0.1",
"Using domain server: Name: 127.0.0.1 Address: 127.0.0.53 Aliases: root-servers.net name server root-servers.net.",
"echo 'server=/root-servers.net/<DNS_server_IP>'> /etc/dnsmasq.d/delegate-root-servers.net.conf",
"systemctl restart dnsmasq",
"journalctl -u dnsmasq|grep root-servers.net",
"Jul 03 15:31:25 rhel-8-10 dnsmasq[1342]: using nameserver 192.168.1.1#53 for domain root-servers.net",
"host -t NS root-servers.net. 127.0.0.1",
"Using domain server: Name: 127.0.0.1 Address: 127.0.0.1#53 Aliases: root-servers.net name server root-servers.net."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/kubernetes-nmstate |
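For reference, a minimal sketch of the corrected policy from the troubleshooting example above, assuming the node interface really is named ens1 as reported in the NodeNetworkState output; you can apply it with oc apply -f or make the equivalent change with oc edit nncp ens01-bridge-testfail .
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: ens01-bridge-testfail
spec:
  desiredState:
    interfaces:
    - name: br1
      description: Linux bridge with ens1 as a port
      type: linux-bridge
      state: up
      ipv4:
        dhcp: true
        enabled: true
      bridge:
        options:
          stp:
            enabled: false
        port:
        - name: ens1 # corrected port name; the failing policy used ens01
Once the corrected policy is applied, oc get nnce should no longer report FailedToConfigure for the enactments.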
Builds using Shipwright | Builds using Shipwright Red Hat OpenShift Service on AWS 4 An extensible build framework to build container images on an OpenShift cluster Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/builds_using_shipwright/index |
Chapter 4. Troubleshooting a cluster update | Chapter 4. Troubleshooting a cluster update 4.1. Gathering data about your cluster update When reaching out to Red Hat support for issues with an update, it is important to provide data for the support team to use for troubleshooting your failed cluster update. 4.1.1. Gathering log data for a support case To gather data from your cluster, including log data, use the oc adm must-gather command. See Gathering data about your cluster . 4.1.2. Gathering ClusterVersion history The Cluster Version Operator (CVO) records updates made to a cluster, known as the ClusterVersion history. The entries can reveal correlation between changes in cluster behavior with potential triggers, although correlation does not imply causation. Note The initial, minor, and z-stream version updates are stored by the ClusterVersion history. However, the ClusterVersion history has a size limit. If the limit is reached, the oldest z-stream updates in minor versions are pruned to accommodate the limit. You can view the ClusterVersion history by using the OpenShift Container Platform web console or by using the OpenShift CLI ( oc ). 4.1.2.1. Gathering ClusterVersion history in the OpenShift Container Platform web console You can view the ClusterVersion history in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the OpenShift Container Platform web console. Procedure From the web console, click Administration Cluster Settings and review the contents of the Details tab. 4.1.2.2. Gathering ClusterVersion history using the OpenShift CLI ( oc ) You can view the ClusterVersion history using the OpenShift CLI ( oc ). Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure View the cluster update history by entering the following command: USD oc describe clusterversions/version Example output Desired: Channels: candidate-4.13 candidate-4.14 fast-4.13 fast-4.14 stable-4.13 Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4 URL: https://access.redhat.com/errata/RHSA-2023:6130 Version: 4.13.19 History: Completion Time: 2023-11-07T20:26:04Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4 Started Time: 2023-11-07T19:11:36Z State: Completed Verified: true Version: 4.13.19 Completion Time: 2023-10-04T18:53:29Z Image: quay.io/openshift-release-dev/ocp-release@sha256:eac141144d2ecd6cf27d24efe9209358ba516da22becc5f0abc199d25a9cfcec Started Time: 2023-10-04T17:26:31Z State: Completed Verified: true Version: 4.13.13 Completion Time: 2023-09-26T14:21:43Z Image: quay.io/openshift-release-dev/ocp-release@sha256:371328736411972e9640a9b24a07be0af16880863e1c1ab8b013f9984b4ef727 Started Time: 2023-09-26T14:02:33Z State: Completed Verified: false Version: 4.13.12 Observed Generation: 4 Version Hash: CMLl3sLq-EA= Events: <none> Additional resources Gathering data about your cluster | [
"oc describe clusterversions/version",
"Desired: Channels: candidate-4.13 candidate-4.14 fast-4.13 fast-4.14 stable-4.13 Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4 URL: https://access.redhat.com/errata/RHSA-2023:6130 Version: 4.13.19 History: Completion Time: 2023-11-07T20:26:04Z Image: quay.io/openshift-release-dev/ocp-release@sha256:a148b19231e4634196717c3597001b7d0af91bf3a887c03c444f59d9582864f4 Started Time: 2023-11-07T19:11:36Z State: Completed Verified: true Version: 4.13.19 Completion Time: 2023-10-04T18:53:29Z Image: quay.io/openshift-release-dev/ocp-release@sha256:eac141144d2ecd6cf27d24efe9209358ba516da22becc5f0abc199d25a9cfcec Started Time: 2023-10-04T17:26:31Z State: Completed Verified: true Version: 4.13.13 Completion Time: 2023-09-26T14:21:43Z Image: quay.io/openshift-release-dev/ocp-release@sha256:371328736411972e9640a9b24a07be0af16880863e1c1ab8b013f9984b4ef727 Started Time: 2023-09-26T14:02:33Z State: Completed Verified: false Version: 4.13.12 Observed Generation: 4 Version Hash: CMLl3sLq-EA= Events: <none>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/updating_clusters/troubleshooting-a-cluster-update |
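When attaching update history to a support case, a compact listing is often easier to work with than the full describe output. The following jsonpath query is a sketch that extracts the version, state, and completion time fields from the .status.history entries shown in the example output above; adjust the fields as needed.
USD oc get clusterversion version -o jsonpath='{range .status.history[*]}{.version}{"\t"}{.state}{"\t"}{.completionTime}{"\n"}{end}'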
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.9/proc-providing-feedback-on-redhat-documentation |
4.8. RHEA-2012:0838 - new package: java-1.7.0-oracle | 4.8. RHEA-2012:0838 - new package: java-1.7.0-oracle New java-1.7.0-oracle package is now available for Red Hat Enterprise Linux 6. The java-1.7.0-oracle package provides the Oracle Java 7 Runtime Environment and the Oracle Java 7 Software Development Kit. This update adds the java-1.7.0-oracle packages to Red Hat Enterprise Linux 6. (BZ# 720928 ) Note Before applying this update, make sure that any Oracle Java packages have been removed. All users who require java-1.7.0-oracle should install these new packages. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/java-1_7_0-oracle |
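A sketch of the corresponding yum transaction on Red Hat Enterprise Linux 6 follows; the java-1.6.0-oracle wildcard is only an assumption about what might have been installed previously, so check the output of the first command and adjust the package names to match your system.
USD yum list installed '*oracle*'
USD yum remove 'java-1.6.0-oracle*'
USD yum install java-1.7.0-oracle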
Chapter 1. Understanding smart card authentication | Chapter 1. Understanding smart card authentication Authentication based on smart cards is an alternative to passwords. You can store user credentials on a smart card in the form of a private key and a certificate, and special software and hardware is used to access them. Place the smart card into a reader or a USB port and supply the PIN code for the smart card instead of providing your password. This section describes what a smart card is and how smart card authentication works. It describes the tools that you can use to read and manipulate smart card content. It also provides sample use cases and describes the setup of both the IdM server and IdM client for smart card authentication. Note If you want to start to use smart card authentication, see the hardware requirements: Smart Card support in RHEL9 . 1.1. What is a smart card A smart card is a physical device, usually a plastic card with a microprocessor, that can provide personal authentication using certificates stored on the card. Personal authentication means that you can use smart cards in the same way as user passwords. You can store user credentials on the smart card in the form of a private key and a certificate, and special software and hardware is used to access them. You place the smart card into a reader or a USB socket and supply the PIN code for the smart card instead of providing your password. 1.2. What is smart card authentication Public-key based authentication and certificate based authentication are two widely used alternatives to password based authentication. Your identity is confirmed by using public and private keys instead of your password. A certificate is an electronic document used to identify an individual, a server, a company, or other entity and to associate that identity with a public key. Like a driver's license or passport, a certificate provides generally recognized proof of a person's identity. Public-key cryptography uses certificates to address the problem of impersonation. In the case of smart card authentication, your user credentials, that is your public and private keys and certificate, are stored on a smart card and can only be used after the smart card is inserted into the reader and a PIN is provided. As you need to possess a physical device, the smart card, and know its PIN, smart card authentication is considered as a type of two factor authentication. 1.2.1. Examples of smart card authentication in IdM The following examples describe two simple scenarios on how you can use smart cards in IdM. 1.2.1.1. Logging in to your system with a smart card You can use a smart card to authenticate to a RHEL system as a local user. If your system is configured to enforce smart card login, you are prompted to insert your smart card and enter its PIN and, if that fails, you cannot log in to your system. Alternatively, you can configure your system to authenticate using either smart card authentication or your user name and password. In this case, if you do not have your smart card inserted, you are prompted for your user name and password. 1.2.1.2. Logging in to GDM with lock on removal You can activate the lock on removal function if you have configured smart card authentication on your RHEL system. If you are logged in to the GNOME Display Manager (GDM) and you remove your smart card, screen lock is enabled and you must reinsert your smart card and authenticate with the PIN to unlock the screen. 
You cannot use your user name and password to authenticate. Note If you are logged in to GDM and you remove your smart card, screen lock is enabled and you must reinsert your smart card and authenticate with the PIN to unlock the screen. 1.3. Smart card authentication options in RHEL You can configure how you want smart card authentication to work in a particular Identity Management (IdM) client by using the authselect command, authselect enable-feature <smartcardoption> . The following smart card options are available: with-smartcard : Users can authenticate with the user name and password or with their smart card. with-smartcard-required : Users can authenticate with their smart cards, and password authentication is disabled. You cannot access the system without your smart card. Once you have authenticated with your smart card, you can stay logged in even if your smart card is removed from its reader. Note The with-smartcard-required option only enforces exclusive smart card authentication for login services, such as login , gdm , xdm , xscreensaver , and gnome-screensaver . For other services, such as su or sudo for switching users, smart card authentication is not enforced and if your smart card is not inserted, you are prompted for a password. with-smartcard-lock-on-removal : Users can authenticate with their smart card. However, if you remove your smart card from its reader, you are automatically locked out of the system. You cannot use password authentication. Note The with-smartcard-lock-on-removal option only works on systems with the GNOME desktop environment. If you are using a system that is tty or console based and you remove your smart card from its reader, you are not automatically locked out of the system. For more information, see Configuring smart cards using authselect . 1.4. Tools for managing smart cards and their contents You can use many different tools to manage the keys and certificates stored on your smart cards. You can use these tools to do the following: List available smart card readers connected to a system. List available smart cards and view their contents. Manipulate the smart card content, that is the keys and certificates. There are many tools that provide similar functionality but some work at different layers of your system. Smart cards are managed on multiple layers by multiple components. On the lower level, the operating system communicates with the smart card reader using the PC/SC protocol, and this communication is handled by the pcsc-lite daemon. The daemon forwards the commands received to the smart card reader typically over USB, which is handled by low-level CCID driver. The PC/SC low level communication is rarely seen on the application level. The main method in RHEL for applications to access smart cards is via a higher level application programming interface (API), the OASIS PKCS#11 API, which abstracts the card communication to specific commands that operate on cryptographic objects, for example, private keys. Smart card vendors provide a shared module, such as an .so file, which follows the PKCS#11 API and serves as a driver for the smart card. You can use the following tools to manage your smart cards and their contents: OpenSC tools: work with the drivers implemented in opensc . opensc-tool: perform smart card operations. pkcs15-tool: manage the PKCS#15 data structures on smart cards, such as listing and reading PINs, keys, and certificates stored on the token. 
pkcs11-tool: manage the PKCS#11 data objects on smart cards, such as listing and reading PINs, keys, and certificates stored on the token. GnuTLS utils: an API for applications to enable secure communication over the network transport layer, as well as interfaces to access X.509, PKCS#12, OpenPGP, and other structures. p11tool: perform operations on PKCS#11 smart cards and security modules. certtool: parse and generate X.509 certificates, requests, and private keys. Network Security Services (NSS) Tools: a set of libraries designed to support the cross-platform development of security-enabled client and server applications. Applications built with NSS can support SSL v3, TLS, PKCS #5, PKCS #7, PKCS #11, PKCS #12, S/MIME, X.509 v3 certificates, and other security standards. modutil: manage PKCS#11 module information with the security module database. certutil: manage keys and certificates in both NSS databases and other NSS tokens. For more information about using these tools to troubleshoot issues with authenticating using a smart card, see Troubleshooting authentication with smart cards . Additional resources opensc-tool , pkcs15-tool , and pkcs11-tool man pages on your system p11tool and certtool man pages on your system modutil and certutil man pages on your system 1.5. Certificates and smart card authentication If you use Identity Management (IdM) or Active Directory (AD) to manage identity stores, authentication, policies, and authorization policies in your domain, the certificates used for authentication are generated by IdM or AD, respectively. You can also use certificates provided by an external certificate authority and in this case you must configure Active Directory or IdM to accept certificates from the external provider. If the user is not part of a domain, you can use a certificate generated by a local certificate authority. For details, refer to the following sections: Configuring Identity Management for smart card authentication Configuring certificates issued by ADCS for smart card authentication in IdM Managing externally signed certificates for IdM users, hosts, and services Configuring and importing local certificates to a smart card For a full list of certificates eligible for smart card authentication, see Certificates eligible for smart cards . 1.6. Required steps for smart card authentication in IdM You must ensure the following steps have been followed before you can authenticate with a smart card in Identity Management (IdM): Configure your IdM server for smart card authentication. See Configuring the IdM server for smart card authentication Configure your IdM client for smart card authentication. See Configuring the IdM client for smart card authentication Add the certificate to the user entry in IdM. See Adding a certificate to a user entry in the IdM Web UI Store your keys and certificates on the smart card. See Storing a certificate on a smart card 1.7. Required steps for smart card authentication with certificates issued by Active Directory You must ensure the following steps have been followed before you can authenticate with a smart card with certificates issued by Active Directory (AD): Copy the CA and user certificates from Active Directory to the IdM server and client . Configure the IdM server and clients for smart card authentication using ADCS certificates . Convert the PFX (PKCS#12) file to be able to store the certificate and private key on the smart card . Configure timeouts in the sssd.conf file . 
Create certificate mapping rules for smart card authentication . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_smart_card_authentication/assembly_understanding-smart-card-authentication_managing-smart-card-authentication |
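To make the configuration and inspection steps above concrete, the following commands sketch a typical client-side check; the with-smartcard feature is one of the options listed above, and the NSS database path is only an example.
USD authselect current
USD authselect enable-feature with-smartcard
USD pkcs11-tool --list-slots
USD pkcs11-tool --list-objects --type cert
USD certutil -L -d /etc/pki/nssdb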
Chapter 11. Pacemaker Rules Rules can be used to make your configuration more dynamic. One use of rules might be to assign machines to different processing groups (using a node attribute) based on time and to then use that attribute when creating location constraints. Each rule can contain a number of expressions, date-expressions and even other rules. The results of the expressions are combined based on the rule's boolean-op field to determine if the rule ultimately evaluates to true or false . What happens depends on the context in which the rule is being used. Table 11.1. Properties of a Rule Field Description role Limits the rule to apply only when the resource is in that role. Allowed values: Started , Slave, and Master . NOTE: A rule with role="Master" cannot determine the initial location of a clone instance. It will only affect which of the active instances will be promoted. score The score to apply if the rule evaluates to true . Limited to use in rules that are part of location constraints. score-attribute The node attribute to look up and use as a score if the rule evaluates to true . Limited to use in rules that are part of location constraints. boolean-op How to combine the result of multiple expression objects. Allowed values: and and or . The default value is and . 11.1. Node Attribute Expressions Node attribute expressions are used to control a resource based on the attributes defined by a node or nodes. Table 11.2. Properties of an Expression Field Description attribute The node attribute to test type Determines how the value(s) should be tested. Allowed values: string , integer , version . The default value is string operation The comparison to perform. Allowed values: * lt - True if the node attribute's value is less than value * gt - True if the node attribute's value is greater than value * lte - True if the node attribute's value is less than or equal to value * gte - True if the node attribute's value is greater than or equal to value * eq - True if the node attribute's value is equal to value * ne - True if the node attribute's value is not equal to value * defined - True if the node has the named attribute * not_defined - True if the node does not have the named attribute value User supplied value for comparison (required) In addition to any attributes added by the administrator, the cluster defines special, built-in node attributes for each node that can also be used, as described in Table 11.3, "Built-in Node Attributes" . Table 11.3. Built-in Node Attributes Name Description #uname Node name #id Node ID #kind Node type. Possible values are cluster , remote , and container . The value of kind is remote for Pacemaker Remote nodes created with the ocf:pacemaker:remote resource, and container for Pacemaker Remote guest nodes and bundle nodes. #is_dc true if this node is a Designated Controller (DC), false otherwise #cluster_name The value of the cluster-name cluster property, if set #site_name The value of the site-name node attribute, if set, otherwise identical to #cluster-name #role The role the relevant multistate resource has on this node. Valid only within a rule for a location constraint for a multistate resource. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-pacemakerrules-haar
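As an illustration of how a node attribute expression can drive placement, the following pcs commands sketch the processing-group idea from the chapter introduction; the node, resource, and attribute names are assumptions for the example.
USD pcs node attribute node01.example.com processing-group=fast
USD pcs constraint location Webserver rule score=INFINITY processing-group eq fast
The rule evaluates to true only on nodes whose processing-group attribute equals fast , so the Webserver resource prefers those nodes.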
4.75. gnome-screensaver | 4.75. gnome-screensaver 4.75.1. RHEA-2011:1652 - gnome-screensaver bug fix and enhancement update An updated gnome-screensaver package that fixes various bugs and adds one enhancement is now available for Red Hat Enterprise Linux 6. The gnome-screensaver package contains the GNOME project's official screen saver program. It is designed for improved integration with the GNOME desktop, including themeability, language support, and Human Interface Guidelines (HIG) compliance. It also provides screen-locking and fast user-switching from a locked screen. Bug Fixes BZ# 648850 When the user locked the screen and the X Window System did not support the X Resize, Rotate (XRandR) or XF86VM gamma fade extensions, then the gnome-screensaver utility terminated with a segmentation fault. With this update, additional checks are made before calling the fade_setup() function, and gnome-screensaver no longer terminates. BZ# 697892 Prior to this update, the Unlock dialog box arbitrarily changed between the monitors in dual head setups, based on the position of the mouse pointer. The Unlock dialog box is now placed on a consistent monitor instead of where the mouse is located. BZ# 719023 Previously, when docking a laptop and using an external monitor, parts of the background got cut off due to incorrect logic for determining monitor dimensions. With this update, the source code is modified and the login screen is now displayed correctly. BZ# 740892 Previously, in rare cases, the screen saver entered a deadlock if monitors were removed during the fade up. The screen was locked as a consequence. This update modifies gnome-screensaver so that the screen saver responds as expected. Enhancement BZ# 677580 Previously, there was no indicator of the keyboard layout when the screen was locked. Users who used more than one layout did not know which layout was active. Consequently, users could be forced to type the password several times. This update adds the missing keyboard layout indicator. All users of gnome-screensaver are advised to upgrade to this updated package, which fixes these bugs and adds this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gnome-screensaver |
probe::linuxmib.DelayedACKs | probe::linuxmib.DelayedACKs Name probe::linuxmib.DelayedACKs - Count of delayed acks Synopsis linuxmib.DelayedACKs Values op Value to be added to the counter (default value of 1) sk Pointer to the struct sock being acted on Description The packet pointed to by skb is filtered by the function linuxmib_filter_key . If the packet passes the filter is is counted in the global DelayedACKs (equivalent to SNMP's MIB LINUX_MIB_DELAYEDACKS) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-linuxmib-delayedacks |
Chapter 1. Preparing to install on a single node | Chapter 1. Preparing to install on a single node 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You have read the documentation on selecting a cluster installation method and preparing it for users . 1.2. About OpenShift on a single node You can create a single-node cluster with standard installation methods. OpenShift Container Platform on a single node is a specialized installation that requires the creation of a special Ignition configuration file. The primary use case is for edge computing workloads, including intermittent connectivity, portable clouds, and 5G radio access networks (RAN) close to a base station. The major tradeoff with an installation on a single node is the lack of high availability. Important The use of OpenShiftSDN with single-node OpenShift is not supported. OVN-Kubernetes is the default network plugin for single-node OpenShift deployments. 1.3. Requirements for installing OpenShift on a single node Installing OpenShift Container Platform on a single node alleviates some of the requirements for high availability and large scale clusters. However, you must address the following requirements: Administration host: You must have a computer to prepare the ISO, to create the USB boot drive, and to monitor the installation. Note For the ppc64le platform, the host should prepare the ISO, but does not need to create the USB boot drive. The ISO can be mounted to PowerVM directly. Note ISO is not required for IBM Z(R) installations. CPU Architecture: Installing OpenShift Container Platform on a single node supports x86_64 , arm64 , ppc64le , and s390x CPU architectures. Supported platforms: Installing OpenShift Container Platform on a single node is supported on bare metal and Certified third-party hypervisors . In most cases, you must specify the platform.none: {} parameter in the install-config.yaml configuration file. The following list shows the only exceptions and the corresponding parameter to specify in the install-config.yaml configuration file: Amazon Web Services (AWS), where you use platform=aws Google Cloud Platform (GCP), where you use platform=gcp Microsoft Azure, where you use platform=azure Production-grade server: Installing OpenShift Container Platform on a single node requires a server with sufficient resources to run OpenShift Container Platform services and a production workload. Table 1.1. Minimum resource requirements Profile vCPU Memory Storage Minimum 8 vCPUs 16 GB of RAM 120 GB Note One vCPU equals one physical core. However, if you enable simultaneous multithreading (SMT), or Hyper-Threading, use the following formula to calculate the number of vCPUs that represent one physical core: (threads per core x cores) x sockets = vCPUs Adding Operators during the installation process might increase the minimum resource requirements. The server must have a Baseboard Management Controller (BMC) when booting with virtual media. Note BMC is not supported on IBM Z(R) and IBM Power(R). Networking: The server must have access to the internet or access to a local registry if it is not connected to a routable network. The server must have a DHCP reservation or a static IP address for the Kubernetes API, ingress route, and cluster node domain names. You must configure the DNS to resolve the IP address to each of the following fully qualified domain names (FQDN): Table 1.2. 
Required DNS records Usage FQDN Description Kubernetes API api.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record. This record must be resolvable by both clients external to the cluster and within the cluster. Internal API api-int.<cluster_name>.<base_domain> Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. Ingress route *.apps.<cluster_name>.<base_domain> Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by both clients external to the cluster and within the cluster. Important Without persistent IP addresses, communications between the apiserver and etcd might fail. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_a_single_node/preparing-to-install-sno |
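To make the platform and sizing requirements concrete, here is a hedged sketch of an install-config.yaml for a single-node cluster on bare metal; the cluster name, domain, network range, and credential placeholders are assumptions, and some installation flows require additional fields such as bootstrapInPlace.installationDisk .
apiVersion: v1
baseDomain: example.com
metadata:
  name: sno-cluster
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 1
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 192.168.111.0/24
platform:
  none: {}
pullSecret: '<pull_secret>'
sshKey: '<ssh_key>'
The single control plane replica and zero compute replicas define the single-node topology, and platform.none: {} matches the bare-metal case described above.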
function::qsq_start | function::qsq_start Name function::qsq_start - Function to reset the stats for a queue Synopsis Arguments qname the name of the service that finished Description This function resets the statistics counters for the given queue, and restarts tracking from the moment the function was called. This function is also used to create intialize a queue. | [
"qsq_start(qname:string)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-qsq-start |
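The following SystemTap sketch shows the intended call pattern around qsq_start ; the queue name disk and the block I/O probe points are illustrative assumptions, and a real script would place qs_wait , qs_run , and qs_done wherever requests actually enter, start, and finish service.
probe begin { qsq_start("disk") }
probe ioblock.request { qs_wait("disk"); qs_run("disk") }
probe ioblock.end { qs_done("disk") }
probe timer.s(10) {
  qsq_print("disk")
  qsq_start("disk") # reset the counters for the next interval
}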
Managing cloud resources with the OpenStack Dashboard | Managing cloud resources with the OpenStack Dashboard Red Hat OpenStack Platform 17.1 Viewing and configuring the OpenStack Dashboard GUI OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_cloud_resources_with_the_openstack_dashboard/index |
probe::tty.resize | probe::tty.resize Name probe::tty.resize - Called when a terminal resize happens Synopsis tty.resize Values new_row the new row value old_row the old row value name the tty name new_col the new col value old_xpixel the old xpixel old_col the old col value new_xpixel the new xpixel value old_ypixel the old ypixel new_ypixel the new ypixel value | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tty-resize |
Chapter 8. Clustering | Chapter 8. Clustering Dynamic Token Timeout for Corosync The token_coefficient option has been added to the Corosync Cluster Engine . The value of token_coefficient is used only when the nodelist section is specified and contains at least three nodes. In such a situation, the token timeout is computed as follows: This allows the cluster to scale without manually changing the token timeout every time a new node is added. The default value is 650 milliseconds, but it can be set to 0, resulting in effective removal of this feature. This feature allows Corosync to handle dynamic addition and removal of nodes. Corosync Tie Breaker Enhancement The auto_tie_breaker quorum feature of Corosync has been enhanced to provide options for more flexible configuration and modification of tie breaker nodes. Users can now select a list of nodes that will retain a quorum in case of an even cluster split, or choose that a quorum will be retained by the node with the lowest node ID or the highest node ID. Enhancements for Red Hat High Availability For the Red Hat Enterprise Linux 7.1 release, the Red Hat High Availability Add-On supports the following features. For information on these features, see the High Availability Add-On Reference manual. The pcs resource cleanup command can now reset the resource status and failcount for all resources. You can specify a lifetime parameter for the pcs resource move command to indicate a period of time that the resource constraint this command creates will remain in effect. You can use the pcs acl command to set permissions for local users to allow read-only or read-write access to the cluster configuration by using access control lists (ACLs). The pcs constraint command now supports the configuration of specific constraint options in addition to general resource options. The pcs resource create command supports the disabled parameter to indicate that the resource being created is not started automatically. The pcs cluster quorum unblock command prevents the cluster from waiting for all nodes when establishing a quorum. You can configure resource group order with the before and after parameters of the pcs resource create command. You can back up the cluster configuration in a tarball and restore the cluster configuration files on all nodes from backup with the backup and restore options of the pcs config command. | [
"[token + (amount of nodes - 2)] * token_coefficient"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-clustering |
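A hedged corosync.conf sketch showing where the new option sits; the values are illustrative, with token_coefficient left at the default of 650 milliseconds mentioned above, and a nodelist section with at least three nodes is what makes the coefficient take effect.
totem {
    version: 2
    cluster_name: mycluster
    token: 1000
    token_coefficient: 650
}
The pcs features listed above follow their usual syntax, for example pcs resource move resource1 node2 lifetime=PT1H30M for a time-limited move constraint.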
Installing on Alibaba | Installing on Alibaba OpenShift Container Platform 4.14 Installing OpenShift Container Platform on Alibaba Cloud Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_alibaba/index |
Chapter 9. PackageManifest [packages.operators.coreos.com/v1] | Chapter 9. PackageManifest [packages.operators.coreos.com/v1] Description PackageManifest holds information about a package, which is a reference to one (or more) channels under a single package. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object PackageManifestSpec defines the desired state of PackageManifest status object PackageManifestStatus represents the current status of the PackageManifest 9.1.1. .spec Description PackageManifestSpec defines the desired state of PackageManifest Type object 9.1.2. .status Description PackageManifestStatus represents the current status of the PackageManifest Type object Required catalogSource catalogSourceDisplayName catalogSourcePublisher catalogSourceNamespace packageName channels defaultChannel Property Type Description catalogSource string CatalogSource is the name of the CatalogSource this package belongs to catalogSourceDisplayName string catalogSourceNamespace string CatalogSourceNamespace is the namespace of the owning CatalogSource catalogSourcePublisher string channels array Channels are the declared channels for the package, ala stable or alpha . channels[] object PackageChannel defines a single channel under a package, pointing to a version of that package. defaultChannel string DefaultChannel is, if specified, the name of the default channel for the package. The default channel will be installed if no other channel is explicitly given. If the package has a single channel, then that channel is implicitly the default. packageName string PackageName is the name of the overall package, ala etcd . provider object AppLink defines a link to an application 9.1.3. .status.channels Description Channels are the declared channels for the package, ala stable or alpha . Type array 9.1.4. .status.channels[] Description PackageChannel defines a single channel under a package, pointing to a version of that package. Type object Required name currentCSV entries Property Type Description currentCSV string CurrentCSV defines a reference to the CSV holding the version of this package currently for the channel. currentCSVDesc object CSVDescription defines a description of a CSV entries array Entries lists all CSVs in the channel, with their upgrade edges. entries[] object ChannelEntry defines a member of a package channel. name string Name is the name of the channel, e.g. alpha or stable 9.1.5. 
.status.channels[].currentCSVDesc Description CSVDescription defines a description of a CSV Type object Property Type Description annotations object (string) apiservicedefinitions APIServiceDefinitions customresourcedefinitions CustomResourceDefinitions description string LongDescription is the CSV's description displayName string DisplayName is the CSV's display name icon array Icon is the CSV's base64 encoded icon icon[] object Icon defines a base64 encoded icon and media type installModes array (InstallMode) InstallModes specify supported installation types keywords array (string) links array links[] object AppLink defines a link to an application maintainers array maintainers[] object Maintainer defines a project maintainer maturity string minKubeVersion string Minimum Kubernetes version for operator installation nativeApis array (GroupVersionKind) provider object AppLink defines a link to an application relatedImages array (string) List of related images version OperatorVersion Version is the CSV's semantic version 9.1.6. .status.channels[].currentCSVDesc.icon Description Icon is the CSV's base64 encoded icon Type array 9.1.7. .status.channels[].currentCSVDesc.icon[] Description Icon defines a base64 encoded icon and media type Type object Property Type Description base64data string mediatype string 9.1.8. .status.channels[].currentCSVDesc.links Description Type array 9.1.9. .status.channels[].currentCSVDesc.links[] Description AppLink defines a link to an application Type object Property Type Description name string url string 9.1.10. .status.channels[].currentCSVDesc.maintainers Description Type array 9.1.11. .status.channels[].currentCSVDesc.maintainers[] Description Maintainer defines a project maintainer Type object Property Type Description email string name string 9.1.12. .status.channels[].currentCSVDesc.provider Description AppLink defines a link to an application Type object Property Type Description name string url string 9.1.13. .status.channels[].entries Description Entries lists all CSVs in the channel, with their upgrade edges. Type array 9.1.14. .status.channels[].entries[] Description ChannelEntry defines a member of a package channel. Type object Required name Property Type Description name string Name is the name of the bundle for this entry. version string Version is the version of the bundle for this entry. 9.1.15. .status.provider Description AppLink defines a link to an application Type object Property Type Description name string url string 9.2. API endpoints The following API endpoints are available: /apis/packages.operators.coreos.com/v1/packagemanifests GET : list objects of kind PackageManifest /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests GET : list objects of kind PackageManifest /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name} GET : read the specified PackageManifest /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name}/icon GET : connect GET requests to icon of PackageManifest 9.2.1. /apis/packages.operators.coreos.com/v1/packagemanifests Table 9.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . 
In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind PackageManifest Table 9.2. HTTP responses HTTP code Reponse body 200 - OK PackageManifestList schema 9.2.2. /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests Table 9.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.4. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. 
- resourceVersionMatch set to any other value or unset: Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind PackageManifest Table 9.5. HTTP responses HTTP code Response body 200 - OK PackageManifestList schema 9.2.3. /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the PackageManifest namespace string object name and auth scope, such as for teams and projects Table 9.7. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read the specified PackageManifest Table 9.8. HTTP responses HTTP code Response body 200 - OK PackageManifest schema 9.2.4. /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name}/icon Table 9.9. Global path parameters Parameter Type Description name string name of the PackageManifest namespace string object name and auth scope, such as for teams and projects HTTP method GET Description connect GET requests to icon of PackageManifest Table 9.10. HTTP responses HTTP code Response body 200 - OK string | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/operatorhub_apis/packagemanifest-packages-operators-coreos-com-v1 |
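As a rough illustration of how these endpoints are typically exercised, the following oc commands issue the same GET requests; the openshift-marketplace namespace, the <package_name> placeholder, the 50-item page size, and the catalog=redhat-operators label value are example assumptions, not values required by the API. oc get --raw lets you pass the query parameters described above, such as limit and labelSelector, directly to the server:
# List package manifests in an example catalog namespace
oc get packagemanifests -n openshift-marketplace
# Read a single PackageManifest; <package_name> is a placeholder
oc get packagemanifest <package_name> -n openshift-marketplace -o yaml
# Call the cluster-scoped endpoint with the limit query parameter for pagination
oc get --raw '/apis/packages.operators.coreos.com/v1/packagemanifests?limit=50'
# Call the namespaced endpoint with a labelSelector (assumed label value)
oc get --raw '/apis/packages.operators.coreos.com/v1/namespaces/openshift-marketplace/packagemanifests?labelSelector=catalog%3Dredhat-operators'
If the returned list metadata contains a continue token, repeating the request with that token and the same remaining query parameters retrieves the next chunk, as described for the continue and limit parameters above.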
Chapter 6. Configuring Satellite Server with external services | Chapter 6. Configuring Satellite Server with external services If you do not want to configure the DNS, DHCP, and TFTP services on Satellite Server, use this section to configure your Satellite Server to work with external DNS, DHCP, and TFTP services. 6.1. Configuring Satellite Server with external DNS You can configure Satellite Server with external DNS. Satellite Server uses the nsupdate utility to update DNS records on the remote server. To make any changes persistent, you must enter the satellite-installer command with the options appropriate for your environment. Prerequisites You must have a configured external DNS server. This guide assumes you have an existing installation. Procedure Copy the /etc/rndc.key file from the external DNS server to Satellite Server: Configure the ownership, permissions, and SELinux context: To test the nsupdate utility, add a host remotely: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dns.yml file: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the DNS service with the appropriate subnets and domain. 6.2. Configuring Satellite Server with external DHCP To configure Satellite Server with external DHCP, you must complete the following procedures: Section 6.2.1, "Configuring an external DHCP server to use with Satellite Server" Section 6.2.2, "Configuring Satellite Server with an external DHCP server" 6.2.1. Configuring an external DHCP server to use with Satellite Server To configure an external DHCP server running Red Hat Enterprise Linux to use with Satellite Server, you must install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages. You must also share the DHCP configuration and lease files with Satellite Server. The example in this procedure uses the distributed Network File System (NFS) protocol to share the DHCP configuration and lease files. Note If you use dnsmasq as an external DHCP server, enable the dhcp-no-override setting. This is required because Satellite creates configuration files on the TFTP server under the grub2/ subdirectory. If the dhcp-no-override setting is disabled, hosts fetch the bootloader and its configuration from the root directory, which might cause an error. Procedure On your Red Hat Enterprise Linux host, install the ISC DHCP Service and Berkeley Internet Name Domain (BIND) utilities packages: Generate a security token: Edit the dhcpd configuration file for all subnets and add the key generated by tsig-keygen . The following is an example: Note that the option routers value is the IP address of your Satellite Server or Capsule Server that you want to use with an external DHCP service. On Satellite Server, define each subnet. Do not set DHCP Capsule for the defined Subnet yet. To prevent conflicts, set up the lease and reservation ranges separately. For example, if the lease range is 192.168.38.10 to 192.168.38.100, in the Satellite web UI define the reservation range as 192.168.38.101 to 192.168.38.250. 
Configure the firewall for external access to the DHCP server: Make the changes persistent: On Satellite Server, determine the UID and GID of the foreman user: On the DHCP server, create the foreman user and group with the same IDs as determined in a step: To ensure that the configuration files are accessible, restore the read and execute flags: Enable and start the DHCP service: Export the DHCP configuration and lease files using NFS: Create directories for the DHCP configuration and lease files that you want to export using NFS: To create mount points for the created directories, add the following line to the /etc/fstab file: Mount the file systems in /etc/fstab : Ensure the following lines are present in /etc/exports : Note that the IP address that you enter is the Satellite or Capsule IP address that you want to use with an external DHCP service. Reload the NFS server: Configure the firewall for DHCP omapi port 7911: Optional: Configure the firewall for external access to NFS. Clients are configured using NFSv3. Make the changes persistent: 6.2.2. Configuring Satellite Server with an external DHCP server You can configure Satellite Server with an external DHCP server. Prerequisites Ensure that you have configured an external DHCP server and that you have shared the DHCP configuration and lease files with Satellite Server. For more information, see Section 6.2.1, "Configuring an external DHCP server to use with Satellite Server" . Procedure Install the nfs-utils package: Create the DHCP directories for NFS: Change the file owner: Verify communication with the NFS server and the Remote Procedure Call (RPC) communication paths: Add the following lines to the /etc/fstab file: Mount the file systems on /etc/fstab : To verify that the foreman-proxy user can access the files that are shared over the network, display the DHCP configuration and lease files: Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/dhcp.yml file: Associate the DHCP service with the appropriate subnets and domain. 6.3. Configuring Satellite Server with external TFTP You can configure Satellite Server with external TFTP services. Procedure Create the TFTP directory for NFS: In the /etc/fstab file, add the following line: Mount the file systems in /etc/fstab : Enter the satellite-installer command to make the following persistent changes to the /etc/foreman-proxy/settings.d/tftp.yml file: If the TFTP service is running on a different server than the DHCP service, update the tftp_servername setting with the FQDN or IP address of the server that the TFTP service is running on: In the Satellite web UI, navigate to Infrastructure > Capsules . Locate the Satellite Server and select Refresh from the list in the Actions column. Associate the TFTP service with the appropriate subnets and domain. 6.4. Configuring Satellite Server with external IdM DNS When Satellite Server adds a DNS record for a host, it first determines which Capsule is providing DNS for that domain. It then communicates with the Capsule that is configured to provide DNS service for your deployment and adds the record. The hosts are not involved in this process. Therefore, you must install and configure the IdM client on the Satellite or Capsule that is currently configured to provide a DNS service for the domain you want to manage using the IdM server. Satellite Server can be configured to use a Red Hat Identity Management (IdM) server to provide DNS service. 
For more information about Red Hat Identity Management, see the Linux Domain Identity, Authentication, and Policy Guide . To configure Satellite Server to use a Red Hat Identity Management (IdM) server to provide DNS service, use one of the following procedures: Section 6.4.1, "Configuring dynamic DNS update with GSS-TSIG authentication" Section 6.4.2, "Configuring dynamic DNS update with TSIG authentication" To revert to internal DNS service, use the following procedure: Section 6.4.3, "Reverting to internal DNS service" Note You are not required to use Satellite Server to manage DNS. When you are using the realm enrollment feature of Satellite, where provisioned hosts are enrolled automatically to IdM, the ipa-client-install script creates DNS records for the client. Configuring Satellite Server with external IdM DNS and realm enrollment are mutually exclusive. For more information about configuring realm enrollment, see Section 5.8, "External authentication for provisioned hosts" . 6.4.1. Configuring dynamic DNS update with GSS-TSIG authentication You can configure the IdM server to use the generic security service algorithm for secret key transaction (GSS-TSIG) technology defined in RFC3645 . To configure the IdM server to use the GSS-TSIG technology, you must install the IdM client on the Satellite Server base operating system. Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements for IdM in the Installing Identity Management Guide . You must contact the IdM server administrator to ensure that you obtain an account on the IdM server with permissions to create zones on the IdM server. You should create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with GSS-TSIG authentication, complete the following steps: Creating a Kerberos principal on the IdM server Obtain a Kerberos ticket for the account obtained from the IdM administrator: Create a new Kerberos principal for Satellite Server to use to authenticate on the IdM server: Installing and configuring the idM client On the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment, install the ipa-client package: Configure the IdM client by running the installation script and following the on-screen prompts: Obtain a Kerberos ticket: Remove any preexisting keytab : Obtain the keytab for this system: Note When adding a keytab to a standby system with the same host name as the original system in service, add the r option to prevent generating new credentials and rendering the credentials on the original system invalid. For the dns.keytab file, set the group and owner to foreman-proxy : Optional: To verify that the keytab file is valid, enter the following command: Configuring DNS zones in the IdM web UI Create and configure the zone that you want to manage: Navigate to Network Services > DNS > DNS Zones . Select Add and enter the zone name. For example, example.com . Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Enable Allow PTR sync . Click Save to save the changes. Create and configure the reverse zone: Navigate to Network Services > DNS > DNS Zones . Click Add . 
Select Reverse zone IP network and add the network address in CIDR format to enable reverse lookups. Click Add and Edit . Click the Settings tab and in the BIND update policy box, add the following to the semi-colon separated list: Set Dynamic update to True . Click Save to save the changes. Configuring the Satellite or Capsule Server that manages the DNS service for the domain Configure your Satellite Server or Capsule Server to connect to your DNS service: For each affected Capsule, update the configuration of that Capsule in the Satellite web UI: In the Satellite web UI, navigate to Infrastructure > Capsules , locate the Satellite Server, and from the list in the Actions column, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and select the domain name. In the Domain tab, ensure DNS Capsule is set to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to None . In the Domains tab, select the domain that you want to manage using the IdM server. In the Capsules tab, ensure Reverse DNS Capsule is set to the Capsule where the subnet is connected. Click Submit to save the changes. 6.4.2. Configuring dynamic DNS update with TSIG authentication You can configure an IdM server to use the secret key transaction authentication for DNS (TSIG) technology that uses the rndc.key key file for authentication. The TSIG protocol is defined in RFC2845 . Prerequisites You must ensure the IdM server is deployed and the host-based firewall is configured correctly. For more information, see Port Requirements in the Linux Domain Identity, Authentication, and Policy Guide . You must obtain root user access on the IdM server. You must confirm whether Satellite Server or Capsule Server is configured to provide DNS service for your deployment. You must configure DNS, DHCP and TFTP services on the base operating system of either the Satellite or Capsule that is managing the DNS service for your deployment. You must create a backup of the answer file. You can use the backup to restore the answer file to its original state if it becomes corrupted. For more information, see Configuring Satellite Server . Procedure To configure dynamic DNS update with TSIG authentication, complete the following steps: Enabling external updates to the DNS zone in the IdM server On the IdM Server, add the following to the top of the /etc/named.conf file: Reload the named service to make the changes take effect: In the IdM web UI, navigate to Network Services > DNS > DNS Zones and click the name of the zone. In the Settings tab, apply the following changes: Add the following in the BIND update policy box: Set Dynamic update to True . Click Update to save the changes. Copy the /etc/rndc.key file from the IdM server to the base operating system of your Satellite Server. Enter the following command: To set the correct ownership, permissions, and SELinux context for the rndc.key file, enter the following command: Assign the foreman-proxy user to the named group manually. Normally, satellite-installer ensures that the foreman-proxy user belongs to the named UNIX group, however, in this scenario Satellite does not manage users and groups, therefore you need to assign the foreman-proxy user to the named group manually. 
On Satellite Server, enter the following satellite-installer command to configure Satellite to use the external DNS server: Testing external updates to the DNS zone in the IdM server Ensure that the key in the /etc/rndc.key file on Satellite Server is the same key file that is used on the IdM server: On Satellite Server, create a test DNS entry for a host. For example, host test.example.com with an A record of 192.168.25.20 on the IdM server at 192.168.25.1 . On Satellite Server, test the DNS entry: To view the entry in the IdM web UI, navigate to Network Services > DNS > DNS Zones . Click the name of the zone and search for the host by name. If resolved successfully, remove the test DNS entry: Confirm that the DNS entry was removed: The above nslookup command fails and returns the SERVFAIL error message if the record was successfully deleted. 6.4.3. Reverting to internal DNS service You can revert to using Satellite Server and Capsule Server as your DNS providers. You can use a backup of the answer file that was created before configuring external DNS, or you can create a backup of the answer file. For more information about answer files, see Configuring Satellite Server . Procedure On the Satellite or Capsule Server that you want to configure to manage DNS service for the domain, complete the following steps: Configuring Satellite or Capsule as a DNS server If you have created a backup of the answer file before configuring external DNS, restore the answer file and then enter the satellite-installer command: If you do not have a suitable backup of the answer file, create a backup of the answer file now. To configure Satellite or Capsule as DNS server without using an answer file, enter the following satellite-installer command on Satellite or Capsule: For more information, see Configuring DNS, DHCP, and TFTP on Capsule Server . After you run the satellite-installer command to make any changes to your Capsule configuration, you must update the configuration of each affected Capsule in the Satellite web UI. Updating the configuration in the Satellite web UI In the Satellite web UI, navigate to Infrastructure > Capsules . For each Capsule that you want to update, from the Actions list, select Refresh . Configure the domain: In the Satellite web UI, navigate to Infrastructure > Domains and click the domain name that you want to configure. In the Domain tab, set DNS Capsule to the Capsule where the subnet is connected. Configure the subnet: In the Satellite web UI, navigate to Infrastructure > Subnets and select the subnet name. In the Subnet tab, set IPAM to DHCP or Internal DB . In the Domains tab, select the domain that you want to manage using Satellite or Capsule. In the Capsules tab, set Reverse DNS Capsule to the Capsule where the subnet is connected. Click Submit to save the changes. | [
"scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key",
"restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key",
"echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key",
"dnf install dhcp-server bind-utils",
"tsig-keygen -a hmac-md5 omapi_key",
"cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;",
"firewall-cmd --add-service dhcp",
"firewall-cmd --runtime-to-permanent",
"id -u foreman 993 id -g foreman 990",
"groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman",
"chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf",
"systemctl enable --now dhcpd",
"dnf install nfs-utils systemctl enable --now nfs-server",
"mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp",
"/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0",
"mount -a",
"/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)",
"exportfs -rva",
"firewall-cmd --add-port=7911/tcp",
"firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public",
"firewall-cmd --runtime-to-permanent",
"satellite-maintain packages install nfs-utils",
"mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd",
"chown -R foreman-proxy /mnt/nfs",
"showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN",
"DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0",
"mount -a",
"su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit",
"satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911",
"mkdir -p /mnt/nfs/var/lib/tftpboot",
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0",
"mount -a",
"satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true",
"satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN",
"kinit idm_user",
"ipa service-add capsule/satellite.example.com",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"kinit admin",
"rm /etc/foreman-proxy/dns.keytab",
"ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab",
"chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab",
"kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]",
"grant capsule\\047 [email protected] wildcard * ANY;",
"grant capsule\\047 [email protected] wildcard * ANY;",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true",
"######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################",
"systemctl reload named",
"grant \"rndc-key\" zonesub ANY;",
"scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key",
"restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key",
"usermod -a -G named foreman-proxy",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key",
"key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };",
"echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1 Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20",
"echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"satellite-installer",
"satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_connected_network_environment/configuring-external-services |
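The following is a minimal smoke test, not part of the documented procedure, that you can adapt on Satellite Server after configuring the external services; dns.example.com, dhcp.example.com, tftp.example.com, the smoketest record, and the grub2/grub.cfg path are placeholder assumptions that you must replace with values from your own environment:
# Confirm that dynamic DNS updates work with the copied rndc.key
echo -e "server dns.example.com\n update add smoketest.example.com 3600 IN A 192.168.38.50\n send\n" | nsupdate -k /etc/foreman-proxy/rndc.key
nslookup smoketest.example.com dns.example.com
# Confirm that the DHCP OMAPI port is open and that foreman-proxy can read the NFS-mounted files
nc -z dhcp.example.com 7911 && echo "OMAPI port reachable"
su foreman-proxy -s /bin/bash -c 'head -n 5 /mnt/nfs/etc/dhcp/dhcpd.conf /mnt/nfs/var/lib/dhcpd/dhcpd.leases'
# Confirm that a file can be fetched from the external TFTP server (requires curl built with TFTP support)
curl -s -o /tmp/grub.cfg tftp://tftp.example.com/grub2/grub.cfg && echo "TFTP fetch OK"
Each command exercises one of the services configured in this chapter: the DNS update path with the rndc key, the DHCP OMAPI port and the NFS mounts used by foreman-proxy, and the TFTP download path.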
Chapter 83. trust | Chapter 83. trust This chapter describes the commands under the trust command. 83.1. trust create Create new trust Usage: Table 83.1. Positional arguments Value Summary <trustor-user> User that is delegating authorization (name or id) <trustee-user> User that is assuming authorization (name or id) Table 83.2. Command arguments Value Summary -h, --help Show this help message and exit --project <project> Project being delegated (name or id) (required) --role <role> Roles to authorize (name or id) (repeat option to set multiple values, required) --impersonate Tokens generated from the trust will represent <trustor> (defaults to False) --expiration <expiration> Sets an expiration date for the trust (format of yyyy- mm-ddTHH:MM:SS) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --trustor-domain <trustor-domain> Domain that contains <trustor> (name or id) --trustee-domain <trustee-domain> Domain that contains <trustee> (name or id) Table 83.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 83.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 83.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 83.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 83.2. trust delete Delete trust(s) Usage: Table 83.7. Positional arguments Value Summary <trust> Trust(s) to delete Table 83.8. Command arguments Value Summary -h, --help Show this help message and exit 83.3. trust list List trusts Usage: Table 83.9. Command arguments Value Summary -h, --help Show this help message and exit Table 83.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 83.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 83.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 83.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 83.4. trust show Display trust details Usage: Table 83.14. 
Positional arguments Value Summary <trust> Trust to display Table 83.15. Command arguments Value Summary -h, --help Show this help message and exit Table 83.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 83.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 83.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 83.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack trust create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --project <project> --role <role> [--impersonate] [--expiration <expiration>] [--project-domain <project-domain>] [--trustor-domain <trustor-domain>] [--trustee-domain <trustee-domain>] <trustor-user> <trustee-user>",
"openstack trust delete [-h] <trust> [<trust> ...]",
"openstack trust list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending]",
"openstack trust show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <trust>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/trust |
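A short end-to-end sketch of the commands documented above; the project name demo-project, the role member, the users alice and bob, and the expiration date are placeholder values:
# alice (trustor) delegates the member role on demo-project to bob (trustee), with impersonation
openstack trust create --project demo-project --role member --impersonate --expiration 2026-01-01T00:00:00 alice bob
# List existing trusts, then inspect or remove one by its ID
openstack trust list -f json
openstack trust show <trust-id>
openstack trust delete <trust-id>
Because --impersonate is set, tokens generated from this trust represent the trustor alice; without it, they represent the trustee bob acting with the delegated role.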
Chapter 11. Consistent Network Device Naming | Chapter 11. Consistent Network Device Naming Red Hat Enterprise Linux provides methods for consistent and predictable network device naming for network interfaces. These features change the name of network interfaces on a system in order to make locating and differentiating the interfaces easier. Traditionally, network interfaces in Linux are enumerated as eth[0123...] , but these names do not necessarily correspond to actual labels on the chassis. Modern server platforms with multiple network adapters can encounter non-deterministic and counter-intuitive naming of these interfaces. This affects both network adapters embedded on the motherboard ( LAN-on-Motherboard , or LOM ) and add-in (single and multiport) adapters. In Red Hat Enterprise Linux, udev supports a number of different naming schemes. The default is to assign fixed names based on firmware, topology, and location information. This has the advantage that the names are fully automatic and fully predictable, that they stay fixed even if hardware is added or removed (no re-enumeration takes place), and that broken hardware can be replaced seamlessly. The disadvantage is that they are sometimes harder to read than the eth or wlan names traditionally used. For example: enp5s0 . Warning Red Hat does not support systems with consistent device naming disabled. For further details, see Is it safe to set net.ifnames=0? 11.1. Naming Schemes Hierarchy By default, systemd will name interfaces using the following policy to apply the supported naming schemes: Scheme 1: Names incorporating Firmware or BIOS provided index numbers for on-board devices (example: eno1 ) are applied if that information from the firmware or BIOS is applicable and available, else falling back to scheme 2. Scheme 2: Names incorporating Firmware or BIOS provided PCI Express hotplug slot index numbers (example: ens1 ) are applied if that information from the firmware or BIOS is applicable and available, else falling back to scheme 3. Scheme 3: Names incorporating the physical location of the connector of the hardware (example: enp2s0 ) are applied if applicable, else falling directly back to scheme 5 in all other cases. Scheme 4: Names incorporating the interface's MAC address (example: enx78e7d1ea46da ) are not used by default, but are available if the user chooses. Scheme 5: The traditional unpredictable kernel naming scheme is used if all other methods fail (example: eth0 ). This policy, the procedure outlined above, is the default. If the system has biosdevname enabled, it will be used. Note that enabling biosdevname requires passing biosdevname=1 as a kernel command-line parameter, except in the case of a Dell system, where biosdevname will be used by default as long as it is installed. If the user has added udev rules which change the name of the kernel devices, those rules will take precedence. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/ch-Consistent_Network_Device_Naming |
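To see how these schemes apply on a given system, one possible check is the following; the interface name enp5s0 is only an example, and udevadm test-builtin is a debugging aid rather than a configuration step:
# Show the names currently assigned to the interfaces
ip link show
# Print the candidate udev name properties for one interface
udevadm test-builtin net_id /sys/class/net/enp5s0 2>/dev/null
The properties ID_NET_NAME_ONBOARD, ID_NET_NAME_SLOT, ID_NET_NAME_PATH, and ID_NET_NAME_MAC printed by the builtin correspond to schemes 1 through 4; udev applies the first one that is available, falling back to the kernel name (scheme 5) if none is set. On Dell systems with biosdevname installed, the biosdevname-provided names are used instead, as described above.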
Chapter 7. Installation configuration parameters for Azure | Chapter 7. Installation configuration parameters for Azure Before you deploy an OpenShift Container Platform cluster on Microsoft Azure, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 7.1. Available installation configuration parameters for Azure The following tables specify the required, optional, and Azure-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 7.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. 
An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Configures the IPv4 join subnet that is used internally by ovn-kubernetes . This subnet must not overlap with any other subnet that OpenShift Container Platform is using, including the node network. The size of the subnet must be larger than the number of nodes. You cannot change the value after installation. An IP network block in CIDR notation. The default value is 100.64.0.0/16 . 7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. 
To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). 
If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal , External , or Mixed . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . To deploy a cluster where the API and the ingress server have different publishing strategies, set publish to Mixed and use the operatorPublishingStrategy parameter. The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Important If you set the credentialsMode parameter to Manual, this enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 7.1.4. Additional Azure configuration parameters Additional Azure configuration parameters are described in the following table. Note By default, if you specify availability zones in the install-config.yaml file, the installation program distributes the control plane machines and the compute machines across these availability zones within a region . To ensure high availability for your cluster, select a region with at least three availability zones. If your region contains fewer than three availability zones, the installation program places more than one control plane machine in the available zones. Table 7.4. Additional Azure parameters Parameter Description Values Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. standard_LRS , premium_LRS , or standardSSD_LRS . The default is premium_LRS .
Enables the use of Azure ultra disks for persistent storage on compute nodes. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot compute machines. You can override the default behavior by using a custom RHCOS image that is available from the Azure Marketplace. The installation program uses this image for compute machines only. String. The name of the image publisher. The name of Azure Marketplace offer that is associated with the custom RHCOS image. If you use compute.platform.azure.osImage.publisher , this field is required. String. The name of the image offer. An instance of the Azure Marketplace offer. If you use compute.platform.azure.osImage.publisher , this field is required. String. The SKU of the image offer. The version number of the image SKU. If you use compute.platform.azure.osImage.publisher , this field is required. String. The version of the image to use. Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If instance type of compute machines support Accelerated networking, by default, the installer enables Accelerated networking, otherwise the default networking type is Basic . Accelerated or Basic . Defines the Azure instance type for compute machines. String The availability zones where the installation program creates compute machines. String list Enables confidential VMs or trusted launch for compute nodes. This option is not enabled by default. ConfidentialVM or TrustedLaunch . Enables secure boot on compute nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables the virtualized Trusted Platform Module (vTPM) feature on compute nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables secure boot on compute nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the vTPM feature on compute nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the encryption of the virtual machine guest state for compute nodes. This parameter can only be used if you use Confidential VMs. VMGuestStateOnly is the only supported value. Enables confidential VMs or trusted launch for control plane nodes. This option is not enabled by default. ConfidentialVM or TrustedLaunch . Enables secure boot on control plane nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . 
Enables the vTPM feature on control plane nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables secure boot on control plane nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the vTPM feature on control plane nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the encryption of the virtual machine guest state for control plane nodes. This parameter can only be used if you use Confidential VMs. VMGuestStateOnly is the only supported value. Defines the Azure instance type for control plane machines. String The availability zones where the installation program creates control plane machines. String list Enables confidential VMs or trusted launch for all nodes. This option is not enabled by default. ConfidentialVM or TrustedLaunch . Enables secure boot on all nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables the virtualized Trusted Platform Module (vTPM) feature on all nodes if you are using confidential VMs. Enabled or Disabled . The default is Disabled . Enables secure boot on all nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the vTPM feature on all nodes if you are using trusted launch. Enabled or Disabled . The default is Disabled . Enables the encryption of the virtual machine guest state for all nodes. This parameter can only be used if you use Confidential VMs. VMGuestStateOnly is the only supported value. Enables host-level encryption for compute machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached, and un-managed disks on the VM host. This parameter is not a prerequisite for user-managed server-side encryption. true or false . The default is false . The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example, production_disk_encryption_set . The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. To avoid deleting your Azure encryption key when the cluster is destroyed, this resource group must be different from the resource group where you install the cluster. This value is necessary only if you intend to install the cluster with user-managed disk encryption. String, for example, production_encryption_resource_group . Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt compute machines. String, in the format 00000000-0000-0000-0000-000000000000 . The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 128 . Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane and compute machines. You can override the default behavior by using a custom RHCOS image that is available from the Azure Marketplace. The installation program uses this image for both types of machines. String. The name of the image publisher. The name of Azure Marketplace offer that is associated with the custom RHCOS image. If you use platform.azure.defaultMachinePlatform.osImage.publisher , this field is required. String. The name of the image offer. 
An instance of the Azure Marketplace offer. If you use platform.azure.defaultMachinePlatform.osImage.publisher , this field is required. String. The SKU of the image offer. The version number of the image SKU. If you use platform.azure.defaultMachinePlatform.osImage.publisher , this field is required. String. The version of the image to use. The Azure instance type for control plane and compute machines. The Azure instance type. The availability zones where the installation program creates compute and control plane machines. String list. Enables host-level encryption for control plane machines. You can enable this encryption alongside user-managed server-side encryption. This feature encrypts temporary, ephemeral, cached and un-managed disks on the VM host. This is not a prerequisite for user-managed server-side encryption. true or false . The default is false . The name of the Azure resource group that contains the disk encryption set from the installation prerequisites. This resource group should be different from the resource group where you install the cluster to avoid deleting your Azure encryption key when the cluster is destroyed. This value is only necessary if you intend to install the cluster with user-managed disk encryption. String, for example production_encryption_resource_group . The name of the disk encryption set that contains the encryption key from the installation prerequisites. String, for example production_disk_encryption_set . Defines the Azure subscription of the disk encryption set where the disk encryption set resides. This secondary disk encryption set is used to encrypt control plane machines. String, in the format 00000000-0000-0000-0000-000000000000 . The Azure disk size for the VM. Integer that represents the size of the disk in GB. The default is 1024 . Defines the type of disk. premium_LRS or standardSSD_LRS . The default is premium_LRS . Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by using a custom RHCOS image that is available from the Azure Marketplace. The installation program uses this image for control plane machines only. String. The name of the image publisher. The name of the Azure Marketplace offer that is associated with the custom RHCOS image. If you use controlPlane.platform.azure.osImage.publisher , this field is required. String. The name of the image offer. An instance of the Azure Marketplace offer. If you use controlPlane.platform.azure.osImage.publisher , this field is required. String. The SKU of the image offer. The version number of the image SKU. If you use controlPlane.platform.azure.osImage.publisher , this field is required. String. The version of the image to use. Enables the use of Azure ultra disks for persistent storage on control plane machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. If the instance type of the control plane machines supports accelerated networking, the installer enables accelerated networking by default; otherwise, the default networking type is Basic . Accelerated or Basic . The name of the resource group that contains the DNS zone for your base domain. String, for example production_cluster .
The name of an already existing resource group to install your cluster to. This resource group must be empty and only used for this specific cluster; the cluster components assume ownership of all resources in the resource group. If you limit the service principal scope of the installation program to this resource group, you must ensure all other resources used by the installation program in your environment have the necessary permissions, such as the public DNS zone and virtual network. Destroying the cluster by using the installation program deletes this resource group. String, for example existing_resource_group . The outbound routing strategy used to connect your cluster to the internet. If you are using user-defined routing, you must have pre-existing networking available where the outbound routing has already been configured prior to installing a cluster. The installation program is not responsible for configuring user-defined routing. If you specify the NatGateway routing strategy, the installation program will only create one NAT gateway. If you specify the NatGateway routing strategy, your account must have the Microsoft.Network/natGateways/read and Microsoft.Network/natGateways/write permissions. Important NatGateway is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . LoadBalancer , UserDefinedRouting , or NatGateway . The default is LoadBalancer . The name of the Azure region that hosts your cluster. Any valid region name, such as centralus . List of availability zones to place machines in. For high availability, specify at least two zones. List of zones, for example ["1", "2", "3"] . Specifies the name of the key vault that contains the encryption key that is used to encrypt Azure storage. String. Specifies the name of the user-managed encryption key that is used to encrypt Azure storage. String. Specifies the name of the resource group that contains the key vault and managed identity. String. Specifies the subscription ID that is associated with the key vault. String, in the format 00000000-0000-0000-0000-000000000000 . Specifies the name of the user-assigned managed identity that resides in the resource group with the key vault and has access to the user-managed key. String. Enables the use of Azure ultra disks for persistent storage on control plane and compute machines. This requires that your Azure region and zone have ultra disks available. Enabled , Disabled . The default is Disabled . The name of the resource group that contains the existing VNet that you want to deploy your cluster to. This name cannot be the same as the platform.azure.baseDomainResourceGroupName . String. The name of the existing VNet that you want to deploy your cluster to. String. The name of the existing subnet in your VNet that you want to deploy your control plane machines to. Valid CIDR, for example 10.0.0.0/16 . The name of the existing subnet in your VNet that you want to deploy your compute machines to. Valid CIDR, for example 10.0.0.0/16 . 
The name of the Azure cloud environment that is used to configure the Azure SDK with the appropriate Azure API endpoints. If empty, the default value AzurePublicCloud is used. Any valid cloud environment, such as AzurePublicCloud or AzureUSGovernmentCloud . Enables accelerated networking. Accelerated networking enables single root I/O virtualization (SR-IOV) to a VM, improving its networking performance. Accelerated or Basic . If the instance type of the control plane and compute machines supports accelerated networking, the installer enables accelerated networking by default; otherwise, the default networking type is Basic . Determines whether the load balancers that service the API are public or private. Set this parameter to Internal to prevent the API server from being accessible outside of your VNet. Set this parameter to External to make the API server accessible outside of your VNet. If you set this parameter, you must set the publish parameter to Mixed . External or Internal . The default value is External . Determines whether the DNS resources that the cluster creates for ingress traffic are publicly visible. Set this parameter to Internal to prevent the ingress VIP from being publicly accessible. Set this parameter to External to make the ingress VIP publicly accessible. If you set this parameter, you must set the publish parameter to Mixed . External or Internal . The default value is External . Note You cannot customize Azure Availability Zones or Use tags to organize your Azure resources with an Azure cluster.
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"networking: ovnKubernetesConfig: ipv4: internalJoinSubnet:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: azure: encryptionAtHost:",
"compute: platform: azure: osDisk: diskSizeGB:",
"compute: platform: azure: osDisk: diskType:",
"compute: platform: azure: ultraSSDCapability:",
"compute: platform: azure: osDisk: diskEncryptionSet: resourceGroup:",
"compute: platform: azure: osDisk: diskEncryptionSet: name:",
"compute: platform: azure: osDisk: diskEncryptionSet: subscriptionId:",
"compute: platform: azure: osImage: publisher:",
"compute: platform: azure: osImage: offer:",
"compute: platform: azure: osImage: sku:",
"compute: platform: azure: osImage: version:",
"compute: platform: azure: vmNetworkingType:",
"compute: platform: azure: type:",
"compute: platform: azure: zones:",
"compute: platform: azure: settings: securityType:",
"compute: platform: azure: settings: confidentialVM: uefiSettings: secureBoot:",
"compute: platform: azure: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"compute: platform: azure: settings: trustedLaunch: uefiSettings: secureBoot:",
"compute: platform: azure: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"compute: platform: azure: osDisk: securityProfile: securityEncryptionType:",
"controlPlane: platform: azure: settings: securityType:",
"controlPlane: platform: azure: settings: confidentialVM: uefiSettings: secureBoot:",
"controlPlane: platform: azure: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"controlPlane: platform: azure: settings: trustedLaunch: uefiSettings: secureBoot:",
"controlPlane: platform: azure: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"controlPlane: platform: azure: osDisk: securityProfile: securityEncryptionType:",
"controlPlane: platform: azure: type:",
"controlPlane: platform: azure: zones:",
"platform: azure: defaultMachinePlatform: settings: securityType:",
"platform: azure: defaultMachinePlatform: settings: confidentialVM: uefiSettings: secureBoot:",
"platform: azure: defaultMachinePlatform: settings: confidentialVM: uefiSettings: virtualizedTrustedPlatformModule:",
"platform: azure: defaultMachinePlatform: settings: trustedLaunch: uefiSettings: secureBoot:",
"platform: azure: defaultMachinePlatform: settings: trustedLaunch: uefiSettings: virtualizedTrustedPlatformModule:",
"platform: azure: defaultMachinePlatform: osDisk: securityProfile: securityEncryptionType:",
"platform: azure: defaultMachinePlatform: encryptionAtHost:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: name:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: resourceGroup:",
"platform: azure: defaultMachinePlatform: osDisk: diskEncryptionSet: subscriptionId:",
"platform: azure: defaultMachinePlatform: osDisk: diskSizeGB:",
"platform: azure: defaultMachinePlatform: osDisk: diskType:",
"platform: azure: defaultMachinePlatform: osImage: publisher:",
"platform: azure: defaultMachinePlatform: osImage: offer:",
"platform: azure: defaultMachinePlatform: osImage: sku:",
"platform: azure: defaultMachinePlatform: osImage: version:",
"platform: azure: defaultMachinePlatform: type:",
"platform: azure: defaultMachinePlatform: zones:",
"controlPlane: platform: azure: encryptionAtHost:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: resourceGroup:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: name:",
"controlPlane: platform: azure: osDisk: diskEncryptionSet: subscriptionId:",
"controlPlane: platform: azure: osDisk: diskSizeGB:",
"controlPlane: platform: azure: osDisk: diskType:",
"controlPlane: platform: azure: osImage: publisher:",
"controlPlane: platform: azure: osImage: offer:",
"controlPlane: platform: azure: osImage: sku:",
"controlPlane: platform: azure: osImage: version:",
"controlPlane: platform: azure: ultraSSDCapability:",
"controlPlane: platform: azure: vmNetworkingType:",
"platform: azure: baseDomainResourceGroupName:",
"platform: azure: resourceGroupName:",
"platform: azure: outboundType:",
"platform: azure: region:",
"platform: azure: zone:",
"platform: azure: customerManagedKey: keyVault: name:",
"platform: azure: customerManagedKey: keyVault: keyName:",
"platform: azure: customerManagedKey: keyVault: resourceGroup:",
"platform: azure: customerManagedKey: keyVault: subscriptionId:",
"platform: azure: customerManagedKey: userAssignedIdentityKey:",
"platform: azure: defaultMachinePlatform: ultraSSDCapability:",
"platform: azure: networkResourceGroupName:",
"platform: azure: virtualNetwork:",
"platform: azure: controlPlaneSubnet:",
"platform: azure: computeSubnet:",
"platform: azure: cloudName:",
"platform: azure: defaultMachinePlatform: vmNetworkingType:",
"operatorPublishingStrategy: apiserver:",
"operatorPublishingStrategy: ingress:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_azure/installation-config-parameters-azure |
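The parameters described in the preceding entry are set in the install-config.yaml file that the installation program consumes. The following is a minimal, illustrative sketch of how a few of them fit together; the base domain, cluster name, instance types, and disk encryption set names are placeholder values rather than defaults, and the pull secret and SSH key are elided.

```yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: my-cluster
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 3
  platform:
    azure:
      type: Standard_D4s_v3                 # illustrative instance type
      ultraSSDCapability: Enabled
      vmNetworkingType: Accelerated
      osDisk:
        diskSizeGB: 128
        diskType: premium_LRS
        diskEncryptionSet:                  # only needed for user-managed disk encryption
          resourceGroup: production_encryption_resource_group
          name: production_disk_encryption_set
          subscriptionId: 00000000-0000-0000-0000-000000000000
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
  platform:
    azure:
      type: Standard_D8s_v3                 # illustrative instance type
      osDisk:
        diskSizeGB: 1024
        diskType: premium_LRS
platform:
  azure:
    baseDomainResourceGroupName: production_cluster
    region: centralus
    outboundType: LoadBalancer
    cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}'
sshKey: ssh-ed25519 AAAA...
```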
Chapter 2. Installing Capsule Server | Chapter 2. Installing Capsule Server Before you install Capsule Server, you must ensure that your environment meets the requirements for installation. For more information, see Preparing your Environment for Installation . 2.1. Registering to Satellite Server Use this procedure to register the base operating system on which you want to install Capsule Server to Satellite Server. Red Hat subscription manifest prerequisites On Satellite Server, a manifest must be installed and it must contain the appropriate repositories for the organization you want Capsule to belong to. The manifest must contain repositories for the base operating system on which you want to install Capsule, as well as any clients that you want to connect to Capsule. The repositories must be synchronized. For more information on manifests and repositories, see Managing Red Hat Subscriptions in Managing content . Proxy and network prerequisites The Satellite Server base operating system must be able to resolve the host name of the Capsule base operating system and vice versa. Ensure HTTPS connection using client certificate authentication is possible between Capsule Server and Satellite Server. HTTP proxies between Capsule Server and Satellite Server are not supported. You must configure the host and network-based firewalls accordingly. For more information, see Port and firewall requirements in Installing Capsule Server . You can register hosts with Satellite using the host registration feature in the Satellite web UI, Hammer CLI, or the Satellite API. For more information, see Registering hosts and setting up host integration in Managing hosts . Procedure In the Satellite web UI, navigate to Hosts > Register Host . From the Activation Keys list, select the activation keys to assign to your host. Click Generate to create the registration command. Click on the files icon to copy the command to your clipboard. Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. CLI procedure Generate the host registration command using the Hammer CLI: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. API procedure Generate the host registration command using the Satellite API: If your hosts do not trust the SSL certificate of Satellite Server, you can disable SSL validation by adding the --insecure flag to the registration command. Use an activation key to simplify specifying the environments. For more information, see Managing Activation Keys in Managing content . To enter a password as a command line argument, use username:password syntax. Keep in mind this can save the password in the shell history. Alternatively, you can use a temporary personal access token instead of a password. To generate a token in the Satellite web UI, navigate to My Account > Personal Access Tokens . Connect to your host using SSH and run the registration command. Check the /etc/yum.repos.d/redhat.repo file and ensure that the appropriate repositories have been enabled. 2.2. 
Configuring repositories Prerequisite If you are installing Capsule Server as a virtual machine hosted on Red Hat Virtualization, you must also enable the Red Hat Common repository and then install Red Hat Virtualization guest agents and drivers. For more information, see Installing the Guest Agents and Drivers on Red Hat Enterprise Linux in the Virtual Machine Management Guide . Procedure Select the operating system and version you are installing on: Red Hat Enterprise Linux 9 Red Hat Enterprise Linux 8 2.2.1. Red Hat Enterprise Linux 9 Disable all repositories: Enable the following repositories: Verification Verify that the required repositories are enabled: 2.2.2. Red Hat Enterprise Linux 8 Disable all repositories: Enable the following repositories: Enable the module: Verification Verify that the required repositories are enabled: Additional Resources If there is any warning about conflicts with Ruby or PostgreSQL while enabling the satellite-capsule:el8 module, see Troubleshooting DNF modules . For more information about modules and lifecycle streams on Red Hat Enterprise Linux 8, see Red Hat Enterprise Linux 8 Application Streams Lifecycle . 2.3. Optional: Using fapolicyd on Capsule Server By enabling fapolicyd on your Capsule Server, you can provide an additional layer of security by monitoring and controlling access to files and directories. The fapolicyd daemon uses the RPM database as a repository of trusted binaries and scripts. You can turn fapolicyd on or off on your Satellite Server or Capsule Server at any point. 2.3.1. Installing fapolicyd on Capsule Server You can install fapolicyd along with a new Capsule Server, or you can install it on an existing Capsule Server. If you install fapolicyd along with a new Capsule Server, the installation process detects fapolicyd on your Red Hat Enterprise Linux host and deploys the Capsule Server rules automatically. Prerequisites Ensure your host has access to the BaseOS repositories of Red Hat Enterprise Linux. Procedure For a new installation, install fapolicyd: For an existing installation, install fapolicyd using satellite-maintain packages install: Start the fapolicyd service: Verification Verify that the fapolicyd service is running correctly: New Satellite Server or Capsule Server installations For a new Satellite Server or Capsule Server installation, follow the standard installation procedures after installing and enabling fapolicyd on your Red Hat Enterprise Linux host. Additional resources For more information on fapolicyd, see Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 9 Security hardening or Blocking and allowing applications using fapolicyd in Red Hat Enterprise Linux 8 Security hardening . 2.4. Installing Capsule Server packages Before installing Capsule Server packages, you must update all packages that are installed on the base operating system. Procedure To install Capsule Server, complete the following steps: Update all packages: Install the Capsule Server packages: 2.5. Configuring Capsule Server with SSL certificates Red Hat Satellite uses SSL certificates to enable encrypted communications between Satellite Server, external Capsule Servers, and all hosts. Depending on the requirements of your organization, you must configure your Capsule Server with a default or custom certificate. If you use a default SSL certificate, you must also configure each external Capsule Server with a distinct default SSL certificate.
For more information, see Section 2.5.1, "Configuring Capsule Server with a default SSL certificate" . If you use a custom SSL certificate, you must also configure each external Capsule Server with a distinct custom SSL certificate. For more information, see Section 2.5.2, "Configuring Capsule Server with a custom SSL certificate" . 2.5.1. Configuring Capsule Server with a default SSL certificate Use this section to configure Capsule Server with an SSL certificate that is signed by Satellite Server default Certificate Authority (CA). Prerequisites Capsule Server is registered to Satellite Server. For more information, see Section 2.1, "Registering to Satellite Server" . Capsule Server packages are installed. For more information, see Section 2.4, "Installing Capsule Server packages" . Procedure On Satellite Server, to store all the source certificate files for your Capsule Server, create a directory that is accessible only to the root user, for example /root/capsule_cert : On Satellite Server, generate the /root/capsule_cert/ capsule.example.com -certs.tar certificate archive for your Capsule Server: Retain a copy of the satellite-installer command that the capsule-certs-generate command returns for deploying the certificate to your Capsule Server. Example output of capsule-certs-generate On Satellite Server, copy the certificate archive file to your Capsule Server: On Capsule Server, to deploy the certificate, enter the satellite-installer command that the capsule-certs-generate command returns. When network connections or ports to Satellite are not yet open, you can set the --foreman-proxy-register-in-foreman option to false to prevent Capsule from attempting to connect to Satellite and reporting errors. Run the installer again with this option set to true when the network and firewalls are correctly configured. Important Do not delete the certificate archive file after you deploy the certificate. It is required, for example, when upgrading Capsule Server. 2.5.2. Configuring Capsule Server with a custom SSL certificate If you configure Satellite Server to use a custom SSL certificate, you must also configure each of your external Capsule Servers with a distinct custom SSL certificate. To configure your Capsule Server with a custom certificate, complete the following procedures on each Capsule Server: Section 2.5.2.1, "Creating a custom SSL certificate for Capsule Server" Section 2.5.2.2, "Deploying a custom SSL certificate to Capsule Server" Section 2.5.2.3, "Deploying a custom SSL certificate to hosts" 2.5.2.1. Creating a custom SSL certificate for Capsule Server On Satellite Server, create a custom certificate for your Capsule Server. If you already have a custom SSL certificate for Capsule Server, skip this procedure. Procedure To store all the source certificate files, create a directory that is accessible only to the root user: Create a private key with which to sign the certificate signing request (CSR). Note that the private key must be unencrypted. If you use a password-protected private key, remove the private key password. If you already have a private key for this Capsule Server, skip this step. 
Create the /root/capsule_cert/openssl.cnf configuration file for the CSR and include the following content: Optional: If you want to add Distinguished Name (DN) details to the CSR, add the following information to the [ req_distinguished_name ] section: 1 Two letter code 2 Full name 3 Full name (example: New York) 4 Division responsible for the certificate (example: IT department) Generate CSR: 1 Path to the private key 2 Path to the configuration file 3 Path to the CSR to generate Send the certificate signing request to the certificate authority (CA). The same CA must sign certificates for Satellite Server and Capsule Server. When you submit the request, specify the lifespan of the certificate. The method for sending the certificate request varies, so consult the CA for the preferred method. In response to the request, you can expect to receive a CA bundle and a signed certificate, in separate files. 2.5.2.2. Deploying a custom SSL certificate to Capsule Server Use this procedure to configure your Capsule Server with a custom SSL certificate signed by a Certificate Authority. The satellite-installer command, which the capsule-certs-generate command returns, is unique to each Capsule Server. Do not use the same command on more than one Capsule Server. Prerequisites Satellite Server is configured with a custom certificate. For more information, see Configuring Satellite Server with a Custom SSL Certificate in Installing Satellite Server in a connected network environment . Capsule Server is registered to Satellite Server. For more information, see Section 2.1, "Registering to Satellite Server" . Capsule Server packages are installed. For more information, see Section 2.4, "Installing Capsule Server packages" . Procedure On your Satellite Server, generate a certificate bundle: 1 Path to Capsule Server certificate file that is signed by a Certificate Authority. 2 Path to the private key that was used to sign Capsule Server certificate. 3 Path to the Certificate Authority bundle. Retain a copy of the satellite-installer command that the capsule-certs-generate command returns for deploying the certificate to your Capsule Server. Example output of capsule-certs-generate On your Satellite Server, copy the certificate archive file to your Capsule Server: On your Capsule Server, to deploy the certificate, enter the satellite-installer command that the capsule-certs-generate command returns. If network connections or ports to Satellite are not yet open, you can set the --foreman-proxy-register-in-foreman option to false to prevent Capsule from attempting to connect to Satellite and reporting errors. Run the installer again with this option set to true when the network and firewalls are correctly configured. Important Do not delete the certificate archive file after you deploy the certificate. It is required, for example, when upgrading Capsule Server. 2.5.2.3. Deploying a custom SSL certificate to hosts After you configure Satellite to use a custom SSL certificate, you must deploy the certificate to hosts registered to Satellite. Procedure Update the SSL certificate on each host: 2.6. Resetting custom SSL certificate to default self-signed certificate on Capsule Server To reset the custom SSL certificate to default self-signed certificate on your Capsule Server, you must re-register your Capsule Server through Global Registration . For more information, see Registering hosts by using global registration in Managing hosts . 
Verification In the Satellite web UI, navigate to Infrastructure > Capsules and select any Capsule Server. On the Overview tab, click Refresh features . Additional Resources Reset custom SSL certificate to default self-signed certificate on Satellite Server in Installing Satellite Server in a connected network environment . Reset custom SSL certificate to default self-signed certificate on hosts in Managing hosts . 2.7. Assigning the correct organization and location to Capsule Server in the Satellite web UI After installing Capsule Server packages, if there is more than one organization or location, you must assign the correct organization and location to Capsule to make Capsule visible in the Satellite web UI. Note Assigning a Capsule to the same location as your Satellite Server with an embedded Capsule prevents Red Hat Insights from uploading the Insights inventory. To enable the inventory upload, synchronize SSH keys for both Capsules. Procedure Log into the Satellite web UI. From the Organization list in the upper-left of the screen, select Any Organization . From the Location list in the upper-left of the screen, select Any Location . In the Satellite web UI, navigate to Hosts > All Hosts and select Capsule Server. From the Select Actions list, select Assign Organization . From the Organization list, select the organization where you want to assign this Capsule. Click Fix Organization on Mismatch . Click Submit . Select Capsule Server. From the Select Actions list, select Assign Location . From the Location list, select the location where you want to assign this Capsule. Click Fix Location on Mismatch . Click Submit . In the Satellite web UI, navigate to Administer > Organizations and click the organization to which you have assigned Capsule. Click Capsules tab and ensure that Capsule Server is listed under the Selected items list, then click Submit . In the Satellite web UI, navigate to Administer > Locations and click the location to which you have assigned Capsule. Click Capsules tab and ensure that Capsule Server is listed under the Selected items list, then click Submit . Verification Optionally, you can verify if Capsule Server is correctly listed in the Satellite web UI. Select the organization from the Organization list. Select the location from the Location list. In the Satellite web UI, navigate to Hosts > All Hosts . In the Satellite web UI, navigate to Infrastructure > Capsules . | [
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \"",
"hammer host-registration generate-command --activation-keys \" My_Activation_Key \" --insecure true",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"] }}'",
"curl -X POST https://satellite.example.com/api/registration_commands --user \" My_User_Name \" -H 'Content-Type: application/json' -d '{ \"registration_command\": { \"activation_keys\": [\" My_Activation_Key_1 , My_Activation_Key_2 \"], \"insecure\": true }}'",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-9-for-x86_64-baseos-rpms --enable=rhel-9-for-x86_64-appstream-rpms --enable=satellite-capsule-6.16-for-rhel-9-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-9-x86_64-rpms",
"dnf repolist enabled",
"subscription-manager repos --disable \"*\"",
"subscription-manager repos --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms --enable=satellite-capsule-6.16-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.16-for-rhel-8-x86_64-rpms",
"dnf module enable satellite-capsule:el8",
"dnf repolist enabled",
"dnf install fapolicyd",
"satellite-maintain packages install fapolicyd",
"systemctl enable --now fapolicyd",
"systemctl status fapolicyd",
"dnf upgrade",
"dnf install satellite-capsule",
"mkdir /root/ capsule_cert",
"capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar /root/capsule_cert/ capsule.example.com -certs.tar",
"output omitted satellite-installer --scenario capsule --certs-tar-file \"/root/capsule_cert/ capsule.example.com -certs.tar\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \"https:// satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --foreman-proxy-oauth-consumer-key \" s97QxvUAgFNAQZNGg4F9zLq2biDsxM7f \" --foreman-proxy-oauth-consumer-secret \" 6bpzAdMpRAfYaVZtaepYetomgBVQ6ehY \"",
"scp /root/capsule_cert/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar",
"mkdir /root/capsule_cert",
"openssl genrsa -out /root/capsule_cert/capsule_cert_key.pem 4096",
"[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] commonName = capsule.example.com [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [ alt_names ] DNS.1 = capsule.example.com",
"[req_distinguished_name] CN = capsule.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4",
"openssl req -new -key /root/capsule_cert/capsule_cert_key.pem \\ 1 -config /root/capsule_cert/openssl.cnf \\ 2 -out /root/capsule_cert/capsule_cert_csr.pem 3",
"capsule-certs-generate --foreman-proxy-fqdn capsule.example.com --certs-tar ~/ capsule.example.com -certs.tar --server-cert /root/ capsule_cert/capsule_cert.pem \\ 1 --server-key /root/ capsule_cert/capsule_cert_key.pem \\ 2 --server-ca-cert /root/ capsule_cert/ca_cert_bundle.pem 3",
"output omitted satellite-installer --scenario capsule --certs-tar-file \"/root/ capsule.example.com -certs.tar\" --foreman-proxy-register-in-foreman \"true\" --foreman-proxy-foreman-base-url \"https:// satellite.example.com \" --foreman-proxy-trusted-hosts \" satellite.example.com \" --foreman-proxy-trusted-hosts \" capsule.example.com \" --foreman-proxy-oauth-consumer-key \" My_OAuth_Consumer_Key \" --foreman-proxy-oauth-consumer-secret \" My_OAuth_Consumer_Secret \"",
"scp ~/ capsule.example.com -certs.tar root@ capsule.example.com :/root/ capsule.example.com -certs.tar",
"dnf install http:// capsule.example.com /pub/katello-ca-consumer-latest.noarch.rpm"
] | https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/installing_capsule_server/installing-capsule-server |
Chapter 42. Json Deserialize Action | Chapter 42. Json Deserialize Action Deserialize payload to JSON 42.1. Configuration Options The json-deserialize-action Kamelet does not specify any configuration option. 42.2. Dependencies At runtime, the json-deserialize-action Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:core camel:jackson 42.3. Usage This section describes how you can use the json-deserialize-action . 42.3.1. Knative Action You can use the json-deserialize-action Kamelet as an intermediate step in a Knative binding. json-deserialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 42.3.1.1. Prerequisite Make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster that you are connected to. 42.3.1.2. Procedure for using the cluster CLI Save the json-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f json-deserialize-action-binding.yaml 42.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step json-deserialize-action channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 42.3.2. Kafka Action You can use the json-deserialize-action Kamelet as an intermediate step in a Kafka binding. json-deserialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 42.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure that "Red Hat Integration - Camel K" is installed in the OpenShift cluster that you are connected to. 42.3.2.2. Procedure for using the cluster CLI Save the json-deserialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f json-deserialize-action-binding.yaml 42.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step json-deserialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 42.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/json-deserialize-action.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f json-deserialize-action-binding.yaml",
"kamel bind timer-source?message=Hello --step json-deserialize-action channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-deserialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-deserialize-action sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f json-deserialize-action-binding.yaml",
"kamel bind timer-source?message=Hello --step json-deserialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/json-deserialize-action |
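Both bindings in the preceding entry route the deserialized payload on to a Channel or a KafkaTopic. As an additional sketch, the following binding chains json-deserialize-action with the log-sink Kamelet so that the deserialized payload is written to the integration log; it assumes that the log-sink Kamelet from the same catalog is available in your cluster, and the sample JSON message is illustrative only.

```yaml
apiVersion: camel.apache.org/v1alpha1
kind: KameletBinding
metadata:
  name: json-deserialize-to-log-binding
spec:
  source:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: timer-source
    properties:
      message: '{"id": 1, "status": "new"}'   # JSON string payload to deserialize
  steps:
    - ref:
        kind: Kamelet
        apiVersion: camel.apache.org/v1alpha1
        name: json-deserialize-action
  sink:
    ref:
      kind: Kamelet
      apiVersion: camel.apache.org/v1alpha1
      name: log-sink
```

As with the examples above, applying this file with oc apply -f creates the KameletBinding in the current namespace on the cluster.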
Chapter 1. Overview | Chapter 1. Overview As part of a Red Hat Ansible Automation Platform subscription, customers have access to a supported version of automation hub. Red Hat provides a published product life cycle for automation hub so that customers and partners can properly plan, deploy, support, and maintain their private automation hubs that they use with Ansible Automation Platform. This life cycle encompasses stated time periods for each version of automation hub, starting with 4.2. The life cycle for each version of automation hub is split into production phases, each identifying the various levels of maintenance over a period of time from the initial release date. While multiple versions of automation hub will be supported at any one time, note that this life cycle applies to each specific version of automation hub (4.2, 4.3 and so on). Customers are expected to upgrade their automation hub environments to the most current supported version of the product in a timely fashion. Bug fixes and feature-work will target only the latest versions of the product, though some allowance may be given for high security risk items. Glossary Maintenance - Security and Application Bug fixes. Updates - Application Feature Enhancements Private Automation Hub - Refers to the customer-installable automation hub as provided via Subscription Manager. | null | https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/private_automation_hub_life_cycle/overview
Chapter 11. Scoping tokens | Chapter 11. Scoping tokens 11.1. About scoping tokens You can create scoped tokens to delegate some of your permissions to another user or service account. For example, a project administrator might want to delegate the power to create pods. A scoped token is a token that identifies as a given user but is limited to certain actions by its scope. Only a user with the dedicated-admin role can create scoped tokens. Scopes are evaluated by converting the set of scopes for a token into a set of PolicyRules . Then, the request is matched against those rules. The request attributes must match at least one of the scope rules to be passed to the "normal" authorizer for further authorization checks. 11.1.1. User scopes User scopes are focused on getting information about a given user. They are intent-based, so the rules are automatically created for you: user:full - Allows full read/write access to the API with all of the user's permissions. user:info - Allows read-only access to information about the user, such as name and groups. user:check-access - Allows access to self-localsubjectaccessreviews and self-subjectaccessreviews . These are the variables where you pass an empty user and groups in your request object. user:list-projects - Allows read-only access to list the projects the user has access to. 11.1.2. Role scope The role scope allows you to have the same level of access as a given role filtered by namespace. role:<cluster-role name>:<namespace or * for all> - Limits the scope to the rules specified by the cluster-role, but only in the specified namespace . Note Caveat: This prevents escalating access. Even if the role allows access to resources like secrets, rolebindings, and roles, this scope will deny access to those resources. This helps prevent unexpected escalations. Many people do not think of a role like edit as being an escalating role, but with access to a secret it is. role:<cluster-role name>:<namespace or * for all>:! - This is similar to the example above, except that including the bang causes this scope to allow escalating access. 11.2. Adding unauthenticated groups to cluster roles As a cluster administrator, you can add unauthenticated users to the following cluster roles in OpenShift Dedicated by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary. You can add unauthenticated users to the following cluster roles: system:scope-impersonation system:webhook system:oauth-token-deleter self-access-reviewer Important Always verify compliance with your organization's security standards when modifying unauthenticated access. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated Apply the configuration by running the following command: USD oc apply -f add-<cluster_role>.yaml | [
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated",
"oc apply -f add-<cluster_role>.yaml"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/authentication_and_authorization/tokens-scoping |
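The ClusterRoleBinding in the preceding entry uses <cluster_role> as a placeholder. As one filled-in sketch, the following binding grants the system:webhook cluster role, one of the roles listed above, to unauthenticated users; the binding name is arbitrary and chosen here only for illustration.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: webhook-access-unauthenticated   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:webhook
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:unauthenticated
```

Applying the file with oc apply -f, as shown above, creates the binding; verify the result against your organization's security standards before rolling it out.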
20.9. Block I/O tuning | 20.9. Block I/O tuning <domain> ... <blkiotune> <weight>800</weight> <device> <path>/dev/sda</path> <weight>1000</weight> </device> <device> <path>/dev/sdb</path> <weight>500</weight> </device> </blkiotune> ... </domain> Figure 20.11. Block I/O Tuning Although all are optional, the components of this section of the domain XML are as follows: Table 20.8. Block I/O tuning elements Element Description <blkiotune> This optional element provides the ability to tune Blkio cgroup tunable parameters for the domain. If this is omitted, it defaults to the OS provided defaults. <weight> This optional weight element is the overall I/O weight of the guest virtual machine. The value should be within the range 100 - 1000. <device> The domain may have multiple <device> elements that further tune the weights for each host physical machine block device in use by the domain. Note that multiple guest virtual machine disks can share a single host physical machine block device. In addition, as they are backed by files within the same host physical machine file system, this tuning parameter is at the global domain level, rather than being associated with each guest virtual machine disk device (contrast this to the <iotune> element which can be applied to a single <disk> ). Each device element has two mandatory sub-elements, <path> describing the absolute path of the device, and <weight> giving the relative weight of that device, which has an acceptable range of 100 - 1000. | [
"<domain> <blkiotune> <weight>800</weight> <device> <path>/dev/sda</path> <weight>1000</weight> </device> <device> <path>/dev/sdb</path> <weight>500</weight> </device> </blkiotune> </domain>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-libvirt-dom-xml-blk-io-tuning |
Chapter 8. Configuring a deployment | Chapter 8. Configuring a deployment Configure and manage an AMQ Streams deployment to your precise needs using AMQ Streams custom resources. AMQ Streams provides example custom resources with each release, allowing you to configure and create instances of supported Kafka components. Fine-tune your deployment by configuring custom resources to include additional features according to your specific requirements. For specific areas of configuration, namely metrics, logging, and external configuration for Kafka Connect connectors, you can also use ConfigMap resources. By using a ConfigMap resource to incorporate configuration, you centralize maintenance. You can also use configuration providers to load configuration from external sources, which we recommend for supplying the credentials for Kafka Connect connector configuration. Use custom resources to configure and create instances of the following components: Kafka clusters Kafka Connect clusters Kafka MirrorMaker Kafka Bridge Cruise Control You can also use custom resource configuration to manage your instances or modify your deployment to introduce additional features. This might include configuration that supports the following: (Preview) Specifying node pools Securing client access to Kafka brokers Accessing Kafka brokers from outside the cluster Creating topics Creating users (clients) Controlling feature gates Changing logging frequency Allocating resource limits and requests Introducing features, such as AMQ Streams Drain Cleaner, Cruise Control, or distributed tracing. The AMQ Streams Custom Resource API Reference describes the properties you can use in your configuration. Note Labels applied to a custom resource are also applied to the OpenShift resources making up its cluster. This provides a convenient mechanism for resources to be labeled as required. Applying changes to a custom resource configuration file You add configuration to a custom resource using spec properties. After adding the configuration, you can use oc to apply the changes to a custom resource configuration file: oc apply -f <kafka_configuration_file> 8.1. Using example configuration files Further enhance your deployment by incorporating additional supported configuration. Example configuration files are provided with the downloadable release artifacts from the AMQ Streams software downloads page . The example files include only the essential properties and values for custom resources by default. You can download and apply the examples using the oc command-line tool. The examples can serve as a starting point when building your own Kafka component configuration for deployment. Note If you installed AMQ Streams using the Operator, you can still download the example files and use them to upload configuration. The release artifacts include an examples directory that contains the configuration examples. Example configuration files provided with AMQ Streams 1 KafkaUser custom resource configuration, which is managed by the User Operator. 2 KafkaTopic custom resource configuration, which is managed by Topic Operator. 3 Authentication and authorization configuration for Kafka components. Includes example configuration for TLS and SCRAM-SHA-512 authentication. The Red Hat Single Sign-On example includes Kafka custom resource configuration and a Red Hat Single Sign-On realm specification. You can use the example to try Red Hat Single Sign-On authorization services. 
There is also an example with enabled oauth authentication and keycloak authorization metrics. 4 Kafka custom resource configuration for a deployment of Mirror Maker. Includes example configuration for replication policy and synchronization frequency. 5 Metrics configuration , including Prometheus installation and Grafana dashboard files. 6 Kafka custom resource configuration for a deployment of Kafka. Includes example configuration for an ephemeral or persistent single or multi-node deployment. 7 (Preview) KafkaNodePool configuration for Kafka nodes in a Kafka cluster. Includes example configuration for nodes in clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper. 8 Kafka custom resource with a deployment configuration for Cruise Control. Includes KafkaRebalance custom resources to generate optimization proposals from Cruise Control, with example configurations to use the default or user optimization goals. 9 KafkaConnect and KafkaConnector custom resource configuration for a deployment of Kafka Connect. Includes example configurations for a single or multi-node deployment. 10 KafkaBridge custom resource configuration for a deployment of Kafka Bridge. 8.2. Configuring Kafka Update the spec properties of the Kafka custom resource to configure your Kafka deployment. As well as configuring Kafka, you can add configuration for ZooKeeper and the AMQ Streams Operators. Common configuration properties, such as logging and healthchecks, are configured independently for each component. Configuration options that are particularly important include the following: Resource requests (CPU / Memory) JVM options for maximum and minimum memory allocation Listeners for connecting clients to Kafka brokers (and authentication of clients) Authentication Storage Rack awareness Metrics Cruise Control for cluster rebalancing For a deeper understanding of the Kafka cluster configuration options, refer to the AMQ Streams Custom Resource API Reference . Kafka versions The inter.broker.protocol.version property for the Kafka config must be the version supported by the specified Kafka version ( spec.kafka.version ). The property represents the version of Kafka protocol used in a Kafka cluster. From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. An update to the inter.broker.protocol.version is required when upgrading your Kafka version. For more information, see Upgrading Kafka . Managing TLS certificates When deploying Kafka, the Cluster Operator automatically sets up and renews TLS certificates to enable encryption and authentication within your cluster. If required, you can manually renew the cluster and clients CA certificates before their renewal period starts. You can also replace the keys used by the cluster and clients CA certificates. For more information, see Renewing CA certificates manually and Replacing private keys . 
Example Kafka custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 1 version: 3.5.0 2 logging: 3 type: inline loggers: kafka.root.logger.level: INFO resources: 4 requests: memory: 64Gi cpu: "8" limits: memory: 64Gi cpu: "12" readinessProbe: 5 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 6 -Xms: 8192m -Xmx: 8192m image: my-org/my-image:latest 7 listeners: 8 - name: plain 9 port: 9092 10 type: internal 11 tls: false 12 configuration: useServiceDnsDomain: true 13 - name: tls port: 9093 type: internal tls: true authentication: 14 type: tls - name: external 15 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: 16 secretName: my-secret certificate: my-certificate.crt key: my-key.key authorization: 17 type: simple config: 18 auto.create.topics.enable: "false" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: "3.5" storage: 19 type: persistent-claim 20 size: 10000Gi rack: 21 topologyKey: topology.kubernetes.io/zone metricsConfig: 22 type: jmxPrometheusExporter valueFrom: configMapKeyRef: 23 name: my-config-map key: my-key # ... zookeeper: 24 replicas: 3 25 logging: 26 type: inline loggers: zookeeper.root.logger: INFO resources: requests: memory: 8Gi cpu: "2" limits: memory: 8Gi cpu: "2" jvmOptions: -Xms: 4096m -Xmx: 4096m storage: type: persistent-claim size: 1000Gi metricsConfig: # ... entityOperator: 27 tlsSidecar: 28 resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: 29 type: inline loggers: rootLogger.level: INFO resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: 30 type: inline loggers: rootLogger.level: INFO resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" kafkaExporter: 31 # ... cruiseControl: 32 # ... 1 The number of replica nodes. 2 Kafka version, which can be changed to a supported version by following the upgrade procedure. 3 Kafka loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties key in the ConfigMap. For the Kafka kafka.root.logger.level logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 4 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 5 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 6 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka. 7 ADVANCED OPTION: Container image configuration, which is recommended only in special situations. 8 Listeners configure how clients connect to the Kafka cluster via bootstrap addresses. Listeners are configured as internal or external listeners for connection from inside or outside the OpenShift cluster. 9 Name to identify the listener. Must be unique within the Kafka cluster. 10 Port number used by the listener inside Kafka. The port number has to be unique within a given Kafka cluster. 
Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. 11 Listener type specified as internal or cluster-ip (to expose Kafka using per-broker ClusterIP services), or for external listeners, as route (OpenShift only), loadbalancer , nodeport or ingress (Kubernetes only). 12 Enables TLS encryption for each listener. Default is false . TLS encryption has to be enabled, by setting it to true , for route and ingress type listeners. 13 Defines whether the fully-qualified DNS names including the cluster service suffix (usually .cluster.local ) are assigned. 14 Listener authentication mechanism specified as mTLS, SCRAM-SHA-512, or token-based OAuth 2.0. 15 External listener configuration specifies how the Kafka cluster is exposed outside OpenShift, such as through a route , loadbalancer or nodeport . 16 Optional configuration for a Kafka listener certificate managed by an external CA (certificate authority). The brokerCertChainAndKey specifies a Secret that contains a server certificate and a private key. You can configure Kafka listener certificates on any listener with enabled TLS encryption. 17 Authorization enables simple, OAUTH 2.0, or OPA authorization on the Kafka broker. Simple authorization uses the AclAuthorizer Kafka plugin. 18 Broker configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams. 19 Storage size for persistent volumes may be increased and additional volumes may be added to JBOD storage. 20 Persistent storage has additional configuration options, such as a storage id and class for dynamic volume provisioning. 21 Rack awareness configuration to spread replicas across different racks, data centers, or availability zones. The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. 22 Prometheus metrics enabled. In this example, metrics are configured for the Prometheus JMX Exporter (the default metrics exporter). 23 Rules for exporting metrics in Prometheus format to a Grafana dashboard through the Prometheus JMX Exporter, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key . 24 ZooKeeper-specific configuration, which contains properties similar to the Kafka configuration. 25 The number of ZooKeeper nodes. ZooKeeper clusters or ensembles usually run with an odd number of nodes, typically three, five, or seven. The majority of nodes must be available in order to maintain an effective quorum. If the ZooKeeper cluster loses its quorum, it will stop responding to clients and the Kafka brokers will stop working. Having a stable and highly available ZooKeeper cluster is crucial for AMQ Streams. 26 ZooKeeper loggers and log levels. 27 Entity Operator configuration, which specifies the configuration for the Topic Operator and User Operator. 28 Entity Operator TLS sidecar configuration. Entity Operator uses the TLS sidecar for secure communication with ZooKeeper. 29 Specified Topic Operator loggers and log levels. This example uses inline logging. 30 Specified User Operator loggers and log levels. 
31 Kafka Exporter configuration. Kafka Exporter is an optional component for extracting metrics data from Kafka brokers, in particular consumer lag data. For Kafka Exporter to be able to work properly, consumer groups need to be in use. 32 Optional configuration for Cruise Control, which is used to rebalance the Kafka cluster. 8.2.1. Setting limits on brokers using the Kafka Static Quota plugin Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. You enable the plugin and set limits by configuring the Kafka resource. You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers. You can set byte-rate thresholds for producer and consumer bandwidth. The total limit is distributed across all clients accessing the broker. For example, you can set a byte-rate threshold of 40 MBps for producers. If two producers are running, they are each limited to a throughput of 20 MBps. Storage quotas throttle Kafka disk storage limits between a soft limit and hard limit. The limits apply to all available disk space. Producers are slowed gradually between the soft and hard limit. The limits prevent disks filling up too quickly and exceeding their capacity. Full disks can lead to issues that are hard to rectify. The hard limit is the maximum storage limit. Note For JBOD storage, the limit applies across all disks. If a broker is using two 1 TB disks and the quota is 1.1 TB, one disk might fill and the other disk will be almost empty. Prerequisites The Cluster Operator that manages the Kafka cluster is running. Procedure Add the plugin properties to the config of the Kafka resource. The plugin properties are shown in this example configuration. Example Kafka Static Quota plugin configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... config: client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce: 1000000 2 client.quota.callback.static.fetch: 1000000 3 client.quota.callback.static.storage.soft: 400000000000 4 client.quota.callback.static.storage.hard: 500000000000 5 client.quota.callback.static.storage.check-interval: 5 6 1 Loads the Kafka Static Quota plugin. 2 Sets the producer byte-rate threshold. 1 MBps in this example. 3 Sets the consumer byte-rate threshold. 1 MBps in this example. 4 Sets the lower soft limit for storage. 400 GB in this example. 5 Sets the higher hard limit for storage. 500 GB in this example. 6 Sets the interval in seconds between checks on storage. 5 seconds in this example. You can set this to 0 to disable the check. Update the resource. oc apply -f <kafka_configuration_file> Additional resources KafkaUserQuotas schema reference 8.2.2. Default ZooKeeper configuration values When deploying ZooKeeper with AMQ Streams, some of the default configuration set by AMQ Streams differs from the standard ZooKeeper defaults. This is because AMQ Streams sets a number of ZooKeeper properties with values that are optimized for running ZooKeeper within an OpenShift environment. The default configuration for key ZooKeeper properties in AMQ Streams is as follows: Table 8.1. Default ZooKeeper Properties in AMQ Streams Property Default value Description tickTime 2000 The length of a single tick in milliseconds, which determines the length of a session timeout. initLimit 5 The maximum number of ticks that a follower is allowed to fall behind the leader in a ZooKeeper cluster. 
syncLimit 2 The maximum number of ticks that a follower is allowed to be out of sync with the leader in a ZooKeeper cluster. autopurge.purgeInterval 1 Enables the autopurge feature and sets the time interval in hours for purging the server-side ZooKeeper transaction log. admin.enableServer false Flag to disable the ZooKeeper admin server. The admin server is not used by AMQ Streams. Important Modifying these default values as zookeeper.config in the Kafka custom resource may impact the behavior and performance of your ZooKeeper cluster. 8.3. (Preview) Configuring node pools Update the spec properties of the KafkaNodePool custom resource to configure a node pool deployment. Note The node pools feature is available as a preview. Node pools are not enabled by default, so you must enable the KafkaNodePools feature gate before using them. A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. Each pool has its own unique configuration, which includes mandatory settings for the number of replicas, roles, and storage allocation. Optionally, you can also specify values for the following properties: resources to specify memory and cpu requests and limits template to specify custom configuration for pods and other OpenShift resources jvmOptions to specify custom JVM configuration for heap size, runtime and other options The Kafka resource represents the configuration for all nodes in the Kafka cluster. The KafkaNodePool resource represents the configuration for nodes only in the node pool. If a configuration property is not specified in KafkaNodePool , it is inherited from the Kafka resource. Configuration specified in the KafkaNodePool resource takes precedence if set in both resources. For example, if both the node pool and Kafka configuration includes jvmOptions , the values specified in the node pool configuration are used. When -Xmx: 1024m is set in KafkaNodePool.spec.jvmOptions and -Xms: 512m is set in Kafka.spec.kafka.jvmOptions , the node uses the value from its node pool configuration. Properties from Kafka and KafkaNodePool schemas are not combined. To clarify, if KafkaNodePool.spec.template includes only podSet.metadata.labels , and Kafka.spec.kafka.template includes podSet.metadata.annotations and pod.metadata.labels , the template values from the Kafka configuration are ignored since there is a template value in the node pool configuration. Node pools can be used with Kafka clusters that operate in KRaft mode (using Kafka Raft metadata) or use ZooKeeper for cluster management. If you are using KRaft mode, you can specify roles for all nodes in the node pool to operate as brokers, controllers, or both. If you are using ZooKeeper, nodes must be set as brokers only. Important KRaft mode is not ready for production in Apache Kafka or in AMQ Streams. For a deeper understanding of the node pool configuration options, refer to the AMQ Streams Custom Resource API Reference . Note While the KafkaNodePools feature gate that enables node pools is in alpha phase, replica and storage configuration properties in the KafkaNodePool resource must also be present in the Kafka resource. The configuration in the Kafka resource is ignored when node pools are used. Similarly, ZooKeeper configuration properties must also be present in the Kafka resource when using KRaft mode. These properties are also ignored. 
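To illustrate the precedence rules described above, the following minimal sketch sets jvmOptions in both the Kafka resource and a node pool. The heap values are illustrative only; nodes in pool-a use the jvmOptions block from the node pool and ignore the jvmOptions set in the Kafka resource.

Illustrative sketch of jvmOptions precedence between Kafka and KafkaNodePool

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jvmOptions:
      -Xms: 512m   # ignored for nodes in pool-a, because the node pool defines its own jvmOptions
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: jbod
    volumes:
      - id: 0
        type: persistent-claim
        size: 100Gi
        deleteClaim: false
  jvmOptions:
    -Xmx: 1024m   # nodes in pool-a use this node pool configuration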
Example configuration for a node pool in a cluster using ZooKeeper apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a 1 labels: strimzi.io/cluster: my-cluster 2 spec: replicas: 3 3 roles: - broker 4 storage: 5 type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false resources: 6 requests: memory: 64Gi cpu: "8" limits: memory: 64Gi cpu: "12" 1 Unique name for the node pool. 2 The Kafka cluster the node pool belongs to. A node pool can only belong to a single cluster. 3 Number of replicas for the nodes. 4 Roles for the nodes in the node pool, which can only be broker when using Kafka with ZooKeeper. 5 Storage specification for the nodes. 6 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. Example configuration for a node pool in a cluster using KRaft mode apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kraft-dual-role labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: 1 - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false resources: requests: memory: 64Gi cpu: "8" limits: memory: 64Gi cpu: "12" 1 Roles for the nodes in the node pool. In this example, the nodes have dual roles as controllers and brokers. Note The configuration for the Kafka resource must be suitable for KRaft mode. Currently, KRaft mode has a number of limitations . 8.3.1. (Preview) Assigning IDs to node pools for scaling operations This procedure describes how to use annotations for advanced node ID handling by the Cluster Operator when performing scaling operations on node pools. You specify the node IDs to use, rather than the Cluster Operator using the ID in sequence. Management of node IDs in this way gives greater control. To add a range of IDs, you assign the following annotations to the KafkaNodePool resource: strimzi.io/next-node-ids to add a range of IDs that are used for new brokers strimzi.io/remove-node-ids to add a range of IDs for removing existing brokers You can specify an array of individual node IDs, ID ranges, or a combination of both. For example, you can specify the following range of IDs: [0, 1, 2, 10-20, 30] for scaling up the Kafka node pool. This format allows you to specify a combination of individual node IDs ( 0 , 1 , 2 , 30 ) as well as a range of IDs ( 10-20 ). In a typical scenario, you might specify a range of IDs for scaling up and a single node ID to remove a specific node when scaling down. In this procedure, we add the scaling annotations to node pools as follows: pool-a is assigned a range of IDs for scaling up pool-b is assigned a range of IDs for scaling down During the scaling operation, IDs are used as follows: Scale up picks up the lowest available ID in the range for the new node. Scale down removes the node with the highest available ID in the range. If there are gaps in the sequence of node IDs assigned in the node pool, the node to be added is assigned an ID that fills the gap. The annotations don't need to be updated after every scaling operation. Any unused IDs are still valid for the scaling event. The Cluster Operator allows you to specify a range of IDs in either ascending or descending order, so you can define them in the order the nodes are scaled. For example, when scaling up, you can specify a range such as [1000-1999] , and the new nodes are assigned the lowest IDs: 1000 , 1001 , 1002 , 1003 , and so on.
Conversely, when scaling down, you can specify a range like [1999-1000] , ensuring that nodes with the highest IDs are removed: 1003 , 1002 , 1001 , 1000 , and so on. If you don't specify an ID range using the annotations, the Cluster Operator follows its default behavior for handling IDs during scaling operations. Node IDs start at 0 (zero) and run sequentially across the Kafka cluster. The lowest ID is assigned to a new node. Gaps in node IDs are filled across the cluster. This means that they might not run sequentially within a node pool. The default behavior for scaling up is to add the lowest available node ID across the cluster; and for scaling down, it is to remove the node in the node pool with the highest available node ID. The default approach is also applied if the assigned range of IDs is misformatted, the scaling up range runs out of IDs, or the scaling down range does not apply to any in-use nodes. Prerequisites The Cluster Operator must be deployed. Procedure Annotate the node pool with the IDs to use when scaling up or scaling down, as shown in the following examples. IDs for scaling up are assigned to node pool pool-a : Assigning IDs for scaling up oc annotate kafkanodepool pool-a strimzi.io/next-node-ids="[0,1,2,10-20,30]" The lowest available ID from this range is used when adding a node to pool-a . IDs for scaling down are assigned to node pool pool-b : Assigning IDs for scaling down oc annotate kafkanodepool pool-b strimzi.io/remove-node-ids="[60-50,9,8,7]" The highest available ID from this range is removed when scaling down pool-b . You can now scale the node pool. For more information, see the following: Section 8.3.2, "(Preview) Adding nodes to a node pool" Section 8.3.3, "(Preview) Removing nodes from a node pool" Section 8.3.4, "(Preview) Moving nodes between node pools" On reconciliation, a warning is given if the annotations are misformatted. 8.3.2. (Preview) Adding nodes to a node pool This procedure describes how to scale up a node pool to add new nodes. In this procedure, we start with three nodes for node pool pool-a : Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-kafka-0 1/1 Running 0 my-cluster-pool-a-kafka-1 1/1 Running 0 my-cluster-pool-a-kafka-2 1/1 Running 0 Node IDs are appended to the name of the node on creation. We add node my-cluster-pool-a-kafka-3 , which has a node ID of 3 . Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. (Optional) For scale up operations, you can specify the range of node IDs to use . If you have assigned a range of node IDs for the operation, the ID of the node being added is determined by the sequence of nodes given. Otherwise, the lowest available node ID across the cluster is used. Procedure Create a new node in the node pool. For example, node pool pool-a has three replicas. We add a node by increasing the number of replicas: oc scale kafkanodepool pool-a --replicas=4 Check the status of the deployment and wait for the pods in the node pool to be created and have a status of READY . oc get pods -n <my_cluster_operator_namespace> Output shows four Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-kafka-0 1/1 Running 0 my-cluster-pool-a-kafka-1 1/1 Running 0 my-cluster-pool-a-kafka-2 1/1 Running 0 my-cluster-pool-a-kafka-3 1/1 Running 0 Reassign the partitions after increasing the number of nodes in the node pool.
After scaling up a node pool, you can use the Cruise Control add-brokers mode to move partition replicas from existing brokers to the newly added brokers. 8.3.3. (Preview) Removing nodes from a node pool This procedure describes how to scale down a node pool to remove nodes. In this procedure, we start with four nodes for node pool pool-a : Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-a-kafka-0 1/1 Running 0 my-cluster-pool-a-kafka-1 1/1 Running 0 my-cluster-pool-a-kafka-2 1/1 Running 0 my-cluster-pool-a-kafka-3 1/1 Running 0 Node IDs are appended to the name of the node on creation. We remove node my-cluster-pool-a-kafka-3 , which has a node ID of 3 . Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. (Optional) For scale down operations, you can specify the range of node IDs to use in the operation . If you have assigned a range of node IDs for the operation, the ID of the node being removed is determined by the sequence of nodes given. Otherwise, the node with the highest available ID in the node pool is removed. Procedure Reassign the partitions before decreasing the number of nodes in the node pool. Before scaling down a node pool, you can use the Cruise Control remove-brokers mode to move partition replicas off the brokers that are going to be removed. After the reassignment process is complete, and the node being removed has no live partitions, reduce the number of Kafka nodes in the node pool. For example, node pool pool-a has four replicas. We remove a node by decreasing the number of replicas: oc scale kafkanodepool pool-a --replicas=3 Output shows three Kafka nodes in the node pool NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-0 1/1 Running 0 my-cluster-pool-b-kafka-1 1/1 Running 0 my-cluster-pool-b-kafka-2 1/1 Running 0 8.3.4. (Preview) Moving nodes between node pools This procedure describes how to move nodes between source and target Kafka node pools without downtime. You create a new node on the target node pool and reassign partitions to move data from the old node on the source node pool. When the replicas on the new node are in-sync, you can delete the old node. In this procedure, we start with two node pools: pool-a with three replicas is the target node pool pool-b with four replicas is the source node pool We scale up pool-a , and reassign partitions and scale down pool-b , which results in the following: pool-a with four replicas pool-b with three replicas Note During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. Prerequisites The Cluster Operator must be deployed. (Optional) For scale up and scale down operations, you can specify the range of node IDs to use . If you have assigned node IDs for the operation, the ID of the node being added or removed is determined by the sequence of nodes given. Otherwise, the lowest available node ID across the cluster is used when adding nodes; and the node with the highest available ID in the node pool is removed. Procedure Create a new node in the target node pool. For example, node pool pool-a has three replicas. We add a node by increasing the number of replicas: oc scale kafkanodepool pool-a --replicas=4 Check the status of the deployment and wait for the pods in the node pool to be created and have a status of READY . 
oc get pods -n <my_cluster_operator_namespace> Output shows four Kafka nodes in the target node pool NAME READY STATUS RESTARTS my-cluster-pool-a-kafka-0 1/1 Running 0 my-cluster-pool-a-kafka-1 1/1 Running 0 my-cluster-pool-a-kafka-4 1/1 Running 0 my-cluster-pool-a-kafka-5 1/1 Running 0 Node IDs are appended to the name of the node on creation. We add node my-cluster-pool-a-kafka-5 , which has a node ID of 5 . Reassign the partitions from the old node to the new node. Before scaling down the source node pool, you can use the Cruise Control remove-brokers mode to move partition replicas off the brokers that are going to be removed. After the reassignment process is complete, reduce the number of Kafka nodes in the source node pool. For example, node pool pool-b has four replicas. We remove a node by decreasing the number of replicas: oc scale kafkanodepool pool-b --replicas=3 The node with the highest ID within a pool is removed. Output shows three Kafka nodes in the source node pool NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-2 1/1 Running 0 my-cluster-pool-b-kafka-3 1/1 Running 0 my-cluster-pool-b-kafka-6 1/1 Running 0 8.3.5. (Preview) Migrating existing Kafka clusters to use Kafka node pools This procedure describes how to migrate existing Kafka clusters to use Kafka node pools. After you have updated the Kafka cluster, you can use the node pools to manage the configuration of nodes within each pool. Note While the KafkaNodePools feature gate that enables node pools is in alpha phase, replica and storage configuration in the KafkaNodePool resource must also be present in the Kafka resource. The configuration is ignored when node pools are being used. Prerequisites The Cluster Operator must be deployed. Procedure Create a new KafkaNodePool resource. Name the resource kafka . Point a strimzi.io/cluster label to your existing Kafka resource. Set the replica count and storage configuration to match your current Kafka cluster. Set the roles to broker . Example configuration for a node pool used in migrating a Kafka cluster apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kafka labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false Apply the KafkaNodePool resource: oc apply -f <node_pool_configuration_file> By applying this resource, you switch Kafka to using node pools. There is no change or rolling update and resources are identical to how they were before. Update the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration to include +KafkaNodePools . env: - name: STRIMZI_FEATURE_GATES value: +KafkaNodePools Enable the KafkaNodePools feature gate in the Kafka resource using the strimzi.io/node-pools: enabled annotation. Example Kafka configuration with node pools enabled apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster annotations: strimzi.io/node-pools: enabled spec: kafka: version: 3.5.0 replicas: 3 # ... storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false Apply the Kafka resource: oc apply -f <kafka_configuration_file> 8.4. Configuring the Entity Operator Use the entityOperator property in Kafka.spec to configure the Entity Operator. The Entity Operator is responsible for managing Kafka-related entities in a running Kafka cluster.
It comprises the following operators: Topic Operator to manage Kafka topics User Operator to manage Kafka users By configuring the Kafka resource, the Cluster Operator can deploy the Entity Operator, including one or both operators. Once deployed, the operators are automatically configured to handle the topics and users of the Kafka cluster. Each operator can only monitor a single namespace. For more information, see Section 1.2.1, "Watching AMQ Streams resources in OpenShift namespaces" . The entityOperator property supports several sub-properties: tlsSidecar topicOperator userOperator template The tlsSidecar property contains the configuration of the TLS sidecar container, which is used to communicate with ZooKeeper. The template property contains the configuration of the Entity Operator pod, such as labels, annotations, affinity, and tolerations. For more information on configuring templates, see Section 8.16, "Customizing OpenShift resources" . The topicOperator property contains the configuration of the Topic Operator. When this option is missing, the Entity Operator is deployed without the Topic Operator. The userOperator property contains the configuration of the User Operator. When this option is missing, the Entity Operator is deployed without the User Operator. For more information on the properties used to configure the Entity Operator, see the EntityUserOperatorSpec schema reference . Example of basic configuration enabling both operators apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: topicOperator: {} userOperator: {} If an empty object ( {} ) is used for the topicOperator and userOperator , all properties use their default values. When both topicOperator and userOperator properties are missing, the Entity Operator is not deployed. 8.4.1. Configuring the Topic Operator Use topicOperator properties in Kafka.spec.entityOperator to configure the Topic Operator. Note If you are using the preview of unidirectional topic management, the following properties are not used and will be ignored: Kafka.spec.entityOperator.topicOperator.zookeeperSessionTimeoutSeconds and Kafka.spec.entityOperator.topicOperator.topicMetadataMaxAttempts . For more information on unidirectional topic management, refer to Section 9.1, "Topic management modes" . The following properties are supported: watchedNamespace The OpenShift namespace in which the Topic Operator watches for KafkaTopic resources. Default is the namespace where the Kafka cluster is deployed. reconciliationIntervalSeconds The interval between periodic reconciliations in seconds. Default 120 . zookeeperSessionTimeoutSeconds The ZooKeeper session timeout in seconds. Default 18 . topicMetadataMaxAttempts The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential back-off. Consider increasing this value when topic creation might take more time due to the number of partitions or replicas. Default 6 . image The image property can be used to configure the container image which will be used. To learn more, refer to the information provided on configuring the image property` . resources The resources property configures the amount of resources allocated to the Topic Operator. You can specify requests and limits for memory and cpu resources. The requests should be enough to ensure a stable performance of the operator. logging The logging property configures the logging of the Topic Operator. 
To learn more, refer to the information provided on Topic Operator logging . Example Topic Operator configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: "1" memory: 500Mi limits: cpu: "1" memory: 500Mi # ... 8.4.2. Configuring the User Operator Use userOperator properties in Kafka.spec.entityOperator to configure the User Operator. The following properties are supported: watchedNamespace The OpenShift namespace in which the User Operator watches for KafkaUser resources. Default is the namespace where the Kafka cluster is deployed. reconciliationIntervalSeconds The interval between periodic reconciliations in seconds. Default 120 . image The image property can be used to configure the container image which will be used. To learn more, refer to the information provided on configuring the image property` . resources The resources property configures the amount of resources allocated to the User Operator. You can specify requests and limits for memory and cpu resources. The requests should be enough to ensure a stable performance of the operator. logging The logging property configures the logging of the User Operator. To learn more, refer to the information provided on User Operator logging . secretPrefix The secretPrefix property adds a prefix to the name of all Secrets created from the KafkaUser resource. For example, secretPrefix: kafka- would prefix all Secret names with kafka- . So a KafkaUser named my-user would create a Secret named kafka-my-user . Example User Operator configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... userOperator: watchedNamespace: my-user-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: "1" memory: 500Mi limits: cpu: "1" memory: 500Mi # ... 8.5. Configuring the Cluster Operator Use environment variables to configure the Cluster Operator. Specify the environment variables for the container image of the Cluster Operator in its Deployment configuration file. Note The Deployment configuration file provided with the AMQ Streams release artifacts is install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml . You can use the following environment variables to configure the Cluster Operator. If you are running Cluster Operator replicas in standby mode, there are additional environment variables for enabling leader election . STRIMZI_NAMESPACE A comma-separated list of namespaces that the operator operates in. When not set, set to empty string, or set to * , the Cluster Operator operates in all namespaces. The Cluster Operator deployment might use the downward API to set this automatically to the namespace the Cluster Operator is deployed in. Example configuration for Cluster Operator namespaces env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace STRIMZI_FULL_RECONCILIATION_INTERVAL_MS Optional, default is 120000 ms. The interval between periodic reconciliations , in milliseconds. STRIMZI_OPERATION_TIMEOUT_MS Optional, default 300000 ms. The timeout for internal operations, in milliseconds. Increase this value when using AMQ Streams on clusters where regular OpenShift operations take longer than usual (because of slow downloading of Docker images, for example). 
STRIMZI_ZOOKEEPER_ADMIN_SESSION_TIMEOUT_MS Optional, default 10000 ms. The session timeout for the Cluster Operator's ZooKeeper admin client, in milliseconds. Increase the value if ZooKeeper requests from the Cluster Operator are regularly failing due to timeout issues. There is a maximum allowed session time set on the ZooKeeper server side via the maxSessionTimeout config. By default, the maximum session timeout value is 20 times the default tickTime (whose default is 2000) at 40000 ms. If you require a higher timeout, change the maxSessionTimeout ZooKeeper server configuration value. STRIMZI_OPERATIONS_THREAD_POOL_SIZE Optional, default 10. The worker thread pool size, which is used for various asynchronous and blocking operations that are run by the Cluster Operator. STRIMZI_OPERATOR_NAME Optional, defaults to the pod's hostname. The operator name identifies the AMQ Streams instance when emitting OpenShift events . STRIMZI_OPERATOR_NAMESPACE The name of the namespace where the Cluster Operator is running. Do not configure this variable manually. Use the downward API. env: - name: STRIMZI_OPERATOR_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace STRIMZI_OPERATOR_NAMESPACE_LABELS Optional. The labels of the namespace where the AMQ Streams Cluster Operator is running. Use namespace labels to configure the namespace selector in network policies . Network policies allow the AMQ Streams Cluster Operator access only to the operands from the namespace with these labels. When not set, the namespace selector in network policies is configured to allow access to the Cluster Operator from any namespace in the OpenShift cluster. env: - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 STRIMZI_LABELS_EXCLUSION_PATTERN Optional, default regex pattern is ^app.kubernetes.io/(?!part-of).* . The regex exclusion pattern used to filter labels propagation from the main custom resource to its subresources. The labels exclusion filter is not applied to labels in template sections such as spec.kafka.template.pod.metadata.labels . env: - name: STRIMZI_LABELS_EXCLUSION_PATTERN value: "^key1.*" STRIMZI_CUSTOM_{COMPONENT_NAME}_LABELS Optional. One or more custom labels to apply to all the pods created by the {COMPONENT_NAME} custom resource. The Cluster Operator labels the pods when the custom resource is created or is reconciled. Labels can be applied to the following components: KAFKA KAFKA_CONNECT KAFKA_CONNECT_BUILD ZOOKEEPER ENTITY_OPERATOR KAFKA_MIRROR_MAKER2 KAFKA_MIRROR_MAKER CRUISE_CONTROL KAFKA_BRIDGE KAFKA_EXPORTER STRIMZI_CUSTOM_RESOURCE_SELECTOR Optional. The label selector to filter the custom resources handled by the Cluster Operator. The operator will operate only on those custom resources that have the specified labels set. Resources without these labels will not be seen by the operator. The label selector applies to Kafka , KafkaConnect , KafkaBridge , KafkaMirrorMaker , and KafkaMirrorMaker2 resources. KafkaRebalance and KafkaConnector resources are operated only when their corresponding Kafka and Kafka Connect clusters have the matching labels. env: - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR value: label1=value1,label2=value2 STRIMZI_KAFKA_IMAGES Required. The mapping from the Kafka version to the corresponding Docker image containing a Kafka broker for that version. The required syntax is whitespace or comma-separated <version> = <image> pairs. 
For example 3.4.0=registry.redhat.io/amq-streams/kafka-34-rhel8:2.5.2, 3.5.0=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 . This is used when a Kafka.spec.kafka.version property is specified but not the Kafka.spec.kafka.image in the Kafka resource. STRIMZI_DEFAULT_KAFKA_INIT_IMAGE Optional, default registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.2 . The image name to use as default for the init container if no image is specified as the kafka-init-image in the Kafka resource. The init container is started before the broker for initial configuration work, such as rack support. STRIMZI_KAFKA_CONNECT_IMAGES Required. The mapping from the Kafka version to the corresponding Docker image of Kafka Connect for that version. The required syntax is whitespace or comma-separated <version> = <image> pairs. For example 3.4.0=registry.redhat.io/amq-streams/kafka-34-rhel8:2.5.2, 3.5.0=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 . This is used when a KafkaConnect.spec.version property is specified but not the KafkaConnect.spec.image . STRIMZI_KAFKA_MIRROR_MAKER_IMAGES Required. The mapping from the Kafka version to the corresponding Docker image of MirrorMaker for that version. The required syntax is whitespace or comma-separated <version> = <image> pairs. For example 3.4.0=registry.redhat.io/amq-streams/kafka-34-rhel8:2.5.2, 3.5.0=registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 . This is used when a KafkaMirrorMaker.spec.version property is specified but not the KafkaMirrorMaker.spec.image . STRIMZI_DEFAULT_TOPIC_OPERATOR_IMAGE Optional, default registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.2 . The image name to use as the default when deploying the Topic Operator if no image is specified as the Kafka.spec.entityOperator.topicOperator.image in the Kafka resource. STRIMZI_DEFAULT_USER_OPERATOR_IMAGE Optional, default registry.redhat.io/amq-streams/strimzi-rhel8-operator:2.5.2 . The image name to use as the default when deploying the User Operator if no image is specified as the Kafka.spec.entityOperator.userOperator.image in the Kafka resource. STRIMZI_DEFAULT_TLS_SIDECAR_ENTITY_OPERATOR_IMAGE Optional, default registry.redhat.io/amq-streams/kafka-35-rhel8:2.5.2 . The image name to use as the default when deploying the sidecar container for the Entity Operator if no image is specified as the Kafka.spec.entityOperator.tlsSidecar.image in the Kafka resource. The sidecar provides TLS support. STRIMZI_IMAGE_PULL_POLICY Optional. The ImagePullPolicy that is applied to containers in all pods managed by the Cluster Operator. The valid values are Always , IfNotPresent , and Never . If not specified, the OpenShift defaults are used. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. STRIMZI_IMAGE_PULL_SECRETS Optional. A comma-separated list of Secret names. The secrets referenced here contain the credentials to the container registries where the container images are pulled from. The secrets are specified in the imagePullSecrets property for all pods created by the Cluster Operator. Changing this list results in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. STRIMZI_KUBERNETES_VERSION Optional. Overrides the OpenShift version information detected from the API server. 
Example configuration for OpenShift version override env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64 KUBERNETES_SERVICE_DNS_DOMAIN Optional. Overrides the default OpenShift DNS domain name suffix. By default, services assigned in the OpenShift cluster have a DNS domain name that uses the default suffix cluster.local . For example, for broker kafka-0 : <cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc. cluster.local The DNS domain name is added to the Kafka broker certificates used for hostname verification. If you are using a different DNS domain name suffix in your cluster, change the KUBERNETES_SERVICE_DNS_DOMAIN environment variable from the default to the one you are using in order to establish a connection with the Kafka brokers. STRIMZI_CONNECT_BUILD_TIMEOUT_MS Optional, default 300000 ms. The timeout for building new Kafka Connect images with additional connectors, in milliseconds. Consider increasing this value when using AMQ Streams to build container images containing many connectors or using a slow container registry. STRIMZI_NETWORK_POLICY_GENERATION Optional, default true . Network policy for resources. Network policies allow connections between Kafka components. Set this environment variable to false to disable network policy generation. You might do this, for example, if you want to use custom network policies. Custom network policies allow more control over maintaining the connections between components. STRIMZI_DNS_CACHE_TTL Optional, default 30 . Number of seconds to cache successful name lookups in local DNS resolver. Any negative value means cache forever. Zero means do not cache, which can be useful for avoiding connection errors due to long caching policies being applied. STRIMZI_POD_SET_RECONCILIATION_ONLY Optional, default false . When set to true , the Cluster Operator reconciles only the StrimziPodSet resources and any changes to the other custom resources ( Kafka , KafkaConnect , and so on) are ignored. This mode is useful for ensuring that your pods are recreated if needed, but no other changes happen to the clusters. STRIMZI_FEATURE_GATES Optional. Enables or disables the features and functionality controlled by feature gates . STRIMZI_POD_SECURITY_PROVIDER_CLASS Optional. Configuration for the pluggable PodSecurityProvider class, which can be used to provide the security context configuration for Pods and containers. 8.5.1. Restricting access to the Cluster Operator using network policy Use the STRIMZI_OPERATOR_NAMESPACE_LABELS environment variable to establish network policy for the Cluster Operator using namespace labels. The Cluster Operator can run in the same namespace as the resources it manages, or in a separate namespace. By default, the STRIMZI_OPERATOR_NAMESPACE environment variable is configured to use the downward API to find the namespace the Cluster Operator is running in. If the Cluster Operator is running in the same namespace as the resources, only local access is required and allowed by AMQ Streams. If the Cluster Operator is running in a separate namespace to the resources it manages, any namespace in the OpenShift cluster is allowed access to the Cluster Operator unless network policy is configured. By adding namespace labels, access to the Cluster Operator is restricted to the namespaces specified. 
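For example, the following sketch shows how several of these variables might be combined in the env section of the Cluster Operator Deployment ( install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml ). The values shown are illustrative, based on the defaults and options described in this section; adjust them to your environment.

Illustrative sketch of combined Cluster Operator environment variable configuration

# ...
        env:
          - name: STRIMZI_NAMESPACE          # namespace set automatically through the downward API
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS
            value: "120000"                  # default reconciliation interval
          - name: STRIMZI_OPERATION_TIMEOUT_MS
            value: "300000"                  # default timeout for internal operations
          - name: STRIMZI_IMAGE_PULL_POLICY
            value: "IfNotPresent"            # one of Always, IfNotPresent, Never
          - name: STRIMZI_FEATURE_GATES
            value: "+KafkaNodePools"         # illustrative feature gate setting
# ...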
Network policy configured for the Cluster Operator deployment #... env: # ... - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 #... 8.5.2. Configuring periodic reconciliation by the Cluster Operator Use the STRIMZI_FULL_RECONCILIATION_INTERVAL_MS variable to set the time interval for periodic reconciliations by the Cluster Operator. Replace its value with the required interval in milliseconds. Reconciliation period configured for the Cluster Operator deployment #... env: # ... - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: "120000" #... The Cluster Operator reacts to all notifications about applicable cluster resources received from the OpenShift cluster. If the operator is not running, or if a notification is not received for any reason, resources will get out of sync with the state of the running OpenShift cluster. In order to handle failovers properly, a periodic reconciliation process is executed by the Cluster Operator so that it can compare the state of the resources with the current cluster deployments in order to have a consistent state across all of them. Additional resources Downward API 8.5.3. Running multiple Cluster Operator replicas with leader election The default Cluster Operator configuration enables leader election to run multiple parallel replicas of the Cluster Operator. One replica is elected as the active leader and operates the deployed resources. The other replicas run in standby mode. When the leader stops or fails, one of the standby replicas is elected as the new leader and starts operating the deployed resources. By default, AMQ Streams runs with a single Cluster Operator replica that is always the leader replica. When a single Cluster Operator replica stops or fails, OpenShift starts a new replica. Running the Cluster Operator with multiple replicas is not essential. But it's useful to have replicas on standby in case of large-scale disruptions caused by major failure. For example, suppose multiple worker nodes or an entire availability zone fails. This failure might cause the Cluster Operator pod and many Kafka pods to go down at the same time. If subsequent pod scheduling causes congestion through lack of resources, this can delay operations when running a single Cluster Operator. 8.5.3.1. Enabling leader election for Cluster Operator replicas Configure leader election environment variables when running additional Cluster Operator replicas. The following environment variables are supported: STRIMZI_LEADER_ELECTION_ENABLED Optional, disabled ( false ) by default. Enables or disables leader election, which allows additional Cluster Operator replicas to run on standby. Note Leader election is disabled by default. It is only enabled when applying this environment variable on installation. STRIMZI_LEADER_ELECTION_LEASE_NAME Required when leader election is enabled. The name of the OpenShift Lease resource that is used for the leader election. STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE Required when leader election is enabled. The namespace where the OpenShift Lease resource used for leader election is created. You can use the downward API to configure it to the namespace where the Cluster Operator is deployed. env: - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace STRIMZI_LEADER_ELECTION_IDENTITY Required when leader election is enabled. Configures the identity of a given Cluster Operator instance used during the leader election. 
The identity must be unique for each operator instance. You can use the downward API to configure it to the name of the pod where the Cluster Operator is deployed. env: - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name STRIMZI_LEADER_ELECTION_LEASE_DURATION_MS Optional, default 15000 ms. Specifies the duration the acquired lease is valid. STRIMZI_LEADER_ELECTION_RENEW_DEADLINE_MS Optional, default 10000 ms. Specifies the period the leader should try to maintain leadership. STRIMZI_LEADER_ELECTION_RETRY_PERIOD_MS Optional, default 2000 ms. Specifies the frequency of updates to the lease lock by the leader. 8.5.3.2. Configuring Cluster Operator replicas To run additional Cluster Operator replicas in standby mode, you will need to increase the number of replicas and enable leader election. To configure leader election, use the leader election environment variables. To make the required changes, configure the following Cluster Operator installation files located in install/cluster-operator/ : 060-Deployment-strimzi-cluster-operator.yaml 022-ClusterRole-strimzi-cluster-operator-role.yaml 022-RoleBinding-strimzi-cluster-operator.yaml Leader election has its own ClusterRole and RoleBinding RBAC resources that target the namespace where the Cluster Operator is running, rather than the namespace it is watching. The default deployment configuration creates a Lease resource called strimzi-cluster-operator in the same namespace as the Cluster Operator. The Cluster Operator uses leases to manage leader election. The RBAC resources provide the permissions to use the Lease resource. If you use a different Lease name or namespace, update the ClusterRole and RoleBinding files accordingly. Prerequisites You need an account with permission to create and manage CustomResourceDefinition and RBAC ( ClusterRole , and RoleBinding ) resources. Procedure Edit the Deployment resource that is used to deploy the Cluster Operator, which is defined in the 060-Deployment-strimzi-cluster-operator.yaml file. Change the replicas property from the default (1) to a value that matches the required number of replicas. Increasing the number of Cluster Operator replicas apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 3 Check that the leader election env properties are set. If they are not set, configure them. To enable leader election, STRIMZI_LEADER_ELECTION_ENABLED must be set to true (default). In this example, the name of the lease is changed to my-strimzi-cluster-operator . Configuring leader election environment variables for the Cluster Operator # ... spec containers: - name: strimzi-cluster-operator # ... env: - name: STRIMZI_LEADER_ELECTION_ENABLED value: "true" - name: STRIMZI_LEADER_ELECTION_LEASE_NAME value: "my-strimzi-cluster-operator" - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name For a description of the available environment variables, see Section 8.5.3.1, "Enabling leader election for Cluster Operator replicas" . If you specified a different name or namespace for the Lease resource used in leader election, update the RBAC resources. (optional) Edit the ClusterRole resource in the 022-ClusterRole-strimzi-cluster-operator-role.yaml file. Update resourceNames with the name of the Lease resource. 
Updating the ClusterRole references to the lease apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi rules: - apiGroups: - coordination.k8s.io resourceNames: - my-strimzi-cluster-operator # ... (optional) Edit the RoleBinding resource in the 022-RoleBinding-strimzi-cluster-operator.yaml file. Update subjects.name and subjects.namespace with the name of the Lease resource and the namespace where it was created. Updating the RoleBinding references to the lease apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi subjects: - kind: ServiceAccount name: my-strimzi-cluster-operator namespace: myproject # ... Deploy the Cluster Operator: oc create -f install/cluster-operator -n myproject Check the status of the deployment: oc get deployments -n myproject Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 3/3 3 3 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows the correct number of replicas. 8.5.4. Configuring Cluster Operator HTTP proxy settings If you are running a Kafka cluster behind a HTTP proxy, you can still pass data in and out of the cluster. For example, you can run Kafka Connect with connectors that push and pull data from outside the proxy. Or you can use a proxy to connect with an authorization server. Configure the Cluster Operator deployment to specify the proxy environment variables. The Cluster Operator accepts standard proxy configuration ( HTTP_PROXY , HTTPS_PROXY and NO_PROXY ) as environment variables. The proxy settings are applied to all AMQ Streams containers. The format for a proxy address is http://<ip_address>:<port_number>. To set up a proxy with a name and password, the format is http://<username>:<password>@<ip-address>:<port_number>. Prerequisites You need an account with permission to create and manage CustomResourceDefinition and RBAC ( ClusterRole , and RoleBinding ) resources. Procedure To add proxy environment variables to the Cluster Operator, update its Deployment configuration ( install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml ). Example proxy configuration for the Cluster Operator apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-cluster-operator containers: # ... env: # ... - name: "HTTP_PROXY" value: "http://proxy.com" 1 - name: "HTTPS_PROXY" value: "https://proxy.com" 2 - name: "NO_PROXY" value: "internal.com, other.domain.com" 3 # ... 1 Address of the proxy server. 2 Secure address of the proxy server. 3 Addresses for servers that are accessed directly as exceptions to the proxy server. The URLs are comma-separated. Alternatively, edit the Deployment directly: oc edit deployment strimzi-cluster-operator If you updated the YAML file instead of editing the Deployment directly, apply the changes: oc create -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml Additional resources Host aliases Designating AMQ Streams administrators 8.5.5. Disabling FIPS mode using Cluster Operator configuration AMQ Streams automatically switches to FIPS mode when running on a FIPS-enabled OpenShift cluster. Disable FIPS mode by setting the FIPS_MODE environment variable to disabled in the deployment configuration for the Cluster Operator. 
With FIPS mode disabled, AMQ Streams automatically disables FIPS in the OpenJDK for all components. With FIPS mode disabled, AMQ Streams is not FIPS compliant. The AMQ Streams operators, as well as all operands, run in the same way as if they were running on an OpenShift cluster without FIPS enabled. Procedure To disable the FIPS mode in the Cluster Operator, update its Deployment configuration ( install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml ) and add the FIPS_MODE environment variable. Example FIPS configuration for the Cluster Operator apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-cluster-operator containers: # ... env: # ... - name: "FIPS_MODE" value: "disabled" 1 # ... 1 Disables the FIPS mode. Alternatively, edit the Deployment directly: oc edit deployment strimzi-cluster-operator If you updated the YAML file instead of editing the Deployment directly, apply the changes: oc apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml 8.6. Configuring Kafka Connect Update the spec properties of the KafkaConnect custom resource to configure your Kafka Connect deployment. Use Kafka Connect to set up external data connections to your Kafka cluster. For a deeper understanding of the Kafka Connect cluster configuration options, refer to the AMQ Streams Custom Resource API Reference . KafkaConnector configuration KafkaConnector resources allow you to create and manage connector instances for Kafka Connect in an OpenShift-native way. In your Kafka Connect configuration, you enable KafkaConnectors for a Kafka Connect cluster by adding the strimzi.io/use-connector-resources annotation. You can also add a build configuration so that AMQ Streams automatically builds a container image with the connector plugins you require for your data connections. External configuration for Kafka Connect connectors is specified through the externalConfiguration property. To manage connectors, you can use KafkaConnector custom resources or the Kafka Connect REST API. KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to. For more information on using these methods to create, reconfigure, or delete connectors, see Adding connectors . Connector configuration is passed to Kafka Connect as part of an HTTP request and stored within Kafka itself. ConfigMaps and Secrets are standard OpenShift resources used for storing configurations and confidential data. You can use ConfigMaps and Secrets to configure certain elements of a connector. You can then reference the configuration values in HTTP REST commands, which keeps the configuration separate and more secure, if needed. This method applies especially to confidential data, such as usernames, passwords, or certificates. Handling high volumes of messages You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages .
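As a minimal sketch of the KafkaConnector approach described above, the following example defines a connector instance for a Kafka Connect cluster that has the strimzi.io/use-connector-resources annotation enabled. The connector class and its config options are illustrative only; substitute the connector plugin and options you actually deploy.

Illustrative sketch of a KafkaConnector resource

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    strimzi.io/cluster: my-connect-cluster   # must match the name of the KafkaConnect resource
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector   # example connector class shipped with Kafka
  tasksMax: 1
  config:                                    # illustrative connector options
    file: "/tmp/example-source.txt"
    topic: my-topic

Apply the resource in the same namespace as the Kafka Connect cluster it links to, for example with oc apply -f <connector_configuration_file> .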
Example KafkaConnect custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect 1 metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" 2 spec: replicas: 3 3 authentication: 4 type: tls certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source bootstrapServers: my-cluster-kafka-bootstrap:9092 5 tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt config: 7 group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 build: 8 output: 9 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 10 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.1.3.Final/debezium-connector-postgres-2.1.3.Final-plugin.tar.gz sha512sum: c4ddc97846de561755dc0b021a62aba656098829c70eb3ade3b817ce06d852ca12ae50c0281cc791a5a131cb7fc21fb15f4b8ee76c6cae5dd07f9c11cb7c6e79 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.11.5/camel-telegram-kafka-connector-0.11.5-package.tar.gz sha512sum: d6d9f45e0d1dbfcc9f6d1c7ca2046168c764389c78bc4b867dab32d24f710bb74ccf2a007d7d7a8af2dfca09d9a52ccbc2831fc715c195a3634cca055185bd91 externalConfiguration: 11 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey resources: 12 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 13 type: inline loggers: log4j.rootLogger: INFO readinessProbe: 14 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 15 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 16 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 17 rack: topologyKey: topology.kubernetes.io/zone 18 template: 19 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" connectContainer: 20 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry 21 1 Use KafkaConnect . 2 Enables KafkaConnectors for the Kafka Connect cluster. 3 The number of replica nodes for the workers that run tasks. 4 Authentication for the Kafka Connect cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. By default, Kafka Connect connects to Kafka brokers using a plain text connection. 5 Bootstrap server for connection to the Kafka cluster. 6 TLS encryption with key names under which TLS certificates are stored in X.509 format for the cluster. If certificates are stored in the same secret, it can be listed multiple times. 
7 Kafka Connect configuration of workers (not connectors). Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams. 8 Build configuration properties for building a container image with connector plugins automatically. 9 (Required) Configuration of the container registry where new images are pushed. 10 (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one artifact . 11 External configuration for connectors using environment variables, as shown here, or volumes. You can also use configuration provider plugins to load configuration values from external sources. 12 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 13 Specified Kafka Connect loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 14 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 15 Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key . 16 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka Connect. 17 ADVANCED OPTION: Container image configuration, which is recommended only in special situations. 18 SPECIALIZED OPTION: Rack awareness configuration for the deployment. This is a specialized option intended for a deployment within the same location, not across regions. Use this option if you want connectors to consume from the closest replica rather than the leader replica. In certain cases, consuming from the closest replica can improve network utilization or reduce costs . The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. To consume from the closest replica, enable the RackAwareReplicaSelector in the Kafka broker configuration. 19 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 20 Environment variables are set for distributed tracing. 21 Distributed tracing is enabled by using OpenTelemetry. 8.6.1. Configuring Kafka Connect user authorization This procedure describes how to authorize user access to Kafka Connect. When any type of authorization is being used in Kafka, a Kafka Connect user requires read/write access rights to the consumer group and the internal topics of Kafka Connect. The properties for the consumer group and internal topics are automatically configured by AMQ Streams, or they can be specified explicitly in the spec of the KafkaConnect resource. Example configuration properties in the KafkaConnect resource apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... 
config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # ... # ... 1 The Kafka Connect cluster ID within Kafka. 2 Kafka topic that stores connector offsets. 3 Kafka topic that stores connector and task status configurations. 4 Kafka topic that stores connector and task status updates. This procedure shows how access is provided when simple authorization is being used. Simple authorization uses ACL rules, handled by the Kafka AclAuthorizer plugin, to provide the right level of access. For more information on configuring a KafkaUser resource to use simple authorization, see the AclRule schema reference . Note The default values for the consumer group and topics will differ when running multiple instances . Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the authorization property in the KafkaUser resource to provide access rights to the user. In the following example, access rights are configured for the Kafka Connect topics and consumer group using literal name values: Property Name offset.storage.topic connect-cluster-offsets status.storage.topic connect-cluster-status config.storage.topic connect-cluster-configs group connect-cluster apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... authorization: type: simple acls: # access to offset.storage.topic - resource: type: topic name: connect-cluster-offsets patternType: literal operations: - Create - Describe - Read - Write host: "*" # access to status.storage.topic - resource: type: topic name: connect-cluster-status patternType: literal operations: - Create - Describe - Read - Write host: "*" # access to config.storage.topic - resource: type: topic name: connect-cluster-configs patternType: literal operations: - Create - Describe - Read - Write host: "*" # consumer group - resource: type: group name: connect-cluster patternType: literal operations: - Read host: "*" Create or update the resource. oc apply -f KAFKA-USER-CONFIG-FILE 8.7. Configuring Kafka MirrorMaker 2 Update the spec properties of the KafkaMirrorMaker2 custom resource to configure your MirrorMaker 2 deployment. MirrorMaker 2 uses source cluster configuration for data consumption and target cluster configuration for data output. MirrorMaker 2 is based on the Kafka Connect framework, connectors managing the transfer of data between clusters. You configure MirrorMaker 2 to define the Kafka Connect deployment, including the connection details of the source and target clusters, and then run a set of MirrorMaker 2 connectors to make the connection. MirrorMaker 2 supports topic configuration synchronization between the source and target clusters. You specify source topics in the MirrorMaker 2 configuration. MirrorMaker 2 monitors the source topics. MirrorMaker 2 detects and propagates changes to the source topics to the remote topics. Changes might include automatically creating missing topics and partitions. Note In most cases you write to local topics and read from remote topics. Though write operations are not prevented on remote topics, they should be avoided. 
The configuration must specify: Each Kafka cluster Connection information for each cluster, including authentication The replication flow and direction Cluster to cluster Topic to topic For a deeper understanding of the Kafka MirrorMaker 2 cluster configuration options, refer to the AMQ Streams Custom Resource API Reference. Note MirrorMaker 2 resource configuration differs from the previous version of MirrorMaker, which is now deprecated. There is currently no legacy support, so any resources must be manually converted into the new format. Default configuration MirrorMaker 2 provides default configuration values for properties such as replication factors. A minimal configuration, with defaults left unchanged, is shown in the following example: Minimal configuration for MirrorMaker 2 apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.5.0 connectCluster: "my-cluster-target" clusters: - alias: "my-cluster-source" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: "my-cluster-target" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: {} You can configure access control for source and target clusters using mTLS or SASL authentication. The example configuration in this section uses TLS encryption and mTLS authentication for the source and target cluster. You can specify the topics and consumer groups you wish to replicate from a source cluster in the KafkaMirrorMaker2 resource. You use the topicsPattern and groupsPattern properties to do this. You can provide a list of names or use a regular expression. By default, all topics and consumer groups are replicated if you do not set the topicsPattern and groupsPattern properties. You can also replicate all topics and consumer groups by using ".*" as a regular expression. However, specify only the topics and consumer groups you need, to avoid placing unnecessary load on the cluster. Handling high volumes of messages You can tune the configuration to handle high volumes of messages. For more information, see Handling high volumes of messages.
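To make the pattern properties concrete, the following sketch builds on the minimal configuration above and restricts replication to selected topics and consumer groups. The pattern values shown here ("orders-.*" and "orders-consumers-.*") are illustrative placeholders, not values from the original examples; replace them with expressions that match your own topic and group names.
Sketch: minimal configuration restricting replication by pattern
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  version: 3.5.0
  connectCluster: "my-cluster-target"
  clusters:
    - alias: "my-cluster-source"
      bootstrapServers: my-cluster-source-kafka-bootstrap:9092
    - alias: "my-cluster-target"
      bootstrapServers: my-cluster-target-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "my-cluster-source"
      targetCluster: "my-cluster-target"
      sourceConnector: {}
      # Only topics and consumer groups matching these regular expressions are replicated
      topicsPattern: "orders-.*"
      groupsPattern: "orders-consumers-.*"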
Example KafkaMirrorMaker2 custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.5.0 1 replicas: 3 2 connectCluster: "my-cluster-target" 3 clusters: 4 - alias: "my-cluster-source" 5 authentication: 6 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 7 tls: 8 trustedCertificates: - certificate: ca.crt secretName: my-cluster-source-cluster-ca-cert - alias: "my-cluster-target" 9 authentication: 10 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 11 config: 12 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 tls: 13 trustedCertificates: - certificate: ca.crt secretName: my-cluster-target-cluster-ca-cert mirrors: 14 - sourceCluster: "my-cluster-source" 15 targetCluster: "my-cluster-target" 16 sourceConnector: 17 tasksMax: 10 18 autoRestart: 19 enabled: true config: replication.factor: 1 20 offset-syncs.topic.replication.factor: 1 21 sync.topic.acls.enabled: "false" 22 refresh.topics.interval.seconds: 60 23 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" 24 heartbeatConnector: 25 autoRestart: enabled: true config: heartbeats.topic.replication.factor: 1 26 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" checkpointConnector: 27 autoRestart: enabled: true config: checkpoints.topic.replication.factor: 1 28 refresh.groups.interval.seconds: 600 29 sync.group.offsets.enabled: true 30 sync.group.offsets.interval.seconds: 60 31 emit.checkpoints.interval.seconds: 60 32 replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy" topicsPattern: "topic1|topic2|topic3" 33 groupsPattern: "group1|group2|group3" 34 resources: 35 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 36 type: inline loggers: connect.root.logger.level: INFO readinessProbe: 37 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 38 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 39 rack: topologyKey: topology.kubernetes.io/zone 40 template: 41 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" connectContainer: 42 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry 43 externalConfiguration: 44 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey 1 The Kafka Connect and Mirror Maker 2.0 version, which will always be the same. 2 The number of replica nodes for the workers that run tasks. 3 Kafka cluster alias for Kafka Connect, which must specify the target Kafka cluster. The Kafka cluster is used by Kafka Connect for its internal topics. 4 Specification for the Kafka clusters being synchronized. 5 Cluster alias for the source Kafka cluster. 6 Authentication for the source cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. 
7 Bootstrap server for connection to the source Kafka cluster. 8 TLS encryption with key names under which TLS certificates are stored in X.509 format for the source Kafka cluster. If certificates are stored in the same secret, it can be listed multiple times. 9 Cluster alias for the target Kafka cluster. 10 Authentication for the target Kafka cluster is configured in the same way as for the source Kafka cluster. 11 Bootstrap server for connection to the target Kafka cluster. 12 Kafka Connect configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams. 13 TLS encryption for the target Kafka cluster is configured in the same way as for the source Kafka cluster. 14 MirrorMaker 2 connectors. 15 Cluster alias for the source cluster used by the MirrorMaker 2 connectors. 16 Cluster alias for the target cluster used by the MirrorMaker 2 connectors. 17 Configuration for the MirrorSourceConnector that creates remote topics. The config overrides the default configuration options. 18 The maximum number of tasks that the connector may create. Tasks handle the data replication and run in parallel. If the infrastructure supports the processing overhead, increasing this value can improve throughput. Kafka Connect distributes the tasks between members of the cluster. If there are more tasks than workers, workers are assigned multiple tasks. For sink connectors, aim to have one task for each topic partition consumed. For source connectors, the number of tasks that can run in parallel may also depend on the external system. The connector creates fewer than the maximum number of tasks if it cannot achieve the parallelism. 19 Enables automatic restarts of failed connectors and tasks. Up to seven restart attempts are made, after which restarts must be made manually. 20 Replication factor for mirrored topics created at the target cluster. 21 Replication factor for the MirrorSourceConnector offset-syncs internal topic that maps the offsets of the source and target clusters. 22 When ACL rules synchronization is enabled, ACLs are applied to synchronized topics. The default is true . This feature is not compatible with the User Operator. If you are using the User Operator, set this property to false . 23 Optional setting to change the frequency of checks for new topics. The default is for a check every 10 minutes. 24 Adds a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is useful for active/passive backups and data migration. The property must be specified for all connectors. For bidirectional (active/active) replication, use the DefaultReplicationPolicy class to automatically rename remote topics and specify the replication.policy.separator property for all connectors to add a custom separator. 25 Configuration for the MirrorHeartbeatConnector that performs connectivity checks. The config overrides the default configuration options. 26 Replication factor for the heartbeat topic created at the target cluster. 27 Configuration for the MirrorCheckpointConnector that tracks offsets. The config overrides the default configuration options. 28 Replication factor for the checkpoints topic created at the target cluster. 29 Optional setting to change the frequency of checks for new consumer groups. The default is for a check every 10 minutes. 
30 Optional setting to synchronize consumer group offsets, which is useful for recovery in an active/passive configuration. Synchronization is not enabled by default. 31 If the synchronization of consumer group offsets is enabled, you can adjust the frequency of the synchronization. 32 Adjusts the frequency of checks for offset tracking. If you change the frequency of offset synchronization, you might also need to adjust the frequency of these checks. 33 Topic replication from the source cluster defined as a comma-separated list or regular expression pattern. The source connector replicates the specified topics. The checkpoint connector tracks offsets for the specified topics. Here we request three topics by name. 34 Consumer group replication from the source cluster defined as a comma-separated list or regular expression pattern. The checkpoint connector replicates the specified consumer groups. Here we request three consumer groups by name. 35 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 36 Specified Kafka Connect loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 37 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 38 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker. 39 ADVANCED OPTION: Container image configuration, which is recommended only in special situations. 40 SPECIALIZED OPTION: Rack awareness configuration for the deployment. This is a specialized option intended for a deployment within the same location, not across regions. Use this option if you want connectors to consume from the closest replica rather than the leader replica. In certain cases, consuming from the closest replica can improve network utilization or reduce costs . The topologyKey must match a node label containing the rack ID. The example used in this configuration specifies a zone using the standard topology.kubernetes.io/zone label. To consume from the closest replica, enable the RackAwareReplicaSelector in the Kafka broker configuration. 41 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 42 Environment variables are set for distributed tracing. 43 Distributed tracing is enabled by using OpenTelemetry. 44 External configuration for an OpenShift Secret mounted to Kafka MirrorMaker as an environment variable. You can also use configuration provider plugins to load configuration values from external sources. 8.7.1. Configuring active/active or active/passive modes You can use MirrorMaker 2 in active/passive or active/active cluster configurations. active/active cluster configuration An active/active configuration has two active clusters replicating data bidirectionally. Applications can use either cluster. Each cluster can provide the same data. In this way, you can make the same data available in different geographical locations. As consumer groups are active in both clusters, consumer offsets for replicated topics are not synchronized back to the source cluster. 
active/passive cluster configuration An active/passive configuration has an active cluster replicating data to a passive cluster. The passive cluster remains on standby. You might use the passive cluster for data recovery in the event of system failure. The expectation is that producers and consumers connect to active clusters only. A MirrorMaker 2 cluster is required at each target destination. 8.7.1.1. Bidirectional replication (active/active) The MirrorMaker 2 architecture supports bidirectional replication in an active/active cluster configuration. Each cluster replicates the data of the other cluster using the concept of source and remote topics. As the same topics are stored in each cluster, remote topics are automatically renamed by MirrorMaker 2 to represent the source cluster. The name of the originating cluster is prepended to the name of the topic. Figure 8.1. Topic renaming By flagging the originating cluster, topics are not replicated back to that cluster. The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster. 8.7.1.2. Unidirectional replication (active/passive) The MirrorMaker 2 architecture supports unidirectional replication in an active/passive cluster configuration. You can use an active/passive cluster configuration to make backups or migrate data to another cluster. In this situation, you might not want automatic renaming of remote topics. You can override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration. With this configuration applied, topics retain their original names. 8.7.2. Configuring MirrorMaker 2 connectors Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters. MirrorMaker 2 consists of the following connectors: MirrorSourceConnector The source connector replicates topics from a source cluster to a target cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run. MirrorCheckpointConnector The checkpoint connector periodically tracks offsets. If enabled, it also synchronizes consumer group offsets between the source and target cluster. MirrorHeartbeatConnector The heartbeat connector periodically checks connectivity between the source and target cluster. The following table describes connector properties and the connectors you configure to use them. Table 8.2. MirrorMaker 2 connector configuration properties Property sourceConnector checkpointConnector heartbeatConnector admin.timeout.ms Timeout for admin tasks, such as detecting new topics. Default is 60000 (1 minute). [✓] [✓] [✓] replication.policy.class Policy to define the remote topic naming convention. Default is org.apache.kafka.connect.mirror.DefaultReplicationPolicy . [✓] [✓] [✓] replication.policy.separator The separator used for topic naming in the target cluster. By default, the separator is set to a dot (.). Separator configuration is only applicable to the DefaultReplicationPolicy replication policy class, which defines remote topic names. The IdentityReplicationPolicy class does not use the property as topics retain their original names. [✓] [✓] [✓] consumer.poll.timeout.ms Timeout when polling the source cluster. Default is 1000 (1 second). 
[✓] [✓] offset-syncs.topic.location The location of the offset-syncs topic, which can be the source (default) or target cluster. [✓] [✓] topic.filter.class Topic filter to select the topics to replicate. Default is org.apache.kafka.connect.mirror.DefaultTopicFilter . [✓] [✓] config.property.filter.class Topic filter to select the topic configuration properties to replicate. Default is org.apache.kafka.connect.mirror.DefaultConfigPropertyFilter . [✓] config.properties.exclude Topic configuration properties that should not be replicated. Supports comma-separated property names and regular expressions. [✓] offset.lag.max Maximum allowable (out-of-sync) offset lag before a remote partition is synchronized. Default is 100 . [✓] offset-syncs.topic.replication.factor Replication factor for the internal offset-syncs topic. Default is 3 . [✓] refresh.topics.enabled Enables check for new topics and partitions. Default is true . [✓] refresh.topics.interval.seconds Frequency of topic refresh. Default is 600 (10 minutes). By default, a check for new topics in the source cluster is made every 10 minutes. You can change the frequency by adding refresh.topics.interval.seconds to the source connector configuration. [✓] replication.factor The replication factor for new topics. Default is 2 . [✓] sync.topic.acls.enabled Enables synchronization of ACLs from the source cluster. Default is true . For more information, see Section 8.7.5, "Synchronizing ACL rules for remote topics" . [✓] sync.topic.acls.interval.seconds Frequency of ACL synchronization. Default is 600 (10 minutes). [✓] sync.topic.configs.enabled Enables synchronization of topic configuration from the source cluster. Default is true . [✓] sync.topic.configs.interval.seconds Frequency of topic configuration synchronization. Default 600 (10 minutes). [✓] checkpoints.topic.replication.factor Replication factor for the internal checkpoints topic. Default is 3 . [✓] emit.checkpoints.enabled Enables synchronization of consumer offsets to the target cluster. Default is true . [✓] emit.checkpoints.interval.seconds Frequency of consumer offset synchronization. Default is 60 (1 minute). [✓] group.filter.class Group filter to select the consumer groups to replicate. Default is org.apache.kafka.connect.mirror.DefaultGroupFilter . [✓] refresh.groups.enabled Enables check for new consumer groups. Default is true . [✓] refresh.groups.interval.seconds Frequency of consumer group refresh. Default is 600 (10 minutes). [✓] sync.group.offsets.enabled Enables synchronization of consumer group offsets to the target cluster __consumer_offsets topic. Default is false . [✓] sync.group.offsets.interval.seconds Frequency of consumer group offset synchronization. Default is 60 (1 minute). [✓] emit.heartbeats.enabled Enables connectivity checks on the target cluster. Default is true . [✓] emit.heartbeats.interval.seconds Frequency of connectivity checks. Default is 1 (1 second). [✓] heartbeats.topic.replication.factor Replication factor for the internal heartbeats topic. Default is 3 . [✓] 8.7.2.1. Changing the location of the consumer group offsets topic MirrorMaker 2 tracks offsets for consumer groups using internal topics. offset-syncs topic The offset-syncs topic maps the source and target offsets for replicated topic partitions from record metadata. checkpoints topic The checkpoints topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group. 
As they are used internally by MirrorMaker 2, you do not interact directly with these topics. MirrorCheckpointConnector emits checkpoints for offset tracking. Offsets for the checkpoints topic are tracked at predetermined intervals through configuration. Both topics enable replication to be fully restored from the correct offset position on failover. The location of the offset-syncs topic is the source cluster by default. You can use the offset-syncs.topic.location connector configuration to change this to the target cluster. You need read/write access to the cluster that contains the topic. Using the target cluster as the location of the offset-syncs topic allows you to use MirrorMaker 2 even if you have only read access to the source cluster. 8.7.2.2. Synchronizing consumer group offsets The __consumer_offsets topic stores information on committed offsets for each consumer group. Offset synchronization periodically transfers the consumer offsets for the consumer groups of a source cluster into the consumer offsets topic of a target cluster. Offset synchronization is particularly useful in an active/passive configuration. If the active cluster goes down, consumer applications can switch to the passive (standby) cluster and pick up from the last transferred offset position. To use topic offset synchronization, enable the synchronization by adding sync.group.offsets.enabled to the checkpoint connector configuration, and setting the property to true . Synchronization is disabled by default. When using the IdentityReplicationPolicy in the source connector, it also has to be configured in the checkpoint connector configuration. This ensures that the mirrored consumer offsets will be applied for the correct topics. Consumer offsets are only synchronized for consumer groups that are not active in the target cluster. If the consumer groups are in the target cluster, the synchronization cannot be performed and an UNKNOWN_MEMBER_ID error is returned. If enabled, the synchronization of offsets from the source cluster is made periodically. You can change the frequency by adding sync.group.offsets.interval.seconds and emit.checkpoints.interval.seconds to the checkpoint connector configuration. The properties specify the frequency in seconds that the consumer group offsets are synchronized, and the frequency of checkpoints emitted for offset tracking. The default for both properties is 60 seconds. You can also change the frequency of checks for new consumer groups using the refresh.groups.interval.seconds property, which is performed every 10 minutes by default. Because the synchronization is time-based, any switchover by consumers to a passive cluster will likely result in some duplication of messages. Note If you have an application written in Java, you can use the RemoteClusterUtils.java utility to synchronize offsets through the application. The utility fetches remote offsets for a consumer group from the checkpoints topic. 8.7.2.3. Deciding when to use the heartbeat connector The heartbeat connector emits heartbeats to check connectivity between source and target Kafka clusters. An internal heartbeat topic is replicated from the source cluster, which means that the heartbeat connector must be connected to the source cluster. 
The heartbeat topic is located on the target cluster, which allows it to do the following: Identify all source clusters it is mirroring data from Verify the liveness and latency of the mirroring process This helps to make sure that the process is not stuck or has stopped for any reason. While the heartbeat connector can be a valuable tool for monitoring the mirroring processes between Kafka clusters, it's not always necessary to use it. For example, if your deployment has low network latency or a small number of topics, you might prefer to monitor the mirroring process using log messages or other monitoring tools. If you decide not to use the heartbeat connector, simply omit it from your MirrorMaker 2 configuration. 8.7.2.4. Aligning the configuration of MirrorMaker 2 connectors To ensure that MirrorMaker 2 connectors work properly, make sure to align certain configuration settings across connectors. Specifically, ensure that the following properties have the same value across all applicable connectors: replication.policy.class replication.policy.separator offset-syncs.topic.location topic.filter.class For example, the value for replication.policy.class must be the same for the source, checkpoint, and heartbeat connectors. Mismatched or missing settings cause issues with data replication or offset syncing, so it's essential to keep all relevant connectors configured with the same settings. 8.7.3. Configuring MirrorMaker 2 connector producers and consumers MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings. For example, you can increase the batch.size for the source producer that sends topics to the target Kafka cluster to better accommodate large volumes of messages. Important Producer and consumer configuration options depend on the MirrorMaker 2 implementation, and may be subject to change. The following tables describe the producers and consumers for each of the connectors and where you can add configuration. Table 8.3. Source connector producers and consumers Type Description Configuration Producer Sends topic messages to the target Kafka cluster. Consider tuning the configuration of this producer when it is handling large volumes of data. mirrors.sourceConnector.config: producer.override.* Producer Writes to the offset-syncs topic, which maps the source and target offsets for replicated topic partitions. mirrors.sourceConnector.config: producer.* Consumer Retrieves topic messages from the source Kafka cluster. mirrors.sourceConnector.config: consumer.* Table 8.4. Checkpoint connector producers and consumers Type Description Configuration Producer Emits consumer offset checkpoints. mirrors.checkpointConnector.config: producer.override.* Consumer Loads the offset-syncs topic. mirrors.checkpointConnector.config: consumer.* Note You can set offset-syncs.topic.location to target to use the target Kafka cluster as the location of the offset-syncs topic. Table 8.5. Heartbeat connector producer Type Description Configuration Producer Emits heartbeats. mirrors.heartbeatConnector.config: producer.override.* The following example shows how you configure the producers and consumers. Example configuration for connector producers and consumers apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.5.0 # ... 
mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: tasksMax: 5 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 producer.request.timeout.ms: 30000 consumer.fetch.max.bytes: 52428800 # ... checkpointConnector: config: producer.override.request.timeout.ms: 30000 consumer.max.poll.interval.ms: 300000 # ... heartbeatConnector: config: producer.override.request.timeout.ms: 30000 # ... 8.7.4. Specifying a maximum number of data replication tasks Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. Increasing the number of tasks can help with performance issues when replicating a large number of partitions or synchronizing the offsets of a large number of consumer groups. Tasks run in parallel. Workers are assigned one or more tasks. A single task is handled by one worker pod, so you don't need more worker pods than tasks. If there are more tasks than workers, workers handle multiple tasks. You can specify the maximum number of connector tasks in your MirrorMaker configuration using the tasksMax property. Without specifying a maximum number of tasks, the default setting is a single task. The heartbeat connector always uses a single task. The number of tasks that are started for the source and checkpoint connectors is the lower value between the maximum number of possible tasks and the value for tasksMax . For the source connector, the maximum number of tasks possible is one for each partition being replicated from the source cluster. For the checkpoint connector, the maximum number of tasks possible is one for each consumer group being replicated from the source cluster. When setting a maximum number of tasks, consider the number of partitions and the hardware resources that support the process. If the infrastructure supports the processing overhead, increasing the number of tasks can improve throughput and latency. For example, adding more tasks reduces the time taken to poll the source cluster when there is a high number of partitions or consumer groups. Increasing the number of tasks for the source connector is useful when you have a large number of partitions. Increasing the number of tasks for the source connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # ... mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: tasksMax: 10 # ... Increasing the number of tasks for the checkpoint connector is useful when you have a large number of consumer groups. Increasing the number of tasks for the checkpoint connector apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # ... mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" checkpointConnector: tasksMax: 10 # ... By default, MirrorMaker 2 checks for new consumer groups every 10 minutes. You can adjust the refresh.groups.interval.seconds configuration to change the frequency. Take care when adjusting lower. More frequent checks can have a negative impact on performance. 8.7.4.1. Checking connector task operations If you are using Prometheus and Grafana to monitor your deployment, you can check MirrorMaker 2 performance. The example MirrorMaker 2 Grafana dashboard provided with AMQ Streams shows the following metrics related to tasks and latency. 
The number of tasks Replication latency Offset synchronization latency Additional resources Chapter 20, Setting up metrics and dashboards for AMQ Streams 8.7.5. Synchronizing ACL rules for remote topics When using MirrorMaker 2 with AMQ Streams, it is possible to synchronize ACL rules for remote topics. However, this feature is only available if you are not using the User Operator. If you are using type: simple authorization without the User Operator, the ACL rules that manage access to brokers also apply to remote topics. This means that users who have read access to a source topic can also read its remote equivalent. Note OAuth 2.0 authorization does not support access to remote topics in this way. 8.7.6. Securing a Kafka MirrorMaker 2 deployment This procedure describes in outline the configuration required to secure a MirrorMaker 2 deployment. You need separate configuration for the source Kafka cluster and the target Kafka cluster. You also need separate user configuration to provide the credentials required for MirrorMaker to connect to the source and target Kafka clusters. For the Kafka clusters, you specify internal listeners for secure connections within an OpenShift cluster and external listeners for connections outside the OpenShift cluster. You can configure authentication and authorization mechanisms. The security options implemented for the source and target Kafka clusters must be compatible with the security options implemented for MirrorMaker 2. After you have created the cluster and user authentication credentials, you specify them in your MirrorMaker configuration for secure connections. Note In this procedure, the certificates generated by the Cluster Operator are used, but you can replace them by installing your own certificates . You can also configure your listener to use a Kafka listener certificate managed by an external CA (certificate authority) . Before you start Before starting this procedure, take a look at the example configuration files provided by AMQ Streams. They include examples for securing a deployment of MirrorMaker 2 using mTLS or SCRAM-SHA-512 authentication. The examples specify internal listeners for connecting within an OpenShift cluster. The examples provide the configuration for full authorization, including all the ACLs needed by MirrorMaker 2 to allow operations on the source and target Kafka clusters. Prerequisites AMQ Streams is running Separate namespaces for source and target clusters The procedure assumes that the source and target Kafka clusters are installed to separate namespaces If you want to use the Topic Operator, you'll need to do this. The Topic Operator only watches a single cluster in a specified namespace. By separating the clusters into namespaces, you will need to copy the cluster secrets so they can be accessed outside the namespace. You need to reference the secrets in the MirrorMaker configuration. Procedure Configure two Kafka resources, one to secure the source Kafka cluster and one to secure the target Kafka cluster. You can add listener configuration for authentication and enable authorization. In this example, an internal listener is configured for a Kafka cluster with TLS encryption and mTLS authentication. Kafka simple authorization is enabled. 
Example source Kafka cluster configuration with TLS encryption and mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-source-cluster spec: kafka: version: 3.5.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: "3.5" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} Example target Kafka cluster configuration with TLS encryption and mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-target-cluster spec: kafka: version: 3.5.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: "3.5" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {} Create or update the Kafka resources in separate namespaces. oc apply -f <kafka_configuration_file> -n <namespace> The Cluster Operator creates the listeners and sets up the cluster and client certificate authority (CA) certificates to enable authentication within the Kafka cluster. The certificates are created in the secret <cluster_name> -cluster-ca-cert . Configure two KafkaUser resources, one for a user of the source Kafka cluster and one for a user of the target Kafka cluster. Configure the same authentication and authorization types as the corresponding source and target Kafka cluster. For example, if you used tls authentication and the simple authorization type in the Kafka configuration for the source Kafka cluster, use the same in the KafkaUser configuration. Configure the ACLs needed by MirrorMaker 2 to allow operations on the source and target Kafka clusters. The ACLs are used by the internal MirrorMaker connectors, and by the underlying Kafka Connect framework. 
Example source user configuration for mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-source-user labels: strimzi.io/cluster: my-source-cluster spec: authentication: type: tls authorization: type: simple acls: # MirrorSourceConnector - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Create - DescribeConfigs - Read - Write - resource: # Needed for every topic which is mirrored type: topic name: "*" operations: - DescribeConfigs - Read # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: # Needed for every group for which offsets are synced type: group name: "*" operations: - Describe - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Read Example target user configuration for mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-target-user labels: strimzi.io/cluster: my-target-cluster spec: authentication: type: tls authorization: type: simple acls: # Underlying Kafka Connect internal topics to store configuration, offsets, or status - resource: type: group name: mirrormaker2-cluster operations: - Read - resource: type: topic name: mirrormaker2-cluster-configs operations: - Create - Describe - DescribeConfigs - Read - Write - resource: type: topic name: mirrormaker2-cluster-status operations: - Create - Describe - DescribeConfigs - Read - Write - resource: type: topic name: mirrormaker2-cluster-offsets operations: - Create - Describe - DescribeConfigs - Read - Write # MirrorSourceConnector - resource: # Needed for every topic which is mirrored type: topic name: "*" operations: - Create - Alter - AlterConfigs - Write # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: type: topic name: my-source-cluster.checkpoints.internal operations: - Create - Describe - Read - Write - resource: # Needed for every group for which the offset is synced type: group name: "*" operations: - Read - Describe # MirrorHeartbeatConnector - resource: type: topic name: heartbeats operations: - Create - Describe - Write Note You can use a certificate issued outside the User Operator by setting type to tls-external . For more information, see the KafkaUserSpec schema reference . Create or update a KafkaUser resource in each of the namespaces you created for the source and target Kafka clusters. oc apply -f <kafka_user_configuration_file> -n <namespace> The User Operator creates the users representing the client (MirrorMaker), and the security credentials used for client authentication, based on the chosen authentication type. The User Operator creates a new secret with the same name as the KafkaUser resource. The secret contains a private and public key for mTLS authentication. The public key is contained in a user certificate, which is signed by the clients CA. Configure a KafkaMirrorMaker2 resource with the authentication details to connect to the source and target Kafka clusters. 
Example MirrorMaker 2 configuration with TLS encryption and mTLS authentication apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker-2 spec: version: 3.5.0 replicas: 1 connectCluster: "my-target-cluster" clusters: - alias: "my-source-cluster" bootstrapServers: my-source-cluster-kafka-bootstrap:9093 tls: 1 trustedCertificates: - secretName: my-source-cluster-cluster-ca-cert certificate: ca.crt authentication: 2 type: tls certificateAndKey: secretName: my-source-user certificate: user.crt key: user.key - alias: "my-target-cluster" bootstrapServers: my-target-cluster-kafka-bootstrap:9093 tls: 3 trustedCertificates: - secretName: my-target-cluster-cluster-ca-cert certificate: ca.crt authentication: 4 type: tls certificateAndKey: secretName: my-target-user certificate: user.crt key: user.key config: # -1 means it will use the default replication factor configured in the broker config.storage.replication.factor: -1 offset.storage.replication.factor: -1 status.storage.replication.factor: -1 mirrors: - sourceCluster: "my-source-cluster" targetCluster: "my-target-cluster" sourceConnector: config: replication.factor: 1 offset-syncs.topic.replication.factor: 1 sync.topic.acls.enabled: "false" heartbeatConnector: config: heartbeats.topic.replication.factor: 1 checkpointConnector: config: checkpoints.topic.replication.factor: 1 sync.group.offsets.enabled: "true" topicsPattern: "topic1|topic2|topic3" groupsPattern: "group1|group2|group3" 1 The TLS certificates for the source Kafka cluster. If they are in a separate namespace, copy the cluster secrets from the namespace of the Kafka cluster. 2 The user authentication for accessing the source Kafka cluster using the TLS mechanism. 3 The TLS certificates for the target Kafka cluster. 4 The user authentication for accessing the target Kafka cluster. Create or update the KafkaMirrorMaker2 resource in the same namespace as the target Kafka cluster. oc apply -f <mirrormaker2_configuration_file> -n <namespace_of_target_cluster> 8.8. Configuring Kafka MirrorMaker (deprecated) Update the spec properties of the KafkaMirrorMaker custom resource to configure your Kafka MirrorMaker deployment. You can configure access control for producers and consumers using TLS or SASL authentication. This procedure shows a configuration that uses TLS encryption and mTLS authentication on the consumer and producer side. For a deeper understanding of the Kafka MirrorMaker cluster configuration options, refer to the AMQ Streams Custom Resource API Reference . Important Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in AMQ Streams as well. The KafkaMirrorMaker resource will be removed from AMQ Streams when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy . 
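As a hedged sketch of that replacement path, the following KafkaMirrorMaker2 resource shows how the IdentityReplicationPolicy might be applied for unidirectional replication so that mirrored topics keep their original names. The resource name, cluster aliases, and bootstrap addresses are placeholders; as noted in the connector configuration above, set replication.policy.class on the source, checkpoint, and heartbeat connectors alike.
Sketch: KafkaMirrorMaker2 replacement using IdentityReplicationPolicy
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: my-mirror-maker2
spec:
  version: 3.5.0
  replicas: 1
  connectCluster: "my-target-cluster"
  clusters:
    - alias: "my-source-cluster"
      bootstrapServers: my-source-cluster-kafka-bootstrap:9092
    - alias: "my-target-cluster"
      bootstrapServers: my-target-cluster-kafka-bootstrap:9092
  mirrors:
    - sourceCluster: "my-source-cluster"
      targetCluster: "my-target-cluster"
      sourceConnector:
        config:
          # Keep the original topic names instead of prefixing them with the source cluster name
          replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy"
      checkpointConnector:
        config:
          replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy"
      heartbeatConnector:
        config:
          replication.policy.class: "org.apache.kafka.connect.mirror.IdentityReplicationPolicy"
      topicsPattern: "my-topic|other-topic"
      groupsPattern: "my-group"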
Example KafkaMirrorMaker custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: replicas: 3 1 consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2 groupId: "my-group" 3 numStreams: 2 4 offsetCommitInterval: 120000 5 tls: 6 trustedCertificates: - secretName: my-source-cluster-ca-cert certificate: ca.crt authentication: 7 type: tls certificateAndKey: secretName: my-source-secret certificate: public.crt key: private.key config: 8 max.poll.records: 100 receive.buffer.bytes: 32768 producer: bootstrapServers: my-target-cluster-kafka-bootstrap:9092 abortOnSendFailure: false 9 tls: trustedCertificates: - secretName: my-target-cluster-ca-cert certificate: ca.crt authentication: type: tls certificateAndKey: secretName: my-target-secret certificate: public.crt key: private.key config: compression.type: gzip batch.size: 8192 include: "my-topic|other-topic" 10 resources: 11 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 12 type: inline loggers: mirrormaker.root.logger: INFO readinessProbe: 13 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 14 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 15 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 16 template: 17 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" mirrorMakerContainer: 18 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: 19 type: opentelemetry 1 The number of replica nodes. 2 Bootstrap servers for consumer and producer. 3 Group ID for the consumer. 4 The number of consumer streams. 5 The offset auto-commit interval in milliseconds. 6 TLS encryption with key names under which TLS certificates are stored in X.509 format for consumer or producer. If certificates are stored in the same secret, it can be listed multiple times. 7 Authentication for consumer or producer, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. 8 Kafka configuration options for consumer and producer. 9 If the abortOnSendFailure property is set to true , Kafka MirrorMaker will exit and the container will restart following a send failure for a message. 10 A list of included topics mirrored from source to target Kafka cluster. 11 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 12 Specified loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. MirrorMaker has a single logger called mirrormaker.root.logger . You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 13 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 14 Prometheus metrics, which are enabled by referencing a ConfigMap containing configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using a reference to a ConfigMap containing an empty file under metricsConfig.valueFrom.configMapKeyRef.key . 
15 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker. 16 ADVANCED OPTION: Container image configuration, which is recommended only in special situations. 17 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 18 Environment variables are set for distributed tracing. 19 Distributed tracing is enabled by using OpenTelemetry. Warning With the abortOnSendFailure property set to false, the producer attempts to send the next message in a topic. The original message might be lost, as there is no attempt to resend a failed message. 8.9. Configuring the Kafka Bridge Update the spec properties of the KafkaBridge custom resource to configure your Kafka Bridge deployment. To prevent issues when client consumer requests are processed by different Kafka Bridge instances, address-based routing must be used to ensure that requests are routed to the right Kafka Bridge instance. Additionally, each independent Kafka Bridge instance must have a replica. A Kafka Bridge instance has its own state, which is not shared with other instances. For a deeper understanding of the Kafka Bridge cluster configuration options, refer to the AMQ Streams Custom Resource API Reference. Example KafkaBridge custom resource configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: replicas: 3 1 bootstrapServers: <cluster_name> -cluster-kafka-bootstrap:9092 2 tls: 3 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt authentication: 4 type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key http: 5 port: 8080 cors: 6 allowedOrigins: "https://strimzi.io" allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" consumer: 7 config: auto.offset.reset: earliest producer: 8 config: delivery.timeout.ms: 300000 resources: 9 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 10 type: inline loggers: logger.bridge.level: INFO # enabling DEBUG just for send operation logger.send.name: "http.openapi.operation.send" logger.send.level: DEBUG jvmOptions: 11 "-Xmx": "1g" "-Xms": "1g" readinessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 image: my-org/my-image:latest 13 template: 14 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" bridgeContainer: 15 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: "http://otlp-host:4317" tracing: type: opentelemetry 16 1 The number of replica nodes. 2 Bootstrap server for connection to the target Kafka cluster. Use the name of the Kafka cluster as the <cluster_name>. 3 TLS encryption with key names under which TLS certificates are stored in X.509 format for the target Kafka cluster. If certificates are stored in the same secret, it can be listed multiple times. 4 Authentication for the Kafka Bridge cluster, specified as mTLS, token-based OAuth, SASL-based SCRAM-SHA-256/SCRAM-SHA-512, or PLAIN. By default, the Kafka Bridge connects to Kafka brokers without authentication. 5 HTTP access to Kafka brokers. 6 CORS access specifying selected resources and access methods.
Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster. 7 Consumer configuration options. 8 Producer configuration options. 9 Requests for reservation of supported resources, currently cpu and memory , and limits to specify the maximum resources that can be consumed. 10 Specified Kafka Bridge loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom Log4j configuration must be placed under the log4j.properties or log4j2.properties key in the ConfigMap. For the Kafka Bridge loggers, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 11 JVM configuration options to optimize performance for the Virtual Machine (VM) running the Kafka Bridge. 12 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 13 Optional: Container image configuration, which is recommended only in special situations. 14 Template customization. Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 15 Environment variables are set for distributed tracing. 16 Distributed tracing is enabled by using OpenTelemetry. Additional resources Using the AMQ Streams Kafka Bridge 8.10. Configuring Kafka and ZooKeeper storage As stateful applications, Kafka and ZooKeeper store data on disk. AMQ Streams supports three storage types for this data: Ephemeral (Recommended for development only) Persistent JBOD ( Kafka only not ZooKeeper) When configuring a Kafka resource, you can specify the type of storage used by the Kafka broker and its corresponding ZooKeeper node. You configure the storage type using the storage property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper The storage type is configured in the type field. Refer to the schema reference for more information on storage configuration properties: EphemeralStorage schema reference PersistentClaimStorage schema reference JbodStorage schema reference Warning The storage type cannot be changed after a Kafka cluster is deployed. 8.10.1. Data storage considerations For AMQ Streams to work well, an efficient data storage infrastructure is essential. We strongly recommend using block storage. AMQ Streams is only tested for use with block storage. File storage, such as NFS, is not tested and there is no guarantee it will work. Choose one of the following options for your block storage: A cloud-based block storage solution, such as Amazon Elastic Block Store (EBS) Persistent storage using local persistent volumes Storage Area Network (SAN) volumes accessed by a protocol such as Fibre Channel or iSCSI Note AMQ Streams does not require OpenShift raw block volumes. 8.10.1.1. File systems Kafka uses a file system for storing messages. AMQ Streams is compatible with the XFS and ext4 file systems, which are commonly used with Kafka. Consider the underlying architecture and requirements of your deployment when choosing and setting up your file system. For more information, refer to Filesystem Selection in the Kafka documentation. 8.10.1.2. Disk usage Use separate disks for Apache Kafka and ZooKeeper. Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. SSDs are particularly effective with ZooKeeper, which requires fast, low latency data access. 
Note You do not need to provision replicated storage because Kafka and ZooKeeper both have built-in data replication. 8.10.2. Ephemeral storage Ephemeral data storage is transient. All pods on a node share a local ephemeral storage space. Data is retained for as long as the pod that uses it is running. The data is lost when a pod is deleted, although a pod can recover data in a highly available environment. Because of its transient nature, ephemeral storage is only recommended for development and testing. Ephemeral storage uses emptyDir volumes to store data. An emptyDir volume is created when a pod is assigned to a node. You can set the total amount of storage for the emptyDir using the sizeLimit property. Important Ephemeral storage is not suitable for single-node ZooKeeper clusters or Kafka topics with a replication factor of 1. To use ephemeral storage, you set the storage type configuration in the Kafka or ZooKeeper resource to ephemeral. Example ephemeral storage configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: ephemeral # ... zookeeper: # ... storage: type: ephemeral # ... 8.10.2.1. Mount path of Kafka log directories The ephemeral volume is used by Kafka brokers as log directories mounted into the following path: /var/lib/kafka/data/kafka-log IDX Where IDX is the Kafka broker pod index. For example, /var/lib/kafka/data/kafka-log0. 8.10.3. Persistent storage Persistent data storage retains data in the event of system disruption. For pods that use persistent data storage, data is persisted across pod failures and restarts. A dynamic provisioning framework enables clusters to be created with persistent storage. Pod configuration uses Persistent Volume Claims (PVCs) to make storage requests on persistent volumes (PVs). PVs are storage resources that represent a storage volume. PVs are independent of the pods that use them. The PVC requests the amount of storage required when a pod is being created. The underlying storage infrastructure of the PV does not need to be understood. If a PV matches the storage criteria, the PVC is bound to the PV. Because of its permanent nature, persistent storage is recommended for production. PVCs can request different types of persistent storage by specifying a StorageClass. Storage classes define storage profiles and dynamically provision PVs. If a storage class is not specified, the default storage class is used. Persistent storage options might include SAN storage types or local persistent volumes. To use persistent storage, you set the storage type configuration in the Kafka or ZooKeeper resource to persistent-claim. In a production environment, the following configuration is recommended: For Kafka, configure type: jbod with one or more type: persistent-claim volumes For ZooKeeper, configure type: persistent-claim Persistent storage also has the following configuration options: id (optional) A storage identification number. This option is mandatory for storage volumes defined in a JBOD storage declaration. Default is 0. size (required) The size of the persistent volume claim, for example, "1000Gi". class (optional) The OpenShift StorageClass to use for dynamic volume provisioning. Storage class configuration includes parameters that describe the profile of a volume in detail. selector (optional) Configuration to specify a specific PV. Provides key:value pairs representing the labels of the volume selected.
deleteClaim (optional) Boolean value to specify whether the PVC is deleted when the cluster is uninstalled. Default is false . Warning Increasing the size of persistent volumes in an existing AMQ Streams cluster is only supported in OpenShift versions that support persistent volume resizing. The persistent volume to be resized must use a storage class that supports volume expansion. For other versions of OpenShift and storage classes that do not support volume expansion, you must decide the necessary storage size before deploying the cluster. Decreasing the size of existing persistent volumes is not possible. Example persistent storage configuration for Kafka and ZooKeeper # ... spec: kafka: # ... storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # ... zookeeper: storage: type: persistent-claim size: 1000Gi # ... If you do not specify a storage class, the default is used. The following example specifies a storage class. Example persistent storage configuration with specific storage class # ... storage: type: persistent-claim size: 1Gi class: my-storage-class # ... Use a selector to specify a labeled persistent volume that provides certain features, such as an SSD. Example persistent storage configuration with selector # ... storage: type: persistent-claim size: 1Gi selector: hdd-type: ssd deleteClaim: true # ... 8.10.3.1. Storage class overrides Instead of using the default storage class, you can specify a different storage class for one or more Kafka brokers or ZooKeeper nodes. This is useful, for example, when storage classes are restricted to different availability zones or data centers. You can use the overrides field for this purpose. In this example, the default storage class is named my-storage-class : Example AMQ Streams cluster using storage class overrides apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: # ... kafka: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # ... # ... zookeeper: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # ... As a result of the configured overrides property, the volumes use the following storage classes: The persistent volumes of ZooKeeper node 0 use my-storage-class-zone-1a . The persistent volumes of ZooKeeper node 1 use my-storage-class-zone-1b . The persistent volumes of ZooKeeper node 2 use my-storage-class-zone-1c . The persistent volumes of Kafka broker 0 use my-storage-class-zone-1a . The persistent volumes of Kafka broker 1 use my-storage-class-zone-1b . The persistent volumes of Kafka broker 2 use my-storage-class-zone-1c . The overrides property is currently used only to override storage class configurations. Overrides for other storage configuration properties are not currently supported. 8.10.3.2.
PVC resources for persistent storage When persistent storage is used, it creates PVCs with the following names: data- cluster-name -kafka- idx PVC for the volume used for storing data for the Kafka broker pod idx . data- cluster-name -zookeeper- idx PVC for the volume used for storing data for the ZooKeeper node pod idx . 8.10.3.3. Mount path of Kafka log directories The persistent volume is used by the Kafka brokers as log directories mounted into the following path: /var/lib/kafka/data/kafka-log IDX Where IDX is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0 . 8.10.4. Resizing persistent volumes Persistent volumes used by a cluster can be resized without any risk of data loss, as long as the storage infrastructure supports it. Following a configuration update to change the size of the storage, AMQ Streams instructs the storage infrastructure to make the change. Storage expansion is supported in AMQ Streams clusters that use persistent-claim volumes. Storage reduction is only possible when using multiple disks per broker. You can remove a disk after moving all partitions on the disk to other volumes within the same broker (intra-broker) or to other brokers within the same cluster (intra-cluster). Important You cannot decrease the size of persistent volumes because it is not currently supported in OpenShift. Prerequisites An OpenShift cluster with support for volume resizing. The Cluster Operator is running. A Kafka cluster using persistent volumes created using a storage class that supports volume expansion. Procedure Edit the Kafka resource for your cluster. Change the size property to increase the size of the persistent volume allocated to a Kafka cluster, a ZooKeeper cluster, or both. For Kafka clusters, update the size property under spec.kafka.storage . For ZooKeeper clusters, update the size property under spec.zookeeper.storage . Kafka configuration to increase the volume size to 2000Gi apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: persistent-claim size: 2000Gi class: my-storage-class # ... zookeeper: # ... Create or update the resource: oc apply -f <kafka_configuration_file> OpenShift increases the capacity of the selected persistent volumes in response to a request from the Cluster Operator. When the resizing is complete, the Cluster Operator restarts all pods that use the resized persistent volumes. This happens automatically. Verify that the storage capacity has increased for the relevant pods on the cluster: oc get pv Kafka broker pods with increased storage NAME CAPACITY CLAIM pvc-0ca459ce-... 2000Gi my-project/data-my-cluster-kafka-2 pvc-6e1810be-... 2000Gi my-project/data-my-cluster-kafka-0 pvc-82dc78c9-... 2000Gi my-project/data-my-cluster-kafka-1 The output shows the names of each PVC associated with a broker pod. Additional resources For more information about resizing persistent volumes in OpenShift, see Resizing Persistent Volumes using Kubernetes . 8.10.5. JBOD storage You can configure AMQ Streams to use JBOD, a data storage configuration of multiple disks or volumes. JBOD is one approach to providing increased data storage for Kafka brokers. It can also improve performance. Note JBOD storage is supported for Kafka only not ZooKeeper. A JBOD configuration is described by one or more volumes, each of which can be either ephemeral or persistent . The rules and constraints for JBOD volume declarations are the same as those for ephemeral and persistent storage. 
For example, you cannot decrease the size of a persistent storage volume after it has been provisioned, nor can you change the value of sizeLimit when the type is ephemeral . To use JBOD storage, you set the storage type configuration in the Kafka resource to jbod . The volumes property allows you to describe the disks that make up your JBOD storage array or configuration. Example JBOD storage configuration # ... storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false # ... The IDs cannot be changed once the JBOD volumes are created. You can add or remove volumes from the JBOD configuration. 8.10.5.1. PVC resource for JBOD storage When persistent storage is used to declare JBOD volumes, it creates a PVC with the following name: data- id - cluster-name -kafka- idx PVC for the volume used for storing data for the Kafka broker pod idx . The id is the ID of the volume used for storing data for the Kafka broker pod. 8.10.5.2. Mount path of Kafka log directories The JBOD volumes are used by Kafka brokers as log directories mounted into the following path: /var/lib/kafka/data- id /kafka-log idx Where id is the ID of the volume used for storing data for Kafka broker pod idx . For example /var/lib/kafka/data-0/kafka-log0 . 8.10.6. Adding volumes to JBOD storage This procedure describes how to add volumes to a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. Note When adding a new volume under an id that was used and removed in the past, make sure that the previously used PersistentVolumeClaims have been deleted. Prerequisites An OpenShift cluster A running Cluster Operator A Kafka cluster with JBOD storage Procedure Edit the spec.kafka.storage.volumes property in the Kafka resource. Add the new volumes to the volumes array. For example, add the new volume with id 2 : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # ... zookeeper: # ... Create or update the resource: oc apply -f <kafka_configuration_file> Create new topics or reassign existing partitions to the new disks. Tip Cruise Control is an effective tool for reassigning partitions. To perform an intra-broker disk balance, you set rebalanceDisk to true under the KafkaRebalance.spec . 8.10.7. Removing volumes from JBOD storage This procedure describes how to remove volumes from a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. The JBOD storage always has to contain at least one volume. Important To avoid data loss, you have to move all partitions before removing the volumes. Prerequisites An OpenShift cluster A running Cluster Operator A Kafka cluster with JBOD storage with two or more volumes Procedure Reassign all partitions from the disks that you are going to remove. Any data in partitions still assigned to the disks that are going to be removed might be lost. Tip You can use the kafka-reassign-partitions.sh tool to reassign the partitions. Edit the spec.kafka.storage.volumes property in the Kafka resource. Remove one or more volumes from the volumes array.
For example, remove the volumes with ids 1 and 2 : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # ... zookeeper: # ... Create or update the resource: oc apply -f <kafka_configuration_file> 8.11. Configuring CPU and memory resource limits and requests By default, the AMQ Streams Cluster Operator does not specify CPU and memory resource requests and limits for its deployed operands. Ensuring an adequate allocation of resources is crucial for maintaining stability and achieving optimal performance in Kafka. The ideal resource allocation depends on your specific requirements and use cases. It is recommended to configure CPU and memory resources for each container by setting appropriate requests and limits . 8.12. Configuring pod scheduling To avoid performance degradation caused by resource conflicts between applications scheduled on the same OpenShift node, you can schedule Kafka pods separately from critical workloads. This can be achieved by either selecting specific nodes or dedicating a set of nodes exclusively for Kafka. 8.12.1. Specifying affinity, tolerations, and topology spread constraints Use affinity, tolerations, and topology spread constraints to schedule the pods of Kafka resources onto nodes. Affinity, tolerations, and topology spread constraints are configured using the affinity , tolerations , and topologySpreadConstraint properties in the following resources: Kafka.spec.kafka.template.pod Kafka.spec.zookeeper.template.pod Kafka.spec.entityOperator.template.pod KafkaConnect.spec.template.pod KafkaBridge.spec.template.pod KafkaMirrorMaker.spec.template.pod KafkaMirrorMaker2.spec.template.pod The format of the affinity , tolerations , and topologySpreadConstraint properties follows the OpenShift specification. The affinity configuration can include different types of affinity: Pod affinity and anti-affinity Node affinity Additional resources Kubernetes node and pod affinity documentation Kubernetes taints and tolerations Controlling pod placement by using pod topology spread constraints 8.12.1.1. Use pod anti-affinity to avoid critical applications sharing nodes Use pod anti-affinity to ensure that critical applications are never scheduled on the same node. When running a Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share nodes with other workloads, such as databases. 8.12.1.2. Use node affinity to schedule workloads onto specific nodes An OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while others might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow AMQ Streams components to be scheduled onto the right nodes. OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either a built-in node label, such as beta.kubernetes.io/instance-type , or custom labels to select the right node. 8.12.1.3.
Use node affinity and tolerations for dedicated nodes Use taints to create dedicated nodes, then schedule Kafka pods on the dedicated nodes by configuring node affinity and tolerations. Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability. 8.12.2. Configuring pod anti-affinity to schedule each Kafka broker on a different worker node Many Kafka brokers or ZooKeeper nodes can run on the same OpenShift worker node. If the worker node fails, they will all become unavailable at the same time. To improve reliability, you can use podAntiAffinity configuration to schedule each Kafka broker or ZooKeeper node on a different OpenShift worker node. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the affinity property in the resource specifying the cluster deployment. To make sure that no worker nodes are shared by Kafka brokers or ZooKeeper nodes, use the strimzi.io/name label. Set the topologyKey to kubernetes.io/hostname to specify that the selected pods are not scheduled on nodes with the same hostname. This will still allow the same worker node to be shared by a single Kafka broker and a single ZooKeeper node. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -kafka topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -zookeeper topologyKey: "kubernetes.io/hostname" # ... Where CLUSTER-NAME is the name of your Kafka custom resource. If you even want to make sure that a Kafka broker and ZooKeeper node do not share the same worker node, use the strimzi.io/cluster label. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: "kubernetes.io/hostname" # ... Where CLUSTER-NAME is the name of your Kafka custom resource. Create or update the resource. oc apply -f <kafka_configuration_file> 8.12.3. Configuring pod anti-affinity in Kafka components Pod anti-affinity configuration helps with the stability and performance of Kafka brokers. By using podAntiAffinity , OpenShift will not schedule Kafka brokers on the same nodes as other workloads. 
Typically, you want to avoid Kafka running on the same worker node as other network or storage intensive applications such as databases, storage or other messaging platforms. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f <kafka_configuration_file> 8.12.4. Configuring node affinity in Kafka components Prerequisites An OpenShift cluster A running Cluster Operator Procedure Label the nodes where AMQ Streams components should be scheduled. This can be done using oc label : oc label node NAME-OF-NODE node-type=fast-network Alternatively, some of the existing labels might be reused. Edit the affinity property in the resource specifying the cluster deployment. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f <kafka_configuration_file> 8.12.5. Setting up dedicated nodes and scheduling pods on them Prerequisites An OpenShift cluster A running Cluster Operator Procedure Select the nodes which should be used as dedicated. Make sure there are no workloads scheduled on these nodes. Set the taints on the selected nodes: This can be done using oc adm taint : oc adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule Additionally, add a label to the selected nodes as well. This can be done using oc label : oc label node NAME-OF-NODE dedicated=Kafka Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example: apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... template: pod: tolerations: - key: "dedicated" operator: "Equal" value: "Kafka" effect: "NoSchedule" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f <kafka_configuration_file> 8.13. Configuring logging levels Configure logging levels in the custom resources of Kafka components and AMQ Streams operators. You can specify the logging levels directly in the spec.logging property of the custom resource. Or you can define the logging properties in a ConfigMap that's referenced in the custom resource using the configMapKeyRef property. The advantages of using a ConfigMap are that the logging properties are maintained in one place and are accessible to more than one resource. You can also reuse the ConfigMap for more than one resource. 
If you are using a ConfigMap to specify loggers for AMQ Streams Operators, you can also append the logging specification to add filters. You specify a logging type in your logging specification: inline when specifying logging levels directly external when referencing a ConfigMap Example inline logging configuration spec: # ... logging: type: inline loggers: kafka.root.logger.level: INFO Example external logging configuration spec: # ... logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key Values for the name and key of the ConfigMap are mandatory. Default logging is used if the name or key is not set. 8.13.1. Logging options for Kafka components and operators For more information on configuring logging for specific Kafka components or operators, see the following sections. Kafka component logging Kafka logging ZooKeeper logging Kafka Connect and Mirror Maker 2.0 logging MirrorMaker logging Kafka Bridge logging Cruise Control logging Operator logging Cluster Operator logging Topic Operator logging User Operator logging 8.13.2. Creating a ConfigMap for logging To use a ConfigMap to define logging properties, you create the ConfigMap and then reference it as part of the logging definition in the spec of a resource. The ConfigMap must contain the appropriate logging configuration. log4j.properties for Kafka components, ZooKeeper, and the Kafka Bridge log4j2.properties for the Topic Operator and User Operator The configuration must be placed under these properties. In this procedure a ConfigMap defines a root logger for a Kafka resource. Procedure Create the ConfigMap. You can create the ConfigMap as a YAML file or from a properties file. ConfigMap example with a root logger definition for Kafka: kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level="INFO" If you are using a properties file, specify the file at the command line: oc create configmap logging-configmap --from-file=log4j.properties The properties file defines the logging configuration: # Define the logger kafka.root.logger.level="INFO" # ... Define external logging in the spec of the resource, setting the logging.valueFrom.configMapKeyRef.name to the name of the ConfigMap and logging.valueFrom.configMapKeyRef.key to the key in this ConfigMap. spec: # ... logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j.properties Create or update the resource. oc apply -f <kafka_configuration_file> 8.13.3. Configuring Cluster Operator logging Cluster Operator logging is configured through a ConfigMap named strimzi-cluster-operator . A ConfigMap containing logging configuration is created when installing the Cluster Operator. This ConfigMap is described in the file install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml . You configure Cluster Operator logging by changing the data.log4j2.properties values in this ConfigMap . To update the logging configuration, you can edit the 050-ConfigMap-strimzi-cluster-operator.yaml file and then run the following command: oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml Alternatively, edit the ConfigMap directly: oc edit configmap strimzi-cluster-operator With this ConfigMap, you can control various aspects of logging, including the root logger level, log output format, and log levels for different components. The monitorInterval setting, determines how often the logging configuration is reloaded. 
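As a point of reference, the log4j2.properties entry in this ConfigMap looks broadly like the following minimal sketch. The file shipped with the Cluster Operator contains more loggers and appender settings than shown here; the logger name and levels below are illustrative only.
Example Cluster Operator logging ConfigMap (sketch)
kind: ConfigMap
apiVersion: v1
metadata:
  name: strimzi-cluster-operator
data:
  log4j2.properties: |
    # Reload this logging configuration every 30 seconds
    monitorInterval = 30
    # Root logger level for the Cluster Operator
    rootLogger.level = INFO
    # Example of setting the level for an individual component logger
    logger.netty.name = io.netty
    logger.netty.level = INFO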
You can also control the logging levels for the Kafka AdminClient , ZooKeeper ZKTrustManager , Netty, and the OkHttp client. Netty is a framework used in AMQ Streams for network communication, and OkHttp is a library used for making HTTP requests. If the ConfigMap is missing when the Cluster Operator is deployed, the default logging values are used. If the ConfigMap is accidentally deleted after the Cluster Operator is deployed, the most recently loaded logging configuration is used. Create a new ConfigMap to load a new logging configuration. Note Do not remove the monitorInterval option from the ConfigMap . 8.13.4. Adding logging filters to AMQ Streams operators If you are using a ConfigMap to configure the (log4j2) logging levels for AMQ Streams operators, you can also define logging filters to limit what's returned in the log. Logging filters are useful when you have a large number of logging messages. Suppose you set the log level for the logger as DEBUG ( rootLogger.level="DEBUG" ). Logging filters reduce the number of logs returned for the logger at that level, so you can focus on a specific resource. When the filter is set, only log messages matching the filter are logged. Filters use markers to specify what to include in the log. You specify a kind, namespace and name for the marker. For example, if a Kafka cluster is failing, you can isolate the logs by specifying the kind as Kafka , and use the namespace and name of the failing cluster. This example shows a marker filter for a Kafka cluster named my-kafka-cluster . Basic logging filter configuration rootLogger.level="INFO" appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4 1 The MarkerFilter type compares a specified marker for filtering. 2 The onMatch property accepts the log if the marker matches. 3 The onMismatch property rejects the log if the marker does not match. 4 The marker used for filtering is in the format KIND(NAMESPACE/NAME-OF-RESOURCE) . You can create one or more filters. Here, the log is filtered for two Kafka clusters. Multiple logging filter configuration appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1) appender.console.filter.filter2.type=MarkerFilter appender.console.filter.filter2.onMatch=ACCEPT appender.console.filter.filter2.onMismatch=DENY appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2) Adding filters to the Cluster Operator To add filters to the Cluster Operator, update its logging ConfigMap YAML file ( install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml ). Procedure Update the 050-ConfigMap-strimzi-cluster-operator.yaml file to add the filter properties to the ConfigMap. In this example, the filter properties return logs only for the my-kafka-cluster Kafka cluster: kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator data: log4j2.properties: #... 
appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) Alternatively, edit the ConfigMap directly: oc edit configmap strimzi-cluster-operator If you updated the YAML file instead of editing the ConfigMap directly, apply the changes by deploying the ConfigMap: oc create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml Adding filters to the Topic Operator or User Operator To add filters to the Topic Operator or User Operator, create or edit a logging ConfigMap. In this procedure a logging ConfigMap is created with filters for the Topic Operator. The same approach is used for the User Operator. Procedure Create the ConfigMap. You can create the ConfigMap as a YAML file or from a properties file. In this example, the filter properties return logs only for the my-topic topic: kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j2.properties: rootLogger.level="INFO" appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic) If you are using a properties file, specify the file at the command line: oc create configmap logging-configmap --from-file=log4j2.properties The properties file defines the logging configuration: # Define the logger rootLogger.level="INFO" # Set the filters appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic) # ... Define external logging in the spec of the resource, setting the logging.valueFrom.configMapKeyRef.name to the name of the ConfigMap and logging.valueFrom.configMapKeyRef.key to the key in this ConfigMap. For the Topic Operator, logging is specified in the topicOperator configuration of the Kafka resource. spec: # ... entityOperator: topicOperator: logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j2.properties Apply the changes by deploying the Cluster Operator: create -f install/cluster-operator -n my-cluster-operator-namespace Additional resources Configuring Kafka Cluster Operator logging Topic Operator logging User Operator logging 8.14. Using ConfigMaps to add configuration Add specific configuration to your AMQ Streams deployment using ConfigMap resources. ConfigMaps use key-value pairs to store non-confidential data. Configuration data added to ConfigMaps is maintained in one place and can be reused amongst components. ConfigMaps can only store the following types of configuration data: Logging configuration Metrics configuration External configuration for Kafka Connect connectors You can't use ConfigMaps for other areas of configuration. When you configure a component, you can add a reference to a ConfigMap using the configMapKeyRef property. For example, you can use configMapKeyRef to reference a ConfigMap that provides configuration for logging. You might use a ConfigMap to pass a Log4j configuration file. You add the reference to the logging configuration. Example ConfigMap for logging spec: # ... 
logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key To use a ConfigMap for metrics configuration, you add a reference to the metricsConfig configuration of the component in the same way. ExternalConfiguration properties make data from a ConfigMap (or Secret) mounted to a pod available as environment variables or volumes. You can use external configuration data for the connectors used by Kafka Connect. The data might be related to an external data source, providing the values needed for the connector to communicate with that data source. For example, you can use the configMapKeyRef property to pass configuration data from a ConfigMap as an environment variable. Example ConfigMap providing environment variable values apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key If you are using ConfigMaps that are managed externally, use configuration providers to load the data in the ConfigMaps. 8.14.1. Naming custom ConfigMaps AMQ Streams creates its own ConfigMaps and other resources when it is deployed to OpenShift. The ConfigMaps contain data necessary for running components. The ConfigMaps created by AMQ Streams must not be edited. Make sure that any custom ConfigMaps you create do not have the same name as these default ConfigMaps. If they have the same name, they will be overwritten. For example, if your ConfigMap has the same name as the ConfigMap for the Kafka cluster, it will be overwritten when there is an update to the Kafka cluster. Additional resources List of Kafka cluster resources (including ConfigMaps) Logging configuration metricsConfig ExternalConfiguration schema reference Loading configuration values from external sources 8.15. Loading configuration values from external sources Use configuration providers to load configuration data from external sources. The providers operate independently of AMQ Streams. You can use them to load configuration data for all Kafka components, including producers and consumers. You reference the external source in the configuration of the component and provide access rights. The provider loads data without needing to restart the Kafka component or extracting files, even when referencing a new external source. For example, use providers to supply the credentials for the Kafka Connect connector configuration. The configuration must include any access rights to the external source. 8.15.1. Enabling configuration providers You can enable one or more configuration providers using the config.providers properties in the spec configuration of a component. Example configuration to enable a configuration provider apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: # ... config.providers: env config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider # ... KubernetesSecretConfigProvider Loads configuration data from OpenShift secrets. You specify the name of the secret and the key within the secret where the configuration data is stored. This provider is useful for storing sensitive configuration data like passwords or other user credentials. KubernetesConfigMapConfigProvider Loads configuration data from OpenShift config maps. You specify the name of the config map and the key within the config map where the configuration data is stored. 
This provider is useful for storing non-sensitive configuration data. EnvVarConfigProvider Loads configuration data from environment variables. You specify the name of the environment variable where the configuration data is stored. This provider is useful for configuring applications running in containers, for example, to load certificates or JAAS configuration from environment variables mapped from secrets. FileConfigProvider Loads configuration data from a file. You specify the path to the file where the configuration data is stored. This provider is useful for loading configuration data from files that are mounted into containers. DirectoryConfigProvider Loads configuration data from files within a directory. You specify the path to the directory where the configuration files are stored. This provider is useful for loading multiple configuration files and for organizing configuration data into separate files. To use KubernetesSecretConfigProvider and KubernetesConfigMapConfigProvider , which are part of the OpenShift Configuration Provider plugin, you must set up access rights to the namespace that contains the configuration file. You can use the other providers without setting up access rights. You can supply connector configuration for Kafka Connect or MirrorMaker 2 in this way by doing the following: Mount config maps or secrets into the Kafka Connect pod as environment variables or volumes Enable EnvVarConfigProvider , FileConfigProvider , or DirectoryConfigProvider in the Kafka Connect or MirrorMaker 2 configuration Pass connector configuration using the externalConfiguration property in the spec of the KafkaConnect or KafkaMirrorMaker2 resource Using providers help prevent the passing of restricted information through the Kafka Connect REST interface. You can use this approach in the following scenarios: Mounting environment variables with the values a connector uses to connect and communicate with a data source Mounting a properties file with values that are used to configure Kafka Connect connectors Mounting files in a directory that contains values for the TLS truststore and keystore used by a connector Note A restart is required when using a new Secret or ConfigMap for a connector, which can disrupt other connectors. Additional resources ExternalConfiguration schema reference 8.15.2. Loading configuration values from secrets or config maps Use the KubernetesSecretConfigProvider to provide configuration properties from a secret or the KubernetesConfigMapConfigProvider to provide configuration properties from a config map. In this procedure, a config map provides configuration properties for a connector. The properties are specified as key values of the config map. The config map is mounted into the Kafka Connect pod as a volume. Prerequisites A Kafka cluster is running. The Cluster Operator is running. You have a config map containing the connector configuration. Example config map with connector properties apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2 Procedure Configure the KafkaConnect resource. Enable the KubernetesConfigMapConfigProvider The specification shown here can support loading values from config maps and secrets. Example Kafka Connect configuration to use config maps and secrets apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: # ... 
config.providers: secrets,configmaps 1 config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2 config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3 # ... 1 The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from config.providers , taking the form config.providers.USD{alias}.class . 2 KubernetesConfigMapConfigProvider provides values from config maps. 3 KubernetesSecretConfigProvider provides values from secrets. Create or update the resource to enable the provider. oc apply -f <kafka_connect_configuration_file> Create a role that permits access to the values in the external config map. Example role to access values from a config map apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: connector-configuration-role rules: - apiGroups: [""] resources: ["configmaps"] resourceNames: ["my-connector-configuration"] verbs: ["get"] # ... The rule gives the role permission to access the my-connector-configuration config map. Create a role binding to permit access to the namespace that contains the config map. Example role binding to access the namespace that contains the config map apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: connector-configuration-role-binding subjects: - kind: ServiceAccount name: my-connect-connect namespace: my-project roleRef: kind: Role name: connector-configuration-role apiGroup: rbac.authorization.k8s.io # ... The role binding gives the role permission to access the my-project namespace. The service account must be the same one used by the Kafka Connect deployment. The service account name format is <cluster_name>-connect , where <cluster_name> is the name of the KafkaConnect custom resource. Reference the config map in the connector configuration. Example connector configuration referencing the config map apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # ... config: option: USD{configmaps:my-project/my-connector-configuration:option1} # ... # ... The placeholder structure is configmaps:<path_and_file_name>:<property> . KubernetesConfigMapConfigProvider reads and extracts the option1 property value from the external config map. 8.15.3. Loading configuration values from environment variables Use the EnvVarConfigProvider to provide configuration properties as environment variables. Environment variables can contain values from config maps or secrets. In this procedure, environment variables provide configuration properties for a connector to communicate with Amazon AWS. The connector must be able to read the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY . The values of the environment variables are derived from a secret mounted into the Kafka Connect pod. Note The names of user-defined environment variables cannot start with KAFKA_ or STRIMZI_ . Prerequisites A Kafka cluster is running. The Cluster Operator is running. You have a secret containing the connector configuration. Example secret with values for environment variables apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE= Procedure Configure the KafkaConnect resource. Enable the EnvVarConfigProvider Specify the environment variables using the externalConfiguration property. 
Example Kafka Connect configuration to use external environment variables apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: "true" spec: # ... config: # ... config.providers: env 1 config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider 2 # ... externalConfiguration: env: - name: AWS_ACCESS_KEY_ID 3 valueFrom: secretKeyRef: name: aws-creds 4 key: awsAccessKey 5 - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey # ... 1 The alias for the configuration provider is used to define other configuration parameters. The provider parameters use the alias from config.providers , taking the form config.providers.USD{alias}.class . 2 EnvVarConfigProvider provides values from environment variables. 3 The environment variable takes a value from the secret. 4 The name of the secret containing the environment variable. 5 The name of the key stored in the secret. Note The secretKeyRef property references keys in a secret. If you are using a config map instead of a secret, use the configMapKeyRef property. Create or update the resource to enable the provider. oc apply -f <kafka_connect_configuration_file> Reference the environment variable in the connector configuration. Example connector configuration referencing the environment variable apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # ... config: option: USD{env:AWS_ACCESS_KEY_ID} option: USD{env:AWS_SECRET_ACCESS_KEY} # ... # ... The placeholder structure is env:<environment_variable_name> . EnvVarConfigProvider reads and extracts the environment variable values from the mounted secret. 8.15.4. Loading configuration values from a file within a directory Use the FileConfigProvider to provide configuration properties from a file within a directory. Files can be config maps or secrets. In this procedure, a file provides configuration properties for a connector. A database name and password are specified as properties of a secret. The secret is mounted to the Kafka Connect pod as a volume. Volumes are mounted on the path /opt/kafka/external-configuration/<volume-name> . Prerequisites A Kafka cluster is running. The Cluster Operator is running. You have a secret containing the connector configuration. Example secret with database properties apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-username 2 dbPassword: my-password 1 The connector configuration in properties file format. 2 Database username and password properties used in the configuration. Procedure Configure the KafkaConnect resource. Enable the FileConfigProvider Specify the file using the externalConfiguration property. Example Kafka Connect configuration to use an external property file apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 #... externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4 1 The alias for the configuration provider is used to define other configuration parameters. 2 FileConfigProvider provides values from properties files. The parameter uses the alias from config.providers , taking the form config.providers.USD{alias}.class . 3 The name of the volume containing the secret. 
4 The name of the secret. Create or update the resource to enable the provider. oc apply -f <kafka_connect_configuration_file> Reference the file properties in the connector configuration as placeholders. Example connector configuration referencing the file apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: database.hostname: 192.168.99.1 database.port: "3306" database.user: "USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbUsername}" database.password: "USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbPassword}" database.server.id: "184054" #... The placeholder structure is file:<path_and_file_name>:<property> . FileConfigProvider reads and extracts the database username and password property values from the mounted secret. 8.15.5. Loading configuration values from multiple files within a directory Use the DirectoryConfigProvider to provide configuration properties from multiple files within a directory. Files can be config maps or secrets. In this procedure, a secret provides the TLS keystore and truststore user credentials for a connector. The credentials are in separate files. The secrets are mounted into the Kafka Connect pod as volumes. Volumes are mounted on the path /opt/kafka/external-configuration/<volume-name> . Prerequisites A Kafka cluster is running. The Cluster Operator is running. You have a secret containing the user credentials. Example secret with user credentials apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store The my-user secret provides the keystore credentials ( user.crt and user.key ) for the connector. The <cluster_name>-cluster-ca-cert secret generated when deploying the Kafka cluster provides the cluster CA certificate as truststore credentials ( ca.crt ). Procedure Configure the KafkaConnect resource. Enable the DirectoryConfigProvider Specify the files using the externalConfiguration property. Example Kafka Connect configuration to use external property files apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: config.providers: directory 1 config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2 #... externalConfiguration: volumes: 3 - name: cluster-ca 4 secret: secretName: my-cluster-cluster-ca-cert 5 - name: my-user secret: secretName: my-user 6 1 The alias for the configuration provider is used to define other configuration parameters. 2 DirectoryConfigProvider provides values from files in a directory. The parameter uses the alias from config.providers , taking the form config.providers.USD{alias}.class . 3 The names of the volumes containing the secrets. 4 The name of the secret for the cluster CA certificate to supply truststore configuration. 5 The name of the secret for the user to supply keystore configuration. Create or update the resource to enable the provider. 
oc apply -f <kafka_connect_configuration_file> Reference the file properties in the connector configuration as placeholders. Example connector configuration referencing the files apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: # ... database.history.producer.security.protocol: SSL database.history.producer.ssl.truststore.type: PEM database.history.producer.ssl.truststore.certificates: "USD{directory:/opt/kafka/external-configuration/cluster-ca:ca.crt}" database.history.producer.ssl.keystore.type: PEM database.history.producer.ssl.keystore.certificate.chain: "USD{directory:/opt/kafka/external-configuration/my-user:user.crt}" database.history.producer.ssl.keystore.key: "USD{directory:/opt/kafka/external-configuration/my-user:user.key}" #... The placeholder structure is directory:<path>:<file_name> . DirectoryConfigProvider reads and extracts the credentials from the mounted secrets. 8.16. Customizing OpenShift resources An AMQ Streams deployment creates OpenShift resources, such as Deployment , Pod , and Service resources. These resources are managed by AMQ Streams operators. Only the operator that is responsible for managing a particular OpenShift resource can change that resource. If you try to manually change an operator-managed OpenShift resource, the operator will revert your changes back. Changing an operator-managed OpenShift resource can be useful if you want to perform certain tasks, such as the following: Adding custom labels or annotations that control how Pods are treated by Istio or other services Managing how Loadbalancer -type Services are created by the cluster To make the changes to an OpenShift resource, you can use the template property within the spec section of various AMQ Streams custom resources. Here is a list of the custom resources where you can apply the changes: Kafka.spec.kafka Kafka.spec.zookeeper Kafka.spec.entityOperator Kafka.spec.kafkaExporter Kafka.spec.cruiseControl KafkaNodePool.spec KafkaConnect.spec KafkaMirrorMaker.spec KafkaMirrorMaker2.spec KafkaBridge.spec KafkaUser.spec For more information about these properties, see the AMQ Streams Custom Resource API Reference . The AMQ Streams Custom Resource API Reference provides more details about the customizable fields. In the following example, the template property is used to modify the labels in a Kafka broker's pod. Example template customization apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster labels: app: my-cluster spec: kafka: # ... template: pod: metadata: labels: mylabel: myvalue # ... 8.16.1. Customizing the image pull policy AMQ Streams allows you to customize the image pull policy for containers in all pods deployed by the Cluster Operator. The image pull policy is configured using the environment variable STRIMZI_IMAGE_PULL_POLICY in the Cluster Operator deployment. The STRIMZI_IMAGE_PULL_POLICY environment variable can be set to three different values: Always Container images are pulled from the registry every time the pod is started or restarted. IfNotPresent Container images are pulled from the registry only when they were not pulled before. Never Container images are never pulled from the registry. Currently, the image pull policy can only be customized for all Kafka, Kafka Connect, and Kafka MirrorMaker clusters at once. 
Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. Additional resources Disruptions . 8.16.2. Applying a termination grace period Apply a termination grace period to give a Kafka cluster enough time to shut down cleanly. Specify the time using the terminationGracePeriodSeconds property. Add the property to the template.pod configuration of the Kafka custom resource. The time you add will depend on the size of your Kafka cluster. The OpenShift default for the termination grace period is 30 seconds. If you observe that your clusters are not shutting down cleanly, you can increase the termination grace period. A termination grace period is applied every time a pod is restarted. The period begins when OpenShift sends a term (termination) signal to the processes running in the pod. The period should reflect the amount of time required to transfer the processes of the terminating pod to another pod before they are stopped. After the period ends, a kill signal stops any processes still running in the pod. The following example adds a termination grace period of 120 seconds to the Kafka custom resource. You can also specify the configuration in the custom resources of other Kafka components. Example termination grace period configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... template: pod: terminationGracePeriodSeconds: 120 # ... # ... | [
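A quick way to confirm that the grace period has been applied is to read it back from one of the broker pods. This is a generic OpenShift check rather than an AMQ Streams feature, and the pod name below assumes a Kafka cluster named my-cluster:
# Print the termination grace period applied to a Kafka broker pod
oc get pod my-cluster-kafka-0 \
  -o jsonpath='{.spec.terminationGracePeriodSeconds}{"\n"}'
For the example configuration above, the command prints 120 once the pods have been rolled with the updated template.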
"apply -f <kafka_configuration_file>",
"examples ├── user 1 ├── topic 2 ├── security 3 │ ├── tls-auth │ ├── scram-sha-512-auth │ └── keycloak-authorization ├── mirror-maker 4 ├── metrics 5 ├── kafka 6 │ └── nodepools 7 ├── cruise-control 8 ├── connect 9 └── bridge 10",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 1 version: 3.5.0 2 logging: 3 type: inline loggers: kafka.root.logger.level: INFO resources: 4 requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\" readinessProbe: 5 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 6 -Xms: 8192m -Xmx: 8192m image: my-org/my-image:latest 7 listeners: 8 - name: plain 9 port: 9092 10 type: internal 11 tls: false 12 configuration: useServiceDnsDomain: true 13 - name: tls port: 9093 type: internal tls: true authentication: 14 type: tls - name: external 15 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: 16 secretName: my-secret certificate: my-certificate.crt key: my-key.key authorization: 17 type: simple config: 18 auto.create.topics.enable: \"false\" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 default.replication.factor: 3 min.insync.replicas: 2 inter.broker.protocol.version: \"3.5\" storage: 19 type: persistent-claim 20 size: 10000Gi rack: 21 topologyKey: topology.kubernetes.io/zone metricsConfig: 22 type: jmxPrometheusExporter valueFrom: configMapKeyRef: 23 name: my-config-map key: my-key # zookeeper: 24 replicas: 3 25 logging: 26 type: inline loggers: zookeeper.root.logger: INFO resources: requests: memory: 8Gi cpu: \"2\" limits: memory: 8Gi cpu: \"2\" jvmOptions: -Xms: 4096m -Xmx: 4096m storage: type: persistent-claim size: 1000Gi metricsConfig: # entityOperator: 27 tlsSidecar: 28 resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: 29 type: inline loggers: rootLogger.level: INFO resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: 30 type: inline loggers: rootLogger.level: INFO resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" kafkaExporter: 31 # cruiseControl: 32 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback 1 client.quota.callback.static.produce: 1000000 2 client.quota.callback.static.fetch: 1000000 3 client.quota.callback.static.storage.soft: 400000000000 4 client.quota.callback.static.storage.hard: 500000000000 5 client.quota.callback.static.storage.check-interval: 5 6",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: pool-a 1 labels: strimzi.io/cluster: my-cluster 2 spec: replicas: 3 3 roles: - broker 4 storage: 5 type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false resources: 6 requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kraft-dual-role labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: 1 - controller - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 20Gi deleteClaim: false resources: requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\"",
"annotate kafkanodepool pool-a strimzi.io/next-node-ids=\"[0,1,2,10-20,30]\"",
"annotate kafkanodepool pool-b strimzi.io/remove-node-ids=\"[60-50,9,8,7]\"",
"NAME READY STATUS RESTARTS my-cluster-pool-a-kafka-0 1/1 Running 0 my-cluster-pool-a-kafka-1 1/1 Running 0 my-cluster-pool-a-kafka-2 1/1 Running 0",
"scale kafkanodepool pool-a --replicas=4",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-pool-a-kafka-0 1/1 Running 0 my-cluster-pool-a-kafka-1 1/1 Running 0 my-cluster-pool-a-kafka-2 1/1 Running 0 my-cluster-pool-a-kafka-3 1/1 Running 0",
"NAME READY STATUS RESTARTS my-cluster-pool-a-kafka-0 1/1 Running 0 my-cluster-pool-a-kafka-1 1/1 Running 0 my-cluster-pool-a-kafka-2 1/1 Running 0 my-cluster-pool-a-kafka-3 1/1 Running 0",
"scale kafkanodepool pool-a --replicas=3",
"NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-0 1/1 Running 0 my-cluster-pool-b-kafka-1 1/1 Running 0 my-cluster-pool-b-kafka-2 1/1 Running 0",
"scale kafkanodepool pool-a --replicas=4",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-pool-a-kafka-0 1/1 Running 0 my-cluster-pool-a-kafka-1 1/1 Running 0 my-cluster-pool-a-kafka-4 1/1 Running 0 my-cluster-pool-a-kafka-5 1/1 Running 0",
"scale kafkanodepool pool-b --replicas=3",
"NAME READY STATUS RESTARTS my-cluster-pool-b-kafka-2 1/1 Running 0 my-cluster-pool-b-kafka-3 1/1 Running 0 my-cluster-pool-b-kafka-6 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaNodePool metadata: name: kafka labels: strimzi.io/cluster: my-cluster spec: replicas: 3 roles: - broker storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false",
"apply -f <node_pool_configuration_file>",
"env: - name: STRIMZI_FEATURE_GATES value: +KafkaNodePools",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster annotations: strimzi.io/node-pools: enabled spec: kafka: version: 3.5.0 replicas: 3 # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {}",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-user-namespace reconciliationIntervalSeconds: 60 resources: requests: cpu: \"1\" memory: 500Mi limits: cpu: \"1\" memory: 500Mi #",
"env: - name: STRIMZI_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"env: - name: STRIMZI_OPERATOR_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"env: - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2",
"env: - name: STRIMZI_LABELS_EXCLUSION_PATTERN value: \"^key1.*\"",
"env: - name: STRIMZI_CUSTOM_RESOURCE_SELECTOR value: label1=value1,label2=value2",
"env: - name: STRIMZI_KUBERNETES_VERSION value: | major=1 minor=16 gitVersion=v1.16.2 gitCommit=c97fe5036ef3df2967d086711e6c0c405941e14b gitTreeState=clean buildDate=2019-10-15T19:09:08Z goVersion=go1.12.10 compiler=gc platform=linux/amd64",
"<cluster-name> -kafka-0. <cluster-name> -kafka-brokers. <namespace> .svc. cluster.local",
"# env: # - name: STRIMZI_OPERATOR_NAMESPACE_LABELS value: label1=value1,label2=value2 #",
"# env: # - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS value: \"120000\" #",
"env: - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace",
"env: - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-cluster-operator labels: app: strimzi spec: replicas: 3",
"spec containers: - name: strimzi-cluster-operator # env: - name: STRIMZI_LEADER_ELECTION_ENABLED value: \"true\" - name: STRIMZI_LEADER_ELECTION_LEASE_NAME value: \"my-strimzi-cluster-operator\" - name: STRIMZI_LEADER_ELECTION_LEASE_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_LEADER_ELECTION_IDENTITY valueFrom: fieldRef: fieldPath: metadata.name",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi rules: - apiGroups: - coordination.k8s.io resourceNames: - my-strimzi-cluster-operator",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: strimzi-cluster-operator-leader-election labels: app: strimzi subjects: - kind: ServiceAccount name: my-strimzi-cluster-operator namespace: myproject",
"create -f install/cluster-operator -n myproject",
"get deployments -n myproject",
"NAME READY UP-TO-DATE AVAILABLE strimzi-cluster-operator 3/3 3 3",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: # - name: \"HTTP_PROXY\" value: \"http://proxy.com\" 1 - name: \"HTTPS_PROXY\" value: \"https://proxy.com\" 2 - name: \"NO_PROXY\" value: \"internal.com, other.domain.com\" 3 #",
"edit deployment strimzi-cluster-operator",
"create -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: # env: # - name: \"FIPS_MODE\" value: \"disabled\" 1 #",
"edit deployment strimzi-cluster-operator",
"apply -f install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect 1 metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 2 spec: replicas: 3 3 authentication: 4 type: tls certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source bootstrapServers: my-cluster-kafka-bootstrap:9092 5 tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt config: 7 group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 build: 8 output: 9 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 10 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.1.3.Final/debezium-connector-postgres-2.1.3.Final-plugin.tar.gz sha512sum: c4ddc97846de561755dc0b021a62aba656098829c70eb3ade3b817ce06d852ca12ae50c0281cc791a5a131cb7fc21fb15f4b8ee76c6cae5dd07f9c11cb7c6e79 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.11.5/camel-telegram-kafka-connector-0.11.5-package.tar.gz sha512sum: d6d9f45e0d1dbfcc9f6d1c7ca2046168c764389c78bc4b867dab32d24f710bb74ccf2a007d7d7a8af2dfca09d9a52ccbc2831fc715c195a3634cca055185bd91 externalConfiguration: 11 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey resources: 12 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 13 type: inline loggers: log4j.rootLogger: INFO readinessProbe: 14 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 15 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 16 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 17 rack: topologyKey: topology.kubernetes.io/zone 18 template: 19 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 20 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 21",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: # access to offset.storage.topic - resource: type: topic name: connect-cluster-offsets patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # access to status.storage.topic - resource: type: topic name: connect-cluster-status patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # access to config.storage.topic - resource: type: topic name: connect-cluster-configs patternType: literal operations: - Create - Describe - Read - Write host: \"*\" # consumer group - resource: type: group name: connect-cluster patternType: literal operations: - Read host: \"*\"",
"apply -f KAFKA-USER-CONFIG-FILE",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.5.0 connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-source\" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: \"my-cluster-target\" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: {}",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.5.0 1 replicas: 3 2 connectCluster: \"my-cluster-target\" 3 clusters: 4 - alias: \"my-cluster-source\" 5 authentication: 6 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 7 tls: 8 trustedCertificates: - certificate: ca.crt secretName: my-cluster-source-cluster-ca-cert - alias: \"my-cluster-target\" 9 authentication: 10 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 11 config: 12 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 tls: 13 trustedCertificates: - certificate: ca.crt secretName: my-cluster-target-cluster-ca-cert mirrors: 14 - sourceCluster: \"my-cluster-source\" 15 targetCluster: \"my-cluster-target\" 16 sourceConnector: 17 tasksMax: 10 18 autoRestart: 19 enabled: true config: replication.factor: 1 20 offset-syncs.topic.replication.factor: 1 21 sync.topic.acls.enabled: \"false\" 22 refresh.topics.interval.seconds: 60 23 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" 24 heartbeatConnector: 25 autoRestart: enabled: true config: heartbeats.topic.replication.factor: 1 26 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" checkpointConnector: 27 autoRestart: enabled: true config: checkpoints.topic.replication.factor: 1 28 refresh.groups.interval.seconds: 600 29 sync.group.offsets.enabled: true 30 sync.group.offsets.interval.seconds: 60 31 emit.checkpoints.interval.seconds: 60 32 replication.policy.class: \"org.apache.kafka.connect.mirror.IdentityReplicationPolicy\" topicsPattern: \"topic1|topic2|topic3\" 33 groupsPattern: \"group1|group2|group3\" 34 resources: 35 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 36 type: inline loggers: connect.root.logger.level: INFO readinessProbe: 37 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 38 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 39 rack: topologyKey: topology.kubernetes.io/zone 40 template: 41 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 42 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 43 externalConfiguration: 44 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 3.5.0 # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 5 config: producer.override.batch.size: 327680 producer.override.linger.ms: 100 producer.request.timeout.ms: 30000 consumer.fetch.max.bytes: 52428800 # checkpointConnector: config: producer.override.request.timeout.ms: 30000 consumer.max.poll.interval.ms: 300000 # heartbeatConnector: config: producer.override.request.timeout.ms: 30000 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: tasksMax: 10 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: # mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" checkpointConnector: tasksMax: 10 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-source-cluster spec: kafka: version: 3.5.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: \"3.5\" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {}",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-target-cluster spec: kafka: version: 3.5.0 replicas: 1 listeners: - name: tls port: 9093 type: internal tls: true authentication: type: tls authorization: type: simple config: offsets.topic.replication.factor: 1 transaction.state.log.replication.factor: 1 transaction.state.log.min.isr: 1 default.replication.factor: 1 min.insync.replicas: 1 inter.broker.protocol.version: \"3.5\" storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false zookeeper: replicas: 1 storage: type: persistent-claim size: 100Gi deleteClaim: false entityOperator: topicOperator: {} userOperator: {}",
"apply -f <kafka_configuration_file> -n <namespace>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-source-user labels: strimzi.io/cluster: my-source-cluster spec: authentication: type: tls authorization: type: simple acls: # MirrorSourceConnector - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Create - DescribeConfigs - Read - Write - resource: # Needed for every topic which is mirrored type: topic name: \"*\" operations: - DescribeConfigs - Read # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: # Needed for every group for which offsets are synced type: group name: \"*\" operations: - Describe - resource: # Not needed if offset-syncs.topic.location=target type: topic name: mm2-offset-syncs.my-target-cluster.internal operations: - Read",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-target-user labels: strimzi.io/cluster: my-target-cluster spec: authentication: type: tls authorization: type: simple acls: # Underlying Kafka Connect internal topics to store configuration, offsets, or status - resource: type: group name: mirrormaker2-cluster operations: - Read - resource: type: topic name: mirrormaker2-cluster-configs operations: - Create - Describe - DescribeConfigs - Read - Write - resource: type: topic name: mirrormaker2-cluster-status operations: - Create - Describe - DescribeConfigs - Read - Write - resource: type: topic name: mirrormaker2-cluster-offsets operations: - Create - Describe - DescribeConfigs - Read - Write # MirrorSourceConnector - resource: # Needed for every topic which is mirrored type: topic name: \"*\" operations: - Create - Alter - AlterConfigs - Write # MirrorCheckpointConnector - resource: type: cluster operations: - Describe - resource: type: topic name: my-source-cluster.checkpoints.internal operations: - Create - Describe - Read - Write - resource: # Needed for every group for which the offset is synced type: group name: \"*\" operations: - Read - Describe # MirrorHeartbeatConnector - resource: type: topic name: heartbeats operations: - Create - Describe - Write",
"apply -f <kafka_user_configuration_file> -n <namespace>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker-2 spec: version: 3.5.0 replicas: 1 connectCluster: \"my-target-cluster\" clusters: - alias: \"my-source-cluster\" bootstrapServers: my-source-cluster-kafka-bootstrap:9093 tls: 1 trustedCertificates: - secretName: my-source-cluster-cluster-ca-cert certificate: ca.crt authentication: 2 type: tls certificateAndKey: secretName: my-source-user certificate: user.crt key: user.key - alias: \"my-target-cluster\" bootstrapServers: my-target-cluster-kafka-bootstrap:9093 tls: 3 trustedCertificates: - secretName: my-target-cluster-cluster-ca-cert certificate: ca.crt authentication: 4 type: tls certificateAndKey: secretName: my-target-user certificate: user.crt key: user.key config: # -1 means it will use the default replication factor configured in the broker config.storage.replication.factor: -1 offset.storage.replication.factor: -1 status.storage.replication.factor: -1 mirrors: - sourceCluster: \"my-source-cluster\" targetCluster: \"my-target-cluster\" sourceConnector: config: replication.factor: 1 offset-syncs.topic.replication.factor: 1 sync.topic.acls.enabled: \"false\" heartbeatConnector: config: heartbeats.topic.replication.factor: 1 checkpointConnector: config: checkpoints.topic.replication.factor: 1 sync.group.offsets.enabled: \"true\" topicsPattern: \"topic1|topic2|topic3\" groupsPattern: \"group1|group2|group3\"",
"apply -f <mirrormaker2_configuration_file> -n <namespace_of_target_cluster>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: replicas: 3 1 consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2 groupId: \"my-group\" 3 numStreams: 2 4 offsetCommitInterval: 120000 5 tls: 6 trustedCertificates: - secretName: my-source-cluster-ca-cert certificate: ca.crt authentication: 7 type: tls certificateAndKey: secretName: my-source-secret certificate: public.crt key: private.key config: 8 max.poll.records: 100 receive.buffer.bytes: 32768 producer: bootstrapServers: my-target-cluster-kafka-bootstrap:9092 abortOnSendFailure: false 9 tls: trustedCertificates: - secretName: my-target-cluster-ca-cert certificate: ca.crt authentication: type: tls certificateAndKey: secretName: my-target-secret certificate: public.crt key: private.key config: compression.type: gzip batch.size: 8192 include: \"my-topic|other-topic\" 10 resources: 11 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 12 type: inline loggers: mirrormaker.root.logger: INFO readinessProbe: 13 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metricsConfig: 14 type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key jvmOptions: 15 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 16 template: 17 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" mirrorMakerContainer: 18 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: 19 type: opentelemetry",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: replicas: 3 1 bootstrapServers: <cluster_name> -cluster-kafka-bootstrap:9092 2 tls: 3 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt authentication: 4 type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key http: 5 port: 8080 cors: 6 allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" consumer: 7 config: auto.offset.reset: earliest producer: 8 config: delivery.timeout.ms: 300000 resources: 9 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 10 type: inline loggers: logger.bridge.level: INFO # enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: DEBUG jvmOptions: 11 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" readinessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 image: my-org/my-image:latest 13 template: 14 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" bridgeContainer: 15 env: - name: OTEL_SERVICE_NAME value: my-otel-service - name: OTEL_EXPORTER_OTLP_ENDPOINT value: \"http://otlp-host:4317\" tracing: type: opentelemetry 16",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: ephemeral # zookeeper: # storage: type: ephemeral #",
"/var/lib/kafka/data/kafka-log IDX",
"spec: kafka: # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: storage: type: persistent-claim size: 1000Gi",
"storage: type: persistent-claim size: 1Gi class: my-storage-class",
"storage: type: persistent-claim size: 1Gi selector: hdd-type: ssd deleteClaim: true",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: # kafka: replicas: 3 storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # # zookeeper: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c #",
"/var/lib/kafka/data/kafka-log IDX",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: persistent-claim size: 2000Gi class: my-storage-class # zookeeper: #",
"apply -f <kafka_configuration_file>",
"get pv",
"NAME CAPACITY CLAIM pvc-0ca459ce-... 2000Gi my-project/data-my-cluster-kafka-2 pvc-6e1810be-... 2000Gi my-project/data-my-cluster-kafka-0 pvc-82dc78c9-... 2000Gi my-project/data-my-cluster-kafka-1",
"storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false",
"/var/lib/kafka/data- id /kafka-log idx",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -kafka topologyKey: \"kubernetes.io/hostname\" # zookeeper: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/name operator: In values: - CLUSTER-NAME -zookeeper topologyKey: \"kubernetes.io/hostname\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" # zookeeper: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: strimzi.io/cluster operator: In values: - CLUSTER-NAME topologyKey: \"kubernetes.io/hostname\" #",
"apply -f <kafka_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" # zookeeper: #",
"apply -f <kafka_configuration_file>",
"label node NAME-OF-NODE node-type=fast-network",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # zookeeper: #",
"apply -f <kafka_configuration_file>",
"adm taint node NAME-OF-NODE dedicated=Kafka:NoSchedule",
"label node NAME-OF-NODE dedicated=Kafka",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # template: pod: tolerations: - key: \"dedicated\" operator: \"Equal\" value: \"Kafka\" effect: \"NoSchedule\" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # zookeeper: #",
"apply -f <kafka_configuration_file>",
"spec: # logging: type: inline loggers: kafka.root.logger.level: INFO",
"spec: # logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key",
"kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level=\"INFO\"",
"create configmap logging-configmap --from-file=log4j.properties",
"Define the logger kafka.root.logger.level=\"INFO\"",
"spec: # logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j.properties",
"apply -f <kafka_configuration_file>",
"create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml",
"edit configmap strimzi-cluster-operator",
"rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter 1 appender.console.filter.filter1.onMatch=ACCEPT 2 appender.console.filter.filter1.onMismatch=DENY 3 appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster) 4",
"appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster-1) appender.console.filter.filter2.type=MarkerFilter appender.console.filter.filter2.onMatch=ACCEPT appender.console.filter.filter2.onMismatch=DENY appender.console.filter.filter2.marker=Kafka(my-namespace/my-kafka-cluster-2)",
"kind: ConfigMap apiVersion: v1 metadata: name: strimzi-cluster-operator data: log4j2.properties: # appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=Kafka(my-namespace/my-kafka-cluster)",
"edit configmap strimzi-cluster-operator",
"create -f install/cluster-operator/050-ConfigMap-strimzi-cluster-operator.yaml",
"kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j2.properties: rootLogger.level=\"INFO\" appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)",
"create configmap logging-configmap --from-file=log4j2.properties",
"Define the logger rootLogger.level=\"INFO\" Set the filters appender.console.filter.filter1.type=MarkerFilter appender.console.filter.filter1.onMatch=ACCEPT appender.console.filter.filter1.onMismatch=DENY appender.console.filter.filter1.marker=KafkaTopic(my-namespace/my-topic)",
"spec: # entityOperator: topicOperator: logging: type: external valueFrom: configMapKeyRef: name: logging-configmap key: log4j2.properties",
"create -f install/cluster-operator -n my-cluster-operator-namespace",
"spec: # logging: type: external valueFrom: configMapKeyRef: name: my-config-map key: my-config-map-key",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # externalConfiguration: env: - name: MY_ENVIRONMENT_VARIABLE valueFrom: configMapKeyRef: name: my-config-map key: my-key",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider #",
"apiVersion: v1 kind: ConfigMap metadata: name: my-connector-configuration data: option1: value1 option2: value2",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: secrets,configmaps 1 config.providers.configmaps.class: io.strimzi.kafka.KubernetesConfigMapConfigProvider 2 config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider 3 #",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: connector-configuration-role rules: - apiGroups: [\"\"] resources: [\"configmaps\"] resourceNames: [\"my-connector-configuration\"] verbs: [\"get\"]",
"apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: connector-configuration-role-binding subjects: - kind: ServiceAccount name: my-connect-connect namespace: my-project roleRef: kind: Role name: connector-configuration-role apiGroup: rbac.authorization.k8s.io",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: USD{configmaps:my-project/my-connector-configuration:option1} #",
"apiVersion: v1 kind: Secret metadata: name: aws-creds type: Opaque data: awsAccessKey: QUtJQVhYWFhYWFhYWFhYWFg= awsSecretAccessKey: Ylhsd1lYTnpkMjl5WkE=",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect annotations: strimzi.io/use-connector-resources: \"true\" spec: # config: # config.providers: env 1 config.providers.env.class: io.strimzi.kafka.EnvVarConfigProvider 2 # externalConfiguration: env: - name: AWS_ACCESS_KEY_ID 3 valueFrom: secretKeyRef: name: aws-creds 4 key: awsAccessKey 5 - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey #",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-connector labels: strimzi.io/cluster: my-connect spec: # config: option: USD{env:AWS_ACCESS_KEY_ID} option: USD{env:AWS_SECRET_ACCESS_KEY} #",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque stringData: connector.properties: |- 1 dbUsername: my-username 2 dbPassword: my-password",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: file 1 config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider 2 # externalConfiguration: volumes: - name: connector-config 3 secret: secretName: mysecret 4",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: database.hostname: 192.168.99.1 database.port: \"3306\" database.user: \"USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbUsername}\" database.password: \"USD{file:/opt/kafka/external-configuration/connector-config/mysecret:dbPassword}\" database.server.id: \"184054\" #",
"apiVersion: v1 kind: Secret metadata: name: my-user labels: strimzi.io/kind: KafkaUser strimzi.io/cluster: my-cluster type: Opaque data: ca.crt: <public_key> # Public key of the clients CA user.crt: <user_certificate> # Public key of the user user.key: <user_private_key> # Private key of the user user.p12: <store> # PKCS #12 store for user certificates and keys user.password: <password_for_store> # Protects the PKCS #12 store",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: config.providers: directory 1 config.providers.directory.class: org.apache.kafka.common.config.provider.DirectoryConfigProvider 2 # externalConfiguration: volumes: 3 - name: cluster-ca 4 secret: secretName: my-cluster-cluster-ca-cert 5 - name: my-user secret: secretName: my-user 6",
"apply -f <kafka_connect_configuration_file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector labels: strimzi.io/cluster: my-connect-cluster spec: class: io.debezium.connector.mysql.MySqlConnector tasksMax: 2 config: # database.history.producer.security.protocol: SSL database.history.producer.ssl.truststore.type: PEM database.history.producer.ssl.truststore.certificates: \"USD{directory:/opt/kafka/external-configuration/cluster-ca:ca.crt}\" database.history.producer.ssl.keystore.type: PEM database.history.producer.ssl.keystore.certificate.chain: \"USD{directory:/opt/kafka/external-configuration/my-user:user.crt}\" database.history.producer.ssl.keystore.key: \"USD{directory:/opt/kafka/external-configuration/my-user:user.key}\" #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster labels: app: my-cluster spec: kafka: # template: pod: metadata: labels: mylabel: myvalue #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # template: pod: terminationGracePeriodSeconds: 120 # #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/overview-str |
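The node pool commands in the list above are shown without their CLI binary. As a minimal sketch of the scale-up flow, assuming the oc client (kubectl behaves the same way on non-OpenShift clusters) and the illustrative pool name pool-a used above, the sequence could look like this:

oc annotate kafkanodepool pool-a strimzi.io/next-node-ids="[10-20]"   # optional: suggest node IDs for the new brokers
oc scale kafkanodepool pool-a --replicas=4
oc get pods -n <my_cluster_operator_namespace>                        # wait until the new my-cluster-pool-a-kafka-* pod is Running

After the new broker pod is Running, partitions typically still need to be reassigned (for example, with Cruise Control) before the new broker carries data.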
Chapter 4. Accessing Red Hat Virtualization | Chapter 4. Accessing Red Hat Virtualization Red Hat Virtualization exposes a number of interfaces for interacting with the components of the virtualization environment. Many of these interfaces are fully supported. Some, however, are supported only for read access or only when your use of them has been explicitly requested by Red Hat Support. 4.1. Supported Interfaces for Read and Write Access Direct interaction with these interfaces is supported and encouraged for both read and write access: Administration Portal The Administration Portal is a graphical user interface provided by the Red Hat Virtualization Manager. It can be used to manage all the administrative resources in the environment and can be accessed from any supported web browser. See: Administration Guide VM Portal The VM Portal is a graphical user interface provided by the Red Hat Virtualization Manager. It has limited permissions for managing virtual machine resources and is targeted at end users. See: Introduction to the VM Portal Cockpit In Red Hat Virtualization, the Cockpit web interface can be used to perform administrative tasks on a host. It is available by default on Red Hat Virtualization Hosts, and can be installed on Red Hat Enterprise Linux hosts. REST API The Red Hat Virtualization REST API provides a software interface for querying and modifying the Red Hat Virtualization environment. The REST API can be used by any programming language that supports HTTP actions. See: REST API Guide Software Development Kit (SDK) The Python and Java SDKs are fully supported interfaces for interacting with the Red Hat Virtualization Manager. See: Python SDK Guide Java SDK Guide Ansible Ansible provides modules to automate post-installation tasks on Red Hat Virtualization. See: Automating Configuration Tasks using Ansible in the Administration Guide. Self-Hosted Engine Command Line Utility The hosted-engine command is used to perform administrative tasks on the Manager virtual machine in self-hosted engine environments. See: Administering the Manager Virtual Machine in the Administration Guide. VDSM Hooks VDSM hooks trigger modifications to virtual machines, based on custom properties specified in the Administration Portal. See: VDSM and Hooks in the Administration Guide. 4.2. Supported Interfaces for Read Access Direct interaction with these interfaces is supported and encouraged only for read access. Use of these interfaces for write access is not supported unless explicitly requested by Red Hat Support. Red Hat Virtualization Manager History Database Read access to the Red Hat Virtualization Manager history ( ovirt_engine_history ) database using the database views specified in the Data Warehouse Guide is supported. Write access is not supported. Libvirt on Hosts Read access to libvirt using the virsh -r command is a supported method of interacting with virtualization hosts. Write access is not supported. 4.3. Unsupported Interfaces Direct interaction with these interfaces is not supported unless your use of them is explicitly requested by Red Hat Support: The vdsm-client Command Use of the vdsm-client command to interact with virtualization hosts is not supported unless explicitly requested by Red Hat Support. Red Hat Virtualization Manager Database Direct access to, and manipulation of, the Red Hat Virtualization Manager ( engine ) database is not supported unless explicitly requested by Red Hat Support.
Important Red Hat Support will not debug user-created scripts or hooks except where it can be demonstrated that there is an issue with the interface being used rather than the user-created script itself. For more general information about Red Hat's support policies, see Production Support Scope of Coverage. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/product_guide/accessing-rhv
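A minimal read-only sketch of two of the supported access paths, assuming a Manager reachable at manager.example.com and the admin@internal account (both placeholders), might look like this:

# List virtual machines through the REST API
curl -k -u 'admin@internal:password' -H 'Accept: application/xml' https://manager.example.com/ovirt-engine/api/vms

# Read-only libvirt query on a virtualization host
virsh -r list --all

The -k flag skips TLS verification and is only suitable for quick tests; for regular use, download the Manager CA certificate and pass it to curl with --cacert instead.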
Appendix A. Configuration reference | Appendix A. Configuration reference As a storage administrator, you can set various options for the Ceph Object Gateway. These options contain default values. If you do not specify each option, then the default value is set automatically. To set specific values for these options, update the configuration database by using the ceph config set client.rgw OPTION VALUE command. A.1. General settings Name Description Type Default rgw_data Sets the location of the data files for Ceph Object Gateway. String /var/lib/ceph/radosgw/USDcluster-USDid rgw_enable_apis Enables the specified APIs. String s3, s3website, swift, swift_auth, admin, sts, iam, notifications rgw_cache_enabled Whether the Ceph Object Gateway cache is enabled. Boolean true rgw_cache_lru_size The number of entries in the Ceph Object Gateway cache. Integer 10000 rgw_socket_path The socket path for the domain socket. FastCgiExternalServer uses this socket. If you do not specify a socket path, Ceph Object Gateway will not run as an external server. The path you specify here must be the same as the path specified in the rgw.conf file. String N/A rgw_host The host for the Ceph Object Gateway instance. Can be an IP address or a hostname. String 0.0.0.0 rgw_port Port the instance listens for requests. If not specified, Ceph Object Gateway runs external FastCGI. String None rgw_dns_name The DNS name of the served domain. See also the hostnames setting within zone groups. String None rgw_script_uri The alternative value for the SCRIPT_URI if not set in the request. String None rgw_request_uri The alternative value for the REQUEST_URI if not set in the request. String None rgw_print_continue Enable 100-continue if it is operational. Boolean true rgw_remote_addr_param The remote address parameter. For example, the HTTP field containing the remote address, or the X-Forwarded-For address if a reverse proxy is operational. String REMOTE_ADDR rgw_op_thread_timeout The timeout in seconds for open threads. Integer 600 rgw_op_thread_suicide_timeout The timeout in seconds before a Ceph Object Gateway process dies. Disabled if set to 0 . Integer 0 rgw_thread_pool_size The size of the thread pool. Integer 512 threads. rgw_num_control_oids The number of notification objects used for cache synchronization between different rgw instances. Integer 8 rgw_init_timeout The number of seconds before Ceph Object Gateway gives up on initialization. Integer 30 rgw_mime_types_file The path and location of the MIME types. Used for Swift auto-detection of object types. String /etc/mime.types rgw_gc_max_objs The maximum number of objects that may be handled by garbage collection in one garbage collection processing cycle. Integer 32 rgw_gc_obj_min_wait The minimum wait time before the object may be removed and handled by garbage collection processing. Integer 2 * 3600 rgw_gc_processor_max_time The maximum time between the beginning of two consecutive garbage collection processing cycles. Integer 3600 rgw_gc_processor_period The cycle time for garbage collection processing. Integer 3600 rgw_s3 success_create_obj_status The alternate success status response for create-obj . Integer 0 rgw_resolve_cname Whether rgw should use the DNS CNAME record of the request hostname field (if hostname is not equal to rgw_dns name ). Boolean false rgw_object_stripe_size The size of an object stripe for Ceph Object Gateway objects. Integer 4 << 20 rgw_extended_http_attrs Add a new set of attributes that could be set on an object. 
These extra attributes can be set through HTTP header fields when putting the objects. If set, these attributes will return as HTTP fields when doing GET/HEAD on the object. String None. For example: "content_foo, content_bar" rgw_exit_timeout_secs Number of seconds to wait for a process before exiting unconditionally. Integer 120 rgw_get_obj_window_size The window size in bytes for a single object request. Integer 16 << 20 rgw_get_obj_max_req_size The maximum request size of a single get operation sent to the Ceph Storage Cluster. Integer 4 << 20 rgw_relaxed_s3_bucket_names Enables relaxed S3 bucket names rules for zone group buckets. Boolean false rgw_list buckets_max_chunk The maximum number of buckets to retrieve in a single operation when listing user buckets. Integer 1000 rgw_override_bucket_index_max_shards The number of shards for the bucket index object. A value of 0 indicates there is no sharding. Red Hat does not recommend setting a value too large (for example, 1000 ) as it increases the cost for bucket listing. This variable should be set in the [client] or the [global] section so it is automatically applied to radosgw-admin commands. Integer 0 rgw_curl_wait_timeout_ms The timeout in milliseconds for certain curl calls. Integer 1000 rgw_copy_obj_progress Enables output of object progress during long copy operations. Boolean true rgw_copy_obj_progress_every_bytes The minimum bytes between copy progress output. Integer 1024 * 1024 rgw_admin_entry The entry point for an admin request URL. String admin rgw_content_length_compat Enable compatibility handling of FCGI requests with both CONTENT_LENGTH AND HTTP_CONTENT_LENGTH set. Boolean false rgw_bucket_default_quota_max_objects The default maximum number of objects per bucket. This value is set on new users if no other quota is specified. It has no effect on existing users. This variable should be set in the [client] or the [global] section so it is automatically applied to radosgw-admin commands. Integer -1 rgw_bucket_quota_ttl The amount of time in seconds cached quota information is trusted. After this timeout, the quota information will be re-fetched from the cluster. Integer 600 rgw_user_quota_bucket_sync_interval The amount of time in seconds bucket quota information is accumulated before syncing to the cluster. During this time, other RGW instances will not see the changes in bucket quota stats from operations on this instance. Integer 180 rgw_user_quota_sync_interval The amount of time in seconds user quota information is accumulated before syncing to the cluster. During this time, other RGW instances will not see the changes in user quota stats from operations on this instance. Integer 3600 * 24 log_meta A zone parameter to determine whether or not the gateway logs the metadata operations. Boolean false log_data A zone parameter to determine whether or not the gateway logs the data operations. Boolean false sync_from_all A radosgw-admin command to set or unset whether zone syncs from all zonegroup peers. Boolean false A.2. About pools Ceph zones map to a series of Ceph Storage Cluster pools. Manually Created Pools vs. Generated Pools If the user key for the Ceph Object Gateway contains write capabilities, the gateway has the ability to create pools automatically. This is convenient for getting started. However, the Ceph Object Storage Cluster uses the placement group default values unless they were set in the Ceph configuration file. Additionally, Ceph will use the default CRUSH hierarchy. 
These settings are NOT ideal for production systems. The default pools for the Ceph Object Gateway's default zone include: .rgw.root .default.rgw.control .default.rgw.meta .default.rgw.log .default.rgw.buckets.index .default.rgw.buckets.data .default.rgw.buckets.non-ec The Ceph Object Gateway creates pools on a per zone basis. If you create the pools manually, prepend the zone name. The system pools store objects related to, for example, system control, logging, and user information. By convention, these pool names have the zone name prepended to the pool name. .<zone-name>.rgw.control : The control pool. .<zone-name>.log : The log pool contains logs of all bucket/container and object actions, such as create, read, update, and delete. .<zone-name>.rgw.buckets.index : This pool stores the index of the buckets. .<zone-name>.rgw.buckets.data : This pool stores the data of the buckets. .<zone-name>.rgw.meta : The metadata pool stores user_keys and other critical metadata. .<zone-name>.meta:users.uid : The user ID pool contains a map of unique user IDs. .<zone-name>.meta:users.keys : The keys pool contains access keys and secret keys for each user ID. .<zone-name>.meta:users.email : The email pool contains email addresses associated with a user ID. .<zone-name>.meta:users.swift : The Swift pool contains the Swift subuser information for a user ID. Ceph Object Gateways store data for the bucket index ( index_pool ) and bucket data ( data_pool ) in placement pools. These may overlap; that is, you may use the same pool for the index and the data. The index pool for default placement is {zone-name}.rgw.buckets.index and for the data pool for default placement is {zone-name}.rgw.buckets . Name Description Type Default rgw_zonegroup_root_pool The pool for storing all zone group-specific information. String .rgw.root rgw_zone_root_pool The pool for storing zone-specific information. String .rgw.root A.3. Lifecycle settings As a storage administrator, you can set various bucket lifecycle options for a Ceph Object Gateway. These options contain default values. If you do not specify each option, then the default value is set automatically. To set specific values for these options, update the configuration database by using the ceph config set client.rgw OPTION VALUE command. Name Description Type Default rgw_lc_debug_interval For developer use only to debug lifecycle rules by scaling expiration rules from days into an interval in seconds. Red Hat recommends that this option not be used in a production cluster. Integer -1 rgw_lc_lock_max_time The timeout value used internally by the Ceph Object Gateway. Integer 90 rgw_lc_max_objs Controls the sharding of the RADOS Gateway internal lifecycle work queues, and should only be set as part of a deliberate resharding workflow. Red Hat recommends not changing this setting after the setup of your cluster, without first contacting Red Hat support. Integer 32 rgw_lc_max_rules The number of lifecycle rules to include in one, per bucket, lifecycle configuration document. The Amazon Web Service (AWS) limit is 1000 rules. Integer 1000 rgw_lc_max_worker The number of lifecycle worker threads to run in parallel, processing bucket and index shards simultaneously. Red Hat does not recommend setting a value larger than 10 without contacting Red Hat support. Integer 3 rgw_lc_max_wp_worker The number of buckets that each lifecycle worker thread can process in parallel. Red Hat does not recommend setting a value larger than 10 without contacting Red Hat Support. 
Integer 3 rgw_lc_thread_delay A delay, in milliseconds, that can be injected into shard processing at several points. The default value is 0. Setting a value from 10 to 100 ms would reduce CPU utilization on RADOS Gateway instances and reduce the proportion of workload capacity of lifecycle threads relative to ingest if saturation is being observed. Integer 0 A.4. Swift settings Name Description Type Default rgw_enforce_swift_acls Enforces the Swift Access Control List (ACL) settings. Boolean true rgw_swift_token_expiration The time in seconds for expiring a Swift token. Integer 24 * 3600 rgw_swift_url The URL for the Ceph Object Gateway Swift API. String None rgw_swift_url_prefix The URL prefix for the Swift API, for example, http://fqdn.com/swift . swift N/A rgw_swift_auth_url Default URL for verifying v1 auth tokens (if not using internal Swift auth). String None rgw_swift_auth_entry The entry point for a Swift auth URL. String auth A.5. Logging settings Name Description Type Default debug_rgw_datacache Low level D3N logs can be enabled by the debug_rgw_datacache subsystem (up to debug_rgw_datacache = 30 ) Integer 1/5 rgw_log_nonexistent_bucket Enables Ceph Object Gateway to log a request for a non-existent bucket. Boolean false rgw_log_object_name The logging format for an object name. See manpage date for details about format specifiers. Date %Y-%m-%d-%H-%i-%n rgw_log_object_name_utc Whether a logged object name includes a UTC time. If false , it uses the local time. Boolean false rgw_usage_max_shards The maximum number of shards for usage logging. Integer 32 rgw_usage_max_user_shards The maximum number of shards used for a single user's usage logging. Integer 1 rgw_enable_ops_log Enable logging for each successful Ceph Object Gateway operation. Boolean false rgw_enable_usage_log Enable the usage log. Boolean false rgw_ops_log_rados Whether the operations log should be written to the Ceph Storage Cluster backend. Boolean true rgw_ops_log_socket_path The Unix domain socket for writing operations logs. String None rgw_ops_log_data-backlog The maximum data backlog data size for operations logs written to a Unix domain socket. Integer 5 << 20 rgw_usage_log_flush_threshold The number of dirty merged entries in the usage log before flushing synchronously. Integer 1024 rgw_usage_log_tick_interval Flush pending usage log data every n seconds. Integer 30 rgw_intent_log_object_name The logging format for the intent log object name. See manpage date for details about format specifiers. Date %Y-%m-%d-%i-%n rgw_intent_log_object_name_utc Whether the intent log object name includes a UTC time. If false , it uses the local time. Boolean false rgw_data_log_window The data log entries window in seconds. Integer 30 rgw_data_log_changes_size The number of in-memory entries to hold for the data changes log. Integer 1000 rgw_data_log_num_shards The number of shards (objects) on which to keep the data changes log. Integer 128 rgw_data_log_obj_prefix The object name prefix for the data log. String data_log rgw_replica_log_obj_prefix The object name prefix for the replica log. String replica log rgw_md_log_max_shards The maximum number of shards for the metadata log. Integer 64 rgw_log_http_headers Comma-delimited list of HTTP headers to include with ops log entries. Header names are case insensitive, and use the full header name with words separated by underscores. String None Note Changing the rgw_data_log_num_shards value is not supported. A.6. 
Keystone settings Name Description Type Default rgw_keystone_url The URL for the Keystone server. String None rgw_keystone_admin_token The Keystone admin token (shared secret). String None rgw_keystone_accepted_roles The roles required to serve requests. String Member, admin rgw_keystone_token_cache_size The maximum number of entries in each Keystone token cache. Integer 10000 A.7. Keystone integration configuration options You can integrate your configuration options into Keystone. See below for a detailed description of the available Keystone integration configuration options: Important After updating the Ceph configuration file, you must copy the new Ceph configuration file to all Ceph nodes in the storage cluster. rgw_s3_auth_use_keystone Description If set to true , the Ceph Object Gateway will authenticate users using Keystone. Type Boolean Default false nss_db_path Description The path to the NSS database. Type String Default "" rgw_keystone_url Description The URL for the administrative RESTful API on the Keystone server. Type String Default "" rgw_keystone_admin_token Description The token or shared secret that is configured internally in Keystone for administrative requests. Type String Default "" rgw_keystone_admin_user Description The keystone admin user name. Type String Default "" rgw_keystone_admin_password Description The keystone admin user password. Type String Default "" rgw_keystone_admin_tenant Description The Keystone admin user tenant for keystone v2.0. Type String Default "" rgw_keystone_admin_project Description the keystone admin user project for keystone v3. Type String Default "" rgw_trust_forwarded_https Description When a proxy in front of the Ceph Object Gateway is used for SSL termination, it does not whether incoming http connections are secure. Enable this option to trust the forwarded and X-forwarded headers sent by the proxy when determining when the connection is secure. This is mainly required for server-side encryption. Type Boolean Default false rgw_swift_account_in_url Description Whether the Swift account is encoded in the URL path. You must set this option to true and update the Keystone service catalog if you want the Ceph Object Gateway to support publicly-readable containers and temporary URLs. Type Boolean Default false rgw_keystone_admin_domain Description The Keystone admin user domain. Type String Default "" rgw_keystone_api_version Description The version of the Keystone API to use. Valid options are 2 or 3 . Type Integer Default 2 rgw_keystone_accepted_roles Description The roles required to serve requests. Type String Default member, Member, admin , rgw_keystone_accepted_admin_roles Description The list of roles allowing a user to gain administrative privileges. Type String Default ResellerAdmin, swiftoperator rgw_keystone_token_cache_size Description The maximum number of entries in the Keystone token cache. Type Integer Default 10000 rgw_keystone_verify_ssl Description If true Ceph will try to verify Keystone's SSL certificate. Type Boolean Default true rgw_keystone_implicit_tenants Description Create new users in their own tenants of the same name. Set this to true or false under most circumstances. For compatibility with versions of Red Hat Ceph Storage, it is also possible to set this to s3 or swift . This has the effect of splitting the identity space such that only the indicated protocol will use implicit tenants. Some older versions of Red Hat Ceph Storage only supported implicit tenants with Swift. 
Type String Default false rgw_max_attr_name_len Description The maximum length of metadata name. 0 skips the check. Type Size Default 0 rgw_max_attrs_num_in_req Description The maximum number of metadata items that can be put with a single request. Type uint Default 0 rgw_max_attr_size Description The maximum length of metadata value. 0 skips the check. Type Size Default 0 rgw_swift_versioning_enabled Description Enable Swift versioning. Type Boolean Default 0 or 1 rgw_keystone_accepted_reader_roles Description List of roles that can only be used for reads. Type String Default "" rgw_swift_enforce_content_length Description Send content length when listing containers. Type String Default false A.8. LDAP settings Name Description Type Example rgw_ldap_uri A space-separated list of LDAP servers in URI format. String ldaps://<ldap.your.domain> rgw_ldap_searchdn The LDAP search domain name, also known as base domain. String cn=users,cn=accounts,dc=example,dc=com rgw_ldap_binddn The gateway will bind with this LDAP entry (user match). String uid=admin,cn=users,dc=example,dc=com rgw_ldap_secret A file containing credentials for rgw_ldap_binddn . String /etc/openldap/secret rgw_ldap_dnattr LDAP attribute containing Ceph object gateway user names (to form binddns). String uid | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/object_gateway_guide/configuration-reference
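As an illustration of how the Keystone options above are typically applied, the following is a minimal, hypothetical sketch using the ceph config CLI; the endpoint URL, role list, and the client.rgw target are placeholder assumptions for an example deployment, not recommended defaults:

# Hypothetical example only: adjust the target and values for your deployment.
ceph config set client.rgw rgw_keystone_url https://keystone.example.com:5000
ceph config set client.rgw rgw_keystone_api_version 3
ceph config set client.rgw rgw_keystone_accepted_roles "member, admin"
ceph config set client.rgw rgw_s3_auth_use_keystone true

If you set these options in the Ceph configuration file instead, remember the note above about copying the updated file to all Ceph nodes in the storage cluster.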
Part IV. Additional director operations and configuration | Part IV. Additional director operations and configuration | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/director_installation_and_usage/additional_director_operations_and_configuration |
Chapter 5. Access Control Lists | Chapter 5. Access Control Lists Files and directories have permission sets for the owner of the file, the group associated with the file, and all other users for the system. However, these permission sets have limitations. For example, different permissions cannot be configured for different users. Thus, Access Control Lists (ACLs) were implemented. The Red Hat Enterprise Linux kernel provides ACL support for the ext3 file system and NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba. Along with support in the kernel, the acl package is required to implement ACLs. It contains the utilities used to add, modify, remove, and retrieve ACL information. The cp and mv commands copy or move any ACLs associated with files and directories. 5.1. Mounting File Systems Before using ACLs for a file or directory, the partition for the file or directory must be mounted with ACL support. If it is a local ext3 file system, it can be mounted with the following command: mount -t ext3 -o acl device-name partition For example: mount -t ext3 -o acl /dev/VolGroup00/LogVol02 /work Alternatively, if the partition is listed in the /etc/fstab file, the entry for the partition can include the acl option: If an ext3 file system is accessed via Samba and ACLs have been enabled for it, the ACLs are recognized because Samba has been compiled with the --with-acl-support option. No special flags are required when accessing or mounting a Samba share. 5.1.1. NFS By default, if the file system being exported by an NFS server supports ACLs and the NFS client can read ACLs, ACLs are utilized by the client system. To disable ACLs on NFS shares when configuring the server, include the no_acl option in the /etc/exports file. To disable ACLs on an NFS share when mounting it on a client, mount it with the no_acl option via the command line or the /etc/fstab file. 5.2. Setting Access ACLs There are two types of ACLs: access ACLs and default ACLs . An access ACL is the access control list for a specific file or directory. A default ACL can only be associated with a directory; if a file within the directory does not have an access ACL, it uses the rules of the default ACL for the directory. Default ACLs are optional. ACLs can be configured: Per user Per group Via the effective rights mask For users not in the user group for the file The setfacl utility sets ACLs for files and directories. Use the -m option to add or modify the ACL of a file or directory: Rules ( rules ) must be specified in the following formats. Multiple rules can be specified in the same command if they are separated by commas. u: uid : perms Sets the access ACL for a user. The user name or UID may be specified. The user may be any valid user on the system. g: gid : perms Sets the access ACL for a group. The group name or GID may be specified. The group may be any valid group on the system. m: perms Sets the effective rights mask. The mask is the union of all permissions of the owning group and all of the user and group entries. o: perms Sets the access ACL for users other than the ones in the group for the file. Permissions ( perms ) must be a combination of the characters r , w , and x for read, write, and execute. If a file or directory already has an ACL, and the setfacl command is used, the additional rules are added to the existing ACL or the existing rule is modified. Example 5.1. 
Give read and write permissions For example, to give read and write permissions to user andrius: To remove all the permissions for a user, group, or others, use the -x option and do not specify any permissions: Example 5.2. Remove all permissions For example, to remove all permissions from the user with UID 500: 5.3. Setting Default ACLs To set a default ACL, add d: before the rule and specify a directory instead of a file name. Example 5.3. Setting default ACLs For example, to set the default ACL for the /share/ directory to read and execute for users not in the user group (an access ACL for an individual file can override it): 5.4. Retrieving ACLs To determine the existing ACLs for a file or directory, use the getfacl command. In the example below, the getfacl is used to determine the existing ACLs for a file. Example 5.4. Retrieving ACLs The above command returns the following output: If a directory with a default ACL is specified, the default ACL is also displayed as illustrated below. For example, getfacl home/sales/ will display similar output: 5.5. Archiving File Systems With ACLs By default, the dump command now preserves ACLs during a backup operation. When archiving a file or file system with tar , use the --acls option to preserve ACLs. Similarly, when using cp to copy files with ACLs, include the --preserve=mode option to ensure that ACLs are copied across too. In addition, the -a option (equivalent to -dR --preserve=all ) of cp also preserves ACLs during a backup along with other information such as timestamps, SELinux contexts, and the like. For more information about dump , tar , or cp , refer to their respective man pages. The star utility is similar to the tar utility in that it can be used to generate archives of files; however, some of its options are different. Refer to Table 5.1, "Command Line Options for star " for a listing of more commonly used options. For all available options, refer to man star . The star package is required to use this utility. Table 5.1. Command Line Options for star Option Description -c Creates an archive file. -n Do not extract the files; use in conjunction with -x to show what extracting the files does. -r Replaces files in the archive. The files are written to the end of the archive file, replacing any files with the same path and file name. -t Displays the contents of the archive file. -u Updates the archive file. The files are written to the end of the archive if they do not exist in the archive, or if the files are newer than the files of the same name in the archive. This option only works if the archive is a file or an unblocked tape that may backspace. -x Extracts the files from the archive. If used with -U and a file in the archive is older than the corresponding file on the file system, the file is not extracted. -help Displays the most important options. -xhelp Displays the least important options. -/ Do not strip leading slashes from file names when extracting the files from an archive. By default, they are stripped when files are extracted. -acl When creating or extracting, archives or restores any ACLs associated with the files and directories. 5.6. Compatibility with Older Systems If an ACL has been set on any file on a given file system, that file system has the ext_attr attribute. This attribute can be seen using the following command: A file system that has acquired the ext_attr attribute can be mounted with older kernels, but those kernels do not enforce any ACLs which have been set. 
Versions of the e2fsck utility included in version 1.22 and higher of the e2fsprogs package (including the versions in Red Hat Enterprise Linux 2.1 and 4) can check a file system with the ext_attr attribute. Older versions refuse to check it. 5.7. ACL References Refer to the following man pages for more information. man acl - Description of ACLs man getfacl - Discusses how to get file access control lists man setfacl - Explains how to set file access control lists man star - Explains more about the star utility and its many options | [
"LABEL=/work /work ext3 acl 1 2",
"setfacl -m rules files",
"setfacl -m u:andrius:rw /project/somefile",
"setfacl -x rules files",
"setfacl -x u:500 /project/somefile",
"setfacl -m d:o:rx /share",
"getfacl home/john/picture.png",
"file: home/john/picture.png owner: john group: john user::rw- group::r-- other::r--",
"file: home/sales/ owner: john group: john user::rw- user:barryg:r-- group::r-- mask::r-- other::r-- default:user::rwx default:user:john:rwx default:group::r-x default:mask::rwx default:other::r-x",
"tune2fs -l filesystem-device"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-access_control_lists |
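As a short, hypothetical illustration of the archiving behaviour described in section 5.5 above — the archive name and paths are example values only:

# Create and extract an archive while preserving ACLs, as described above
tar --acls -cvf acl-backup.tar /work
tar --acls -xvf acl-backup.tar
# Copy files so that ACLs are carried across
cp --preserve=mode report.txt /work/report.txt
cp -a /work /work-mirror    # -a also keeps timestamps, SELinux contexts, and other attributes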
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/making-open-source-more-inclusive |
Chapter 38. ip | Chapter 38. ip This chapter describes the commands under the ip command. 38.1. ip availability list List IP availability for network Usage: Table 38.1. Command arguments Value Summary -h, --help Show this help message and exit --ip-version <ip-version> List ip availability of given ip version networks (default is 4) --project <project> List ip availability of given project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 38.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 38.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 38.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 38.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 38.2. ip availability show Show network IP availability details Usage: Table 38.6. Positional arguments Value Summary <network> Show ip availability for a specific network (name or ID) Table 38.7. Command arguments Value Summary -h, --help Show this help message and exit Table 38.8. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 38.9. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 38.10. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 38.11. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack ip availability list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--ip-version <ip-version>] [--project <project>] [--project-domain <project-domain>]",
"openstack ip availability show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <network>"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/ip |
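A couple of hypothetical invocations of the commands above; the project and network names are placeholders:

# List IPv4 availability for one project
openstack ip availability list --ip-version 4 --project demo
# Show IP availability details for a single network
openstack ip availability show private-net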
5.5.2. ftrace Documentation | 5.5.2. ftrace Documentation The ftrace framework is fully documented in the following files: ftrace - Function Tracer : file:///usr/share/doc/kernel-doc- version /Documentation/trace/ftrace.txt function tracer guts : file:///usr/share/doc/kernel-doc- version /Documentation/trace/ftrace-design.txt | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/ftrace-doc |
9.4. Automatic Bug Reporting Tool (ABRT) | 9.4. Automatic Bug Reporting Tool (ABRT) As of Red Hat Enterprise Linux 6.6, the Automatic Bug Reporting Tool (ABRT) has been updated to version 2. This update removes several limitations , but involves a number of changes to configuration and behavior. Problem data, such as that pertaining to a crash, is no longer stored in a database. This information is now stored only as files in a problem data directory. As a result, the /etc/abrt/abrt.conf configuration file has been simplified, and some configuration directives are now obsolete, or specified in other locations. The OpenGPGCheck , BlackList , ProcessUnpackaged , and BlackListedPaths directives are no longer specified in /etc/abrt/abrt.conf . Instead, they are specified in the /etc/abrt/abrt-action-save-package-data.conf file. The Database directive is no longer required or supported. The ActionsandReporters directive has been replaced by the post-create event. For further information about events in ABRT 2, see Section 9.4.1, "ABRT Events" . The [AnalyzerActionsAndReporters] section of the abrt.conf file is now obsolete. The directives that were previously configured in this section ( Kerneloops , CCpp , and Python ) have been replaced by the analyze_ * and the report_ * events. For further information about events in ABRT 2, see Section 9.4.1, "ABRT Events" . The functionality of the C/C++ hook has been replaced with the abrt-ccpp service, except for the ReadonlyLocalDebugInfoDirs directive, which has not yet been ported. The functionality of the Python hook has been replaced with the abrt-addon-python package. The functionality of the kernel oops hook has been replaced with the abrt-oops service and the related abrt-dump-oops and abrt-action-kerneloops commands. ABRT provides a number of commands to provide flexible automated reporting of problem data, including the following. reporter-bugzilla Checks for bugs with the same ABRT hash as the specified problem data directory and comments on an existing bug, or creates a new bug, as appropriate. This command requires the libreport-plugin-bugzilla package in addition to the default package. reporter-kerneloops Reports kernel oops to an appropriate site. This command requires the libreport-plugin-kerneloops package in addition to the default package. reporter-mailx Sends the contents of a problem data directory using email. This command requires the libreport-plugin-mailx plug-in in addition to the default package. reporter-print Prints problem data to standard output or a specified file. This command requires the libreport-plugin-logger package in addition to the default package. reporter-rhtsupport Reports problem data to RHT Support. This command requires the libreport-plugin-rhtsupport plug-in in addition to the default package. reporter-upload Uploads a tarball of the problem data directory to a specified URL. This command requires the libreport-plugin-reportuploader package in addition to the default package. 9.4.1. ABRT Events ABRT 2 adds configurable events to the ABRT workflow. Events are triggered when problem data is recorded. They specify actions to perform on problem data and can be used to modify how data is analyzed or to specify a location to which data should be uploaded. You can also have events run only on problem data with certain characteristics. Event configuration files are stored in the /etc/libreport/events.d directory. They contain the following: Event name The name of the event being triggered. 
This is the first argument of the EVENT parameter. For example, the following event configuration file contains an event called report_Bugzilla . Conditions The conditions that must be matched by the problem data for the event to be triggered on that problem data. In this case, the following event is triggered only if the problem data directory contains an analyzer file that contains a value of Python . Actions The actions that are performed on the problem data when this event is run. In this case, the reporter-bugzilla command is run. For further details, see the man page: | [
"EVENT=report_Bugzilla analyzer=Python reporter-bugzilla -c /etc/libreport/plugins/Bugzilla.conf",
"EVENT=report_Bugzilla analyzer=Python reporter-bugzilla -c /etc/libreport/plugins/Bugzilla.conf",
"EVENT=report_Bugzilla analyzer=Python reporter-bugzilla -c /etc/libreport/plugins/Bugzilla.conf",
"man report_event.conf"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-kernel-abrt |
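Following the same pattern as the event file shown above, a hypothetical event that mails C/C++ crash reports could look like the following; the file name and condition are illustrative assumptions, and reporter-mailx must be configured separately:

# /etc/libreport/events.d/mailx_event.conf (example name)
EVENT=report_Mailx analyzer=CCpp reporter-mailx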
16.7. Keyboard Configuration | 16.7. Keyboard Configuration Using your mouse, select the correct layout type (for example, U.S. English) for the keyboard you would prefer to use for the installation and as the system default (refer to Figure 16.3, "Keyboard Configuration" ). Once you have made your selection, click Next to continue. Figure 16.3. Keyboard Configuration Note To change your keyboard layout type after you have completed the installation, use the Keyboard Configuration Tool . Type the system-config-keyboard command in a shell prompt to launch the Keyboard Configuration Tool . If you are not root, it prompts you for the root password to continue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-kbdconfig-ppc
Chapter 10. Using Ansible playbooks to manage self-service rules in IdM | Chapter 10. Using Ansible playbooks to manage self-service rules in IdM This section introduces self-service rules in Identity Management (IdM) and describes how to create and edit self-service access rules using Ansible playbooks. Self-service access control rules allow an IdM entity to perform specified operations on its IdM Directory Server entry. Self-service access control in IdM Using Ansible to ensure that a self-service rule is present Using Ansible to ensure that a self-service rule is absent Using Ansible to ensure that a self-service rule has specific attributes Using Ansible to ensure that a self-service rule does not have specific attributes 10.1. Self-service access control in IdM Self-service access control rules define which operations an Identity Management (IdM) entity can perform on its IdM Directory Server entry: for example, IdM users have the ability to update their own passwords. This method of control allows an authenticated IdM entity to edit specific attributes within its LDAP entry, but does not allow add or delete operations on the entire entry. Warning Be careful when working with self-service access control rules: configuring access control rules improperly can inadvertently elevate an entity's privileges. 10.2. Using Ansible to ensure that a self-service rule is present The following procedure describes how to use an Ansible playbook to define self-service rules and ensure their presence on an Identity Management (IdM) server. In this example, the new Users can manage their own name details rule grants users the ability to change their own givenname , displayname , title and initials attributes. This allows them to, for example, change their display name or initials if they want to. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the new self-service rule. Set the permission variable to a comma-separated list of permissions to grant: read and write . Set the attribute variable to a list of attributes that users can manage themselves: givenname , displayname , title , and initials . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory The /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 10.3. 
Using Ansible to ensure that a self-service rule is absent The following procedure describes how to use an Ansible playbook to ensure a specified self-service rule is absent from your IdM configuration. The example below describes how to make sure the Users can manage their own name details self-service rule does not exist in IdM. This will ensure that users cannot, for example, change their own display name or initials. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule. Set the state variable to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory Sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 10.4. Using Ansible to ensure that a self-service rule has specific attributes The following procedure describes how to use an Ansible playbook to ensure that an already existing self-service rule has specific settings. In the example, you ensure the Users can manage their own name details self-service rule also has the surname member attribute. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The Users can manage their own name details self-service rule exists in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-member-present.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-member-present-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule to modify. Set the attribute variable to surname . Set the action variable to member . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. 
Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file available in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory 10.5. Using Ansible to ensure that a self-service rule does not have specific attributes The following procedure describes how to use an Ansible playbook to ensure that a self-service rule does not have specific settings. You can use this playbook to make sure a self-service rule does not grant undesired access. In the example, you ensure the Users can manage their own name details self-service rule does not have the givenname and surname member attributes. Prerequisites On the control node: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. The Users can manage their own name details self-service rule exists in IdM. Procedure Navigate to the ~/ MyPlaybooks / directory: Make a copy of the selfservice-member-absent.yml file located in the /usr/share/doc/ansible-freeipa/playbooks/selfservice/ directory: Open the selfservice-member-absent-copy.yml Ansible playbook file for editing. Adapt the file by setting the following variables in the ipaselfservice task section: Set the ipaadmin_password variable to the password of the IdM administrator. Set the name variable to the name of the self-service rule you want to modify. Set the attribute variable to givenname and surname . Set the action variable to member . Set the state variable to absent . This is the modified Ansible playbook file for the current example: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: Additional resources Self-service access control in IdM The README-selfservice.md file in the /usr/share/doc/ansible-freeipa/ directory The sample playbooks in the /usr/share/doc/ansible-freeipa/playbooks/selfservice directory | [
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-present.yml selfservice-present-copy.yml",
"--- - name: Self-service present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" permission: read, write attribute: - givenname - displayname - title - initials",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-absent.yml selfservice-absent-copy.yml",
"--- - name: Self-service absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-present.yml selfservice-member-present-copy.yml",
"--- - name: Self-service member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attribute surname is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - surname action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-absent.yml selfservice-member-absent-copy.yml",
"--- - name: Self-service member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attributes givenname and surname are absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - givenname - surname action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-absent-copy.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/using-ansible-playbooks-to-manage-self-service-rules-in-idm_using-ansible-to-install-and-manage-identity-management |
Chapter 10. Scaling Compute nodes with director Operator | Chapter 10. Scaling Compute nodes with director Operator If you require more or fewer compute resources for your overcloud, you can scale the number of Compute nodes according to your requirements. 10.1. Adding Compute nodes to your overcloud with the director Operator To add more Compute nodes to your overcloud, you must increase the node count for the compute OpenStackBaremetalSet resource. When a new node is provisioned, a new OpenStackConfigGenerator resource is created to generate a new set of Ansible playbooks. Use the OpenStackConfigVersion to create or update the OpenStackDeploy object to reapply the Ansible configuration to your overcloud. Prerequisites Ensure your OpenShift Container Platform cluster is operational and you have installed the director Operator correctly. Deploy and configure an overcloud that runs in your OCP cluster. Ensure that you have installed the oc command line tool on your workstation. Check that you have enough hosts in a ready state in the openshift-machine-api namespace. Run the oc get baremetalhosts -n openshift-machine-api command to check the available hosts. For more information on managing your bare metal hosts, see "Managing bare metal hosts". Procedure Modify the YAML configuration for the compute OpenStackBaremetalSet and increase the count parameter for the resource: The OpenStackBaremetalSet resource automatically provisions new nodes with the Red Hat Enterprise Linux base operating system. Wait until the provisioning process completes. Check the nodes periodically to determine the readiness of the nodes: Generate the Ansible Playbooks using OpenStackConfigGenerator, see Configuring overcloud software with the director Operator . Additional resources "Managing bare metal hosts" 10.2. Removing Compute nodes from your overcloud with the director Operator To remove a Compute node from your overcloud, you must disable the Compute node, mark it for deletion, and decrease the node count for the compute OpenStackBaremetalSet resource. Note If you scale the overcloud with a new node in the same role, the node reuses the host names starting with the lowest ID suffix and corresponding IP reservation. Prerequisites The workloads on the Compute nodes have been migrated to other Compute nodes. For more information, see Migrating virtual machine instances between Compute nodes . Procedure Access the remote shell for openstackclient : Identify the Compute node that you want to remove: Disable the Compute service on the node to prevent the node from scheduling new instances: Annotate the bare-metal node to prevent Metal 3 from starting the node: Replace <node> with the name of the BareMetalHost resource. Replace <metal3-pod> with the name of your metal3 pod. Log in to the Compute node as the root user and shut down the bare-metal node: If the Compute node is not accessible, complete the following steps: Log in to a Controller node as the root user. If Instance HA is enabled, disable the STONITH device for the Compute node: Replace <stonith_resource_name> with the name of the STONITH resource that corresponds to the node. The resource name uses the format <resource_agent>-<host_mac> . You can find the resource agent and the host MAC address in the FencingConfig section of the fencing.yaml file. Use IPMI to power off the bare-metal node. For more information, see your hardware vendor documentation. 
Retrieve the BareMetalHost resource that corresponds to the node that you want to remove: To change the status of the annotatedForDeletion parameter to true in the OpenStackBaremetalSet resource, annotate the BareMetalHost resource with osp-director.openstack.org/delete-host=true : Optional: Confirm that the annotatedForDeletion status has changed to true in the OpenStackBaremetalSet resource: Decrease the count parameter for the compute OpenStackBaremetalSet resource: When you reduce the resource count of the OpenStackBaremetalSet resource, you trigger the corresponding controller to handle the resource deletion, which causes the following actions: Director Operator deletes the corresponding IP reservations from OpenStackIPSet and OpenStackNetConfig for the node. Director Operator flags the IP reservation entry in the OpenStackNet resource as deleted: Optional: To make the IP reservations of the deleted OpenStackBaremetalSet resource available for other roles to use, set the value of the spec.preserveReservations parameter to false in the OpenStackNetConfig object. Access the remote shell for openstackclient : Remove the Compute service entries from the overcloud: Check the Compute network agents entries in the overcloud and remove them if they exist: Exit from openstackclient : | [
"oc patch osbms compute --type=merge --patch '{\"spec\":{\"count\":3}}' -n openstack",
"oc get baremetalhosts -n openshift-machine-api oc get openstackbaremetalset",
"oc rsh -n openstack openstackclient",
"openstack compute service list",
"openstack compute service set <hostname> nova-compute --disable",
"oc annotate baremetalhost <node> baremetalhost.metal3.io/detached=true oc logs --since=1h <metal3-pod> metal3-baremetal-operator | grep -i detach oc get baremetalhost <node> -o json | jq .status.operationalStatus \"detached\"",
"shutdown -h now",
"pcs stonith disable <stonith_resource_name>",
"oc get openstackbaremetalset compute -o json | jq '.status.baremetalHosts | to_entries[] | \"\\(.key) => \\(.value | .hostRef)\"' \"compute-0, openshift-worker-3\" \"compute-1, openshift-worker-4\"",
"oc annotate -n openshift-machine-api bmh/openshift-worker-3 osp-director.openstack.org/delete-host=true --overwrite",
"oc get openstackbaremetalset compute -o json -n openstack | jq .status { \"baremetalHosts\": { \"compute-0\": { \"annotatedForDeletion\": true, \"ctlplaneIP\": \"192.168.25.105/24\", \"hostRef\": \"openshift-worker-3\", \"hostname\": \"compute-0\", \"networkDataSecretName\": \"compute-cloudinit-networkdata-openshift-worker-3\", \"provisioningState\": \"provisioned\", \"userDataSecretName\": \"compute-cloudinit-userdata-openshift-worker-3\" }, \"compute-1\": { \"annotatedForDeletion\": false, \"ctlplaneIP\": \"192.168.25.106/24\", \"hostRef\": \"openshift-worker-4\", \"hostname\": \"compute-1\", \"networkDataSecretName\": \"compute-cloudinit-networkdata-openshift-worker-4\", \"provisioningState\": \"provisioned\", \"userDataSecretName\": \"compute-cloudinit-userdata-openshift-worker-4\" } }, \"provisioningStatus\": { \"readyCount\": 2, \"reason\": \"All requested BaremetalHosts have been provisioned\", \"state\": \"provisioned\" } }",
"oc patch openstackbaremetalset compute --type=merge --patch '{\"spec\":{\"count\":1}}' -n openstack",
"oc get osnet ctlplane -o json -n openstack | jq .status.reservations { \"compute-0\": { \"deleted\": true, \"ip\": \"172.22.0.140\" }, \"compute-1\": { \"deleted\": false, \"ip\": \"172.22.0.100\" }, \"controller-0\": { \"deleted\": false, \"ip\": \"172.22.0.120\" }, \"controlplane\": { \"deleted\": false, \"ip\": \"172.22.0.110\" }, \"openstackclient-0\": { \"deleted\": false, \"ip\": \"172.22.0.251\" }",
"oc rsh openstackclient -n openstack",
"openstack compute service list openstack compute service delete <service-id>",
"openstack network agent list for AGENT in USD(openstack network agent list --host <scaled-down-node> -c ID -f value) ; do openstack network agent delete USDAGENT ; done",
"exit"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/rhosp_director_operator_for_openshift_container_platform/assembly_scaling-compute-nodes-with-director-operator_rhosp-director-operator |
Building your RHEL AI environment | Building your RHEL AI environment Red Hat Enterprise Linux AI 1.2 Creating accounts, initalizing RHEL AI, downloading models, and serving/chat customizations Red Hat RHEL AI Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html/building_your_rhel_ai_environment/index |
Declarative cluster configuration | Declarative cluster configuration Red Hat OpenShift GitOps 1.15 Configuring an OpenShift cluster with cluster configurations by using OpenShift GitOps and creating and synchronizing applications in the default and code mode by using the GitOps CLI Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html/declarative_cluster_configuration/index |
Chapter 145. KafkaRebalanceSpec schema reference | Chapter 145. KafkaRebalanceSpec schema reference Used in: KafkaRebalance Property Property type Description mode string (one of [remove-brokers, full, add-brokers]) Mode to run the rebalancing. The supported modes are full , add-brokers , remove-brokers . If not specified, the full mode is used by default. full mode runs the rebalancing across all the brokers in the cluster. add-brokers mode can be used after scaling up the cluster to move some replicas to the newly added brokers. remove-brokers mode can be used before scaling down the cluster to move replicas out of the brokers to be removed. brokers integer array The list of newly added brokers in case of scaling up or the ones to be removed in case of scaling down to use for rebalancing. This list can be used only with rebalancing mode add-brokers and removed-brokers . It is ignored with full mode. goals string array A list of goals, ordered by decreasing priority, to use for generating and executing the rebalance proposal. The supported goals are available at https://github.com/linkedin/cruise-control#goals . If an empty goals list is provided, the goals declared in the default.goals Cruise Control configuration parameter are used. skipHardGoalCheck boolean Whether to allow the hard goals specified in the Kafka CR to be skipped in optimization proposal generation. This can be useful when some of those hard goals are preventing a balance solution being found. Default is false. rebalanceDisk boolean Enables intra-broker disk balancing, which balances disk space utilization between disks on the same broker. Only applies to Kafka deployments that use JBOD storage with multiple disks. When enabled, inter-broker balancing is disabled. Default is false. excludedTopics string A regular expression where any matching topics will be excluded from the calculation of optimization proposals. This expression will be parsed by the java.util.regex.Pattern class; for more information on the supported format consult the documentation for that class. concurrentPartitionMovementsPerBroker integer The upper bound of ongoing partition replica movements going into/out of each broker. Default is 5. concurrentIntraBrokerPartitionMovements integer The upper bound of ongoing partition replica movements between disks within each broker. Default is 2. concurrentLeaderMovements integer The upper bound of ongoing partition leadership movements. Default is 1000. replicationThrottle integer The upper bound, in bytes per second, on the bandwidth used to move replicas. There is no limit by default. replicaMovementStrategies string array A list of strategy class names used to determine the execution order for the replica movements in the generated optimization proposal. By default BaseReplicaMovementStrategy is used, which will execute the replica movements in the order that they were generated. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaRebalanceSpec-reference |
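To make the schema above more concrete, here is a minimal, illustrative KafkaRebalance resource; the cluster name, broker IDs, and goal list are example values only and assume an existing Kafka cluster managed by Streams for Apache Kafka:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster      # must reference the target Kafka cluster
spec:
  mode: remove-brokers                  # full, add-brokers, or remove-brokers
  brokers: [3, 4]                       # only used with add-brokers / remove-brokers
  goals:
    - RackAwareGoal
    - ReplicaCapacityGoal
    - DiskCapacityGoal
  skipHardGoalCheck: false
  concurrentPartitionMovementsPerBroker: 5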
Preface | Preface Red Hat Quay offers a comprehensive permissions model, which allows administrators the ability to control who can access, manage, and modify repositories at a granular level. The following sections show you how to manage user access, define team roles, set permissions for users and robot accounts, and define the visibility of a repository. These guides include instructions using both the Red Hat Quay UI and the API. The following topics are covered: Role-based access controls Adjusting repository visibility Creating and managing robot accounts Clair vulnerability reporting | null | https://docs.redhat.com/en/documentation/red_hat_quay/3/html/managing_access_and_permissions/pr01 |
Kafka configuration properties | Kafka configuration properties Red Hat Streams for Apache Kafka 2.7 Use configuration properties to configure Kafka components | [
"Further, when in `read_committed` the seekToEnd method will return the LSO .",
"1) If no partition is specified but a key is present, choose a partition based on a hash of the key.",
"2) If no partition or key is present, choose the sticky partition that changes when at least batch.size bytes are produced to the partition. * `org.apache.kafka.clients.producer.RoundRobinPartitioner`: A partitioning strategy where each record in a series of consecutive records is sent to a different partition, regardless of whether the 'key' is provided or not, until partitions run out and the process starts over again. Note: There's a known issue that will cause uneven distribution when a new batch is created. See KAFKA-9965 for more detail.",
"dnf install <package_name>",
"dnf install <path_to_download_package>"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html-single/kafka_configuration_properties/index |
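As a brief illustration of the consumer-side behaviour quoted above, a hypothetical consumer configuration fragment; the broker address and group ID are placeholders:

# consumer.properties (example values)
bootstrap.servers=broker1.example.com:9092
group.id=transactional-readers
# Only consume messages from committed transactions; with read_committed,
# end-offset lookups such as seekToEnd stop at the LSO as noted above.
isolation.level=read_committed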
Chapter 6. Selecting a container runtime | Chapter 6. Selecting a container runtime The runc and crun are container runtimes and can be used interchangeably as both implement the OCI runtime specification. The crun container runtime has a couple of advantages over runc, as it is faster and requires less memory. Due to that, the crun container runtime is the recommended container runtime for use. 6.1. The runc container runtime The runc container runtime is a lightweight, portable implementation of the Open Container Initiative (OCI) container runtime specification. The runc runtime shares a lot of low-level code with Docker but it is not dependent on any of the components of the Docker platform. The runc supports Linux namespaces, live migration, and has portable performance profiles. It also provides full support for Linux security features such as SELinux, control groups (cgroups), seccomp, and others. You can build and run images with runc, or you can run OCI-compatible images with runc. 6.2. The crun container runtime The crun is a fast and low-memory footprint OCI container runtime written in C. The crun binary is up to 50 times smaller and up to twice as fast as the runc binary. Using crun, you can also set a minimal number of processes when running your container. The crun runtime also supports OCI hooks. Additional features of crun include: Sharing files by group for rootless containers Controlling the stdout and stderr of OCI hooks Running older versions of systemd on cgroup v2 A C library that is used by other programs Extensibility Portability Additional resources An introduction to crun, a fast and low-memory footprint container runtime 6.3. Running containers with runc and crun With runc or crun, containers are configured using bundles. A bundle for a container is a directory that includes a specification file named config.json and a root filesystem. The root filesystem contains the contents of the container. Note The <runtime> can be crun or runc. Prerequisites The container-tools meta-package is installed. Procedure Pull the registry.access.redhat.com/ubi9/ubi container image: Export the registry.access.redhat.com/ubi9/ubi image to the rhel.tar archive: Create the bundle/rootfs directory: Extract the rhel.tar archive into the bundle/rootfs directory: Create a new specification file named config.json for the bundle: The -b option specifies the bundle directory. The default value is the current directory. Optional: Change the settings: Create an instance of a container named myubi for a bundle: Start a myubi container: Note The name of a container instance must be unique to the host. To start a new instance of a container: # <runtime> start <container_name> Verification List containers started by <runtime> : Additional resources crun and runc man pages on your system An introduction to crun, a fast and low-memory footprint container runtime 6.4. Temporarily changing the container runtime You can use the podman run command with the --runtime option to change the container runtime. Note The <runtime> can be crun or runc. Prerequisites The container-tools meta-package is installed. Procedure Pull the registry.access.redhat.com/ubi9/ubi container image: Change the container runtime using the --runtime option: Optional: List all images: Verification Ensure that the OCI runtime is set to <runtime> in the myubi container: Additional resources An introduction to crun, a fast and low-memory footprint container runtime 6.5. 
Permanently changing the container runtime You can set the container runtime and its options in the /etc/containers/containers.conf configuration file as a root user or in the $HOME/.config/containers/containers.conf configuration file as a non-root user. Note The <runtime> can be crun or runc. Prerequisites The container-tools meta-package is installed. Procedure Change the runtime in the /etc/containers/containers.conf file: Run the container named myubi: Verification Ensure that the OCI runtime is set to <runtime> in the myubi container: Additional resources An introduction to crun, a fast and low-memory footprint container runtime containers.conf man page on your system | [
"podman pull registry.access.redhat.com/ubi9/ubi",
"podman export $(podman create registry.access.redhat.com/ubi9/ubi) > rhel.tar",
"mkdir -p bundle/rootfs",
"tar -C bundle/rootfs -xf rhel.tar",
"<runtime> spec -b bundle",
"vi bundle/config.json",
"<runtime> create -b bundle/ myubi",
"<runtime> start myubi",
"<runtime> list ID PID STATUS BUNDLE CREATED OWNER myubi 0 stopped /root/bundle 2021-09-14T09:52:26.659714605Z root",
"podman pull registry.access.redhat.com/ubi9/ubi",
"podman run --name=myubi -dt --runtime=<runtime> ubi9 e4654eb4df12ac031f1d0f2657dc4ae6ff8eb0085bf114623b66cc664072e69b",
"podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e4654eb4df12 registry.access.redhat.com/ubi9:latest bash 4 seconds ago Up 4 seconds ago myubi",
"podman inspect myubi --format \"{{.OCIRuntime}}\" <runtime>",
"vim /etc/containers/containers.conf [engine] runtime = \" <runtime> \"",
"podman run --name=myubi -dt ubi9 bash Resolved \"ubi9\" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf) Trying to pull registry.access.redhat.com/ubi9:latest... Storing signatures",
"podman inspect myubi --format \"{{.OCIRuntime}}\" <runtime>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/building_running_and_managing_containers/selecting-a-container-runtime_building-running-and-managing-containers |
3.12. Common Tunable Parameters | 3.12. Common Tunable Parameters The following parameters are present in every created cgroup, regardless of the subsystem that the cgroup is using: tasks contains a list of processes, represented by their PIDs, that are running in a cgroup. The list of PIDs is not guaranteed to be ordered or unique (that is, it may contain duplicate entries). Writing a PID into the tasks file of a cgroup moves that process into that cgroup. cgroup.procs contains a list of thread groups, represented by their TGIDs, that are running in a cgroup. The list of TGIDs is not guaranteed to be ordered or unique (that is, it may contain duplicate entries). Writing a TGID into the cgroup.procs file of a cgroup moves that thread group into that cgroup. cgroup.event_control along with the cgroup notification API, allows notifications to be sent about a changing status of a cgroup. notify_on_release contains a Boolean value, 1 or 0 , that either enables or disables the execution of the release agent. If the notify_on_release parameter is enabled, the kernel executes the contents of the release_agent file when a cgroup no longer contains any tasks (that is, the cgroup's tasks file contained some PIDs and those PIDs were removed, leaving the file empty). A path to the empty cgroup is provided as an argument to the release agent. The default value of the notify_on_release parameter in the root cgroup is 0 . All non-root cgroups inherit the value in notify_on_release from their parent cgroup. release_agent (present in the root cgroup only) contains a command to be executed when a " notify on release " is triggered. Once a cgroup is emptied of all processes, and the notify_on_release flag is enabled, the kernel runs the command in the release_agent file and supplies it with a relative path (relative to the root cgroup) to the emptied cgroup as an argument. The release agent can be used, for example, to automatically remove empty cgroups; for more information, see Example 3.4, "Automatically removing empty cgroups" . Example 3.4. Automatically removing empty cgroups Follow these steps to configure automatic removal of any emptied cgroup from the cpu cgroup: Create a shell script that removes empty cpu cgroups, place it in, for example, /usr/local/bin , and make it executable. The USD1 variable contains a relative path to the emptied cgroup. In the cpu cgroup, enable the notify_on_release flag: In the cpu cgroup, specify a release agent to be used: Test your configuration to make sure emptied cgroups are properly removed: | [
"~]# cat /usr/local/bin/remove-empty-cpu-cgroup.sh #!/bin/sh rmdir /cgroup/cpu/USD1 ~]# chmod +x /usr/local/bin/remove-empty-cpu-cgroup.sh",
"~]# echo 1 > /cgroup/cpu/notify_on_release",
"~]# echo \"/usr/local/bin/remove-empty-cpu-cgroup.sh\" > /cgroup/cpu/release_agent",
"cpu]# pwd; ls /cgroup/cpu cgroup.event_control cgroup.procs cpu.cfs_period_us cpu.cfs_quota_us cpu.rt_period_us cpu.rt_runtime_us cpu.shares cpu.stat libvirt notify_on_release release_agent tasks cpu]# cat notify_on_release 1 cpu]# cat release_agent /usr/local/bin/remove-empty-cpu-cgroup.sh cpu]# mkdir blue; ls blue cgroup.event_control cgroup.procs cpu.cfs_period_us cpu.cfs_quota_us cpu.rt_period_us cpu.rt_runtime_us cpu.shares cpu.stat libvirt notify_on_release release_agent tasks cpu]# cat blue/notify_on_release 1 cpu]# cgexec -g cpu:blue dd if=/dev/zero of=/dev/null bs=1024k & [1] 8623 cpu]# cat blue/tasks 8623 cpu]# kill -9 8623 cpu]# ls cgroup.event_control cgroup.procs cpu.cfs_period_us cpu.cfs_quota_us cpu.rt_period_us cpu.rt_runtime_us cpu.shares cpu.stat libvirt notify_on_release release_agent tasks"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/resource_management_guide/sec-common_tunable_parameters |
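As a small, hypothetical complement to the example above, moving a running process into a cgroup uses the tasks and cgroup.procs files described earlier; the paths assume the cgroup v1 hierarchy used in this chapter and an existing cgroup named blue:

# Move only the current task (thread) into the blue cgroup
echo $$ > /cgroup/cpu/blue/tasks
# Move the whole thread group of the current shell instead
echo $$ > /cgroup/cpu/blue/cgroup.procs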
Chapter 1. Machine APIs | Chapter 1. Machine APIs 1.1. ContainerRuntimeConfig [machineconfiguration.openshift.io/v1] Description ContainerRuntimeConfig describes a customized Container Runtime configuration. Type object 1.2. ControllerConfig [machineconfiguration.openshift.io/v1] Description ControllerConfig describes configuration for MachineConfigController. This is currently only used to drive the MachineConfig objects generated by the TemplateController. Type object 1.3. ControlPlaneMachineSet [machine.openshift.io/v1] Description ControlPlaneMachineSet ensures that a specified number of control plane machine replicas are running at any given time. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. KubeletConfig [machineconfiguration.openshift.io/v1] Description KubeletConfig describes a customized Kubelet configuration. Type object 1.5. MachineConfigPool [machineconfiguration.openshift.io/v1] Description MachineConfigPool describes a pool of MachineConfigs. Type object 1.6. MachineConfig [machineconfiguration.openshift.io/v1] Description MachineConfig defines the configuration for a machine Type object 1.7. MachineHealthCheck [machine.openshift.io/v1beta1] Description MachineHealthCheck is the Schema for the machinehealthchecks API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.8. Machine [machine.openshift.io/v1beta1] Description Machine is the Schema for the machines API Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object 1.9. MachineSet [machine.openshift.io/v1beta1] Description MachineSet ensures that a specified number of machines replicas are running at any given time. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/machine_apis/machine-apis |
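For orientation, a minimal, illustrative MachineHealthCheck object using the machine.openshift.io/v1beta1 API listed above; the name, label selector, and thresholds are example values, not defaults:

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-worker-healthcheck
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
  unhealthyConditions:
    - type: Ready
      status: "False"
      timeout: 300s
  maxUnhealthy: "40%"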
Chapter 14. Changing resources for the OpenShift Data Foundation components | Chapter 14. Changing resources for the OpenShift Data Foundation components When you install OpenShift Data Foundation, it comes with pre-defined resources that the OpenShift Data Foundation pods can consume. In some situations with higher I/O load, it might be required to increase these limits. To change the CPU and memory resources on the rook-ceph pods, see Section 14.1, "Changing the CPU and memory resources on the rook-ceph pods" . To tune the resources for the Multicloud Object Gateway (MCG), see Section 14.2, "Tuning the resources for the MCG" . 14.1. Changing the CPU and memory resources on the rook-ceph pods When you install OpenShift Data Foundation, it comes with pre-defined CPU and memory resources for the rook-ceph pods. You can manually increase these values according to the requirements. You can change the CPU and memory resources on the following pods: mgr mds rgw The following example illustrates how to change the CPU and memory resources on the rook-ceph pods. In this example, the existing MDS pod values of cpu and memory are increased from 1 and 4Gi to 2 and 8Gi respectively. Edit the storage cluster: <storagecluster_name> Specify the name of the storage cluster. For example: Add the following lines to the storage cluster Custom Resource (CR): Save the changes and exit the editor. Alternatively, run the oc patch command to change the CPU and memory value of the mds pod: <storagecluster_name> Specify the name of the storage cluster. For example: 14.2. Tuning the resources for the MCG The default configuration for the Multicloud Object Gateway (MCG) is optimized for low resource consumption and not performance. For more information on how to tune the resources for the MCG, see the Red Hat Knowledgebase solution Performance tuning guide for Multicloud Object Gateway (NooBaa) . | [
"oc edit storagecluster -n openshift-storage <storagecluster_name>",
"oc edit storagecluster -n openshift-storage ocs-storagecluster",
"spec: resources: mds: limits: cpu: 2 memory: 8Gi requests: cpu: 2 memory: 8Gi",
"oc patch -n openshift-storage storagecluster <storagecluster_name> --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}}'",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch ' {\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}} '"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/troubleshooting_openshift_data_foundation/changing-resources-for-the-openshift-data-foundation-components_rhodf |
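If the same change has to be applied to several clusters, the oc patch command shown in section 14.1 above can be scripted. The sketch below is a hypothetical convenience wrapper around the documented command; the namespace and resource kind are taken from the example above, and the CPU and memory values are parameters rather than recommendations.

import json
import subprocess

def patch_mds_resources(storagecluster, cpu="2", memory="8Gi",
                        namespace="openshift-storage"):
    """Run the documented `oc patch` merge patch for the mds resources."""
    patch = {
        "spec": {
            "resources": {
                "mds": {
                    "limits":   {"cpu": cpu, "memory": memory},
                    "requests": {"cpu": cpu, "memory": memory},
                }
            }
        }
    }
    subprocess.run(
        ["oc", "patch", "-n", namespace, "storagecluster", storagecluster,
         "--type", "merge", "--patch", json.dumps(patch)],
        check=True,
    )

if __name__ == "__main__":
    patch_mds_resources("ocs-storagecluster")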
Chapter 53. JAXB | Chapter 53. JAXB Since Camel 1.0 JAXB is a Data Format which uses the JAXB XML marshalling standard to unmarshal an XML payload into Java objects or to marshal Java objects into an XML payload. 53.1. Dependencies When using jaxb with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jaxb-starter</artifactId> </dependency> 53.2. Options The JAXB dataformat supports 20 options, which are listed below. Name Default Java Type Description contextPath String Required Package name where your JAXB classes are located. contextPathIsClassName false Boolean This can be set to true to mark that the contextPath is referring to a classname and not a package name. schema String To validate against an existing schema. Your can use the prefix classpath:, file: or http: to specify how the resource should be resolved. You can separate multiple schema files by using the ',' character. schemaSeverityLevel 0 Enum Sets the schema severity level to use when validating against a schema. This level determines the minimum severity error that triggers JAXB to stop continue parsing. The default value of 0 (warning) means that any error (warning, error or fatal error) will trigger JAXB to stop. There are the following three levels: 0=warning, 1=error, 2=fatal error. Enum values: 0 1 2 prettyPrint false Boolean To enable pretty printing output nicely formatted. Is by default false. objectFactory false Boolean Whether to allow using ObjectFactory classes to create the POJO classes during marshalling. This only applies to POJO classes that has not been annotated with JAXB and providing jaxb.index descriptor files. ignoreJAXBElement false Boolean Whether to ignore JAXBElement elements - only needed to be set to false in very special use-cases. mustBeJAXBElement false Boolean Whether marhsalling must be java objects with JAXB annotations. And if not then it fails. This option can be set to false to relax that, such as when the data is already in XML format. filterNonXmlChars false Boolean To ignore non xml characheters and replace them with an empty space. encoding String To overrule and use a specific encoding. fragment false Boolean To turn on marshalling XML fragment trees. By default JAXB looks for XmlRootElement annotation on given class to operate on whole XML tree. This is useful but not always - sometimes generated code does not have XmlRootElement annotation, sometimes you need unmarshall only part of tree. In that case you can use partial unmarshalling. To enable this behaviours you need set property partClass. Camel will pass this class to JAXB's unmarshaler. partClass String Name of class used for fragment parsing. See more details at the fragment option. partNamespace String XML namespace to use for fragment parsing. See more details at the fragment option. namespacePrefixRef String When marshalling using JAXB or SOAP then the JAXB implementation will automatic assign namespace prefixes, such as ns2, ns3, ns4 etc. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. xmlStreamWriterWrapper String To use a custom xml stream writer. schemaLocation String To define the location of the schema. noNamespaceSchemaLocation String To define the location of the namespaceless schema. 
jaxbProviderProperties String Refers to a custom java.util.Map to lookup in the registry containing custom JAXB provider properties to be used with the JAXB marshaller. contentTypeHeader true Boolean Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. accessExternalSchemaProtocols false String Only in use if schema validation has been enabled. Restrict access to the protocols specified for external reference set by the schemaLocation attribute, Import and Include element. Examples of protocols are file, http, jar:file. false or none to deny all access to external references; a specific protocol, such as file, to give permission to only the protocol; the keyword all to grant permission to all protocols. 53.3. Using the Java DSL The following example uses a named DataFormat of jaxb which is configured with a Java package name to initialize the JAXBContext . DataFormat jaxb = new JaxbDataFormat("com.acme.model"); from("activemq:My.Queue"). unmarshal(jaxb). to("mqseries:Another.Queue"); You can use a named reference to a data format which can then be defined in your Registry such as via your Spring XML file. from("activemq:My.Queue"). unmarshal("myJaxbDataType"). to("mqseries:Another.Queue"); 53.4. Using Spring XML The following example shows how to configure the JaxbDataFormat and use it in multiple routes. <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd"> <bean id="myJaxb" class="org.apache.camel.converter.jaxb.JaxbDataFormat"> <property name="contextPath" value="org.apache.camel.example"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:start"/> <marshal><custom ref="myJaxb"/></marshal> <to uri="direct:marshalled"/> </route> <route> <from uri="direct:marshalled"/> <unmarshal><custom ref="myJaxb"/></unmarshal> <to uri="mock:result"/> </route> </camelContext> </beans> 53.5. Multiple context paths It is possible to use this data format with more than one context path. You can specify multiple context paths using : as a separator, for example com.mycompany:com.mycompany2 . 53.6. Partial marshalling / unmarshalling JAXB 2 supports marshalling and unmarshalling XML tree fragments. By default JAXB looks for the @XmlRootElement annotation on a given class to operate on whole XML tree. Sometimes the generated code does not have the @XmlRootElement annotation and sometimes you need to unmarshall only part of the tree. In that case you can use partial unmarshalling. To enable this behaviour you need set property partClass on the JaxbDataFormat . Camel will pass this class to the JAXB unmarshaller. If JaxbConstants.JAXB_PART_CLASS is set as one of the exchange headers, its value is used to override the partClass property on the JaxbDataFormat . For marshalling you have to add the partNamespace attribute with the QName of the destination namespace. If JaxbConstants.JAXB_PART_NAMESPACE is set as one of the exchange headers, its value is used to override the partNamespace property on the JaxbDataFormat . 
While setting partNamespace through JaxbConstants.JAXB_PART_NAMESPACE , please note that you need to specify its value in the format {namespaceUri}localPart , as per the example below. .setHeader(JaxbConstants.JAXB_PART_NAMESPACE, constant("{http://www.camel.apache.org/jaxb/example/address/1}address")); 53.7. Fragment JaxbDataFormat has a property named fragment which can set the Marshaller.JAXB_FRAGMENT property on the JAXB Marshaller. If you don't want the JAXB Marshaller to generate the XML declaration, you can set this option to be true . The default value of this property is false . 53.8. Ignoring Non-XML Characters JaxbDataFormat supports ignoring Non-XML Characters . Set the filterNonXmlChars property to true . The JaxbDataFormat will replace any non-XML character with a space character ( " " ) during message marshalling or unmarshalling. You can also set the Exchange property Exchange.FILTER_NON_XML_CHARS . JDK 1.5 JDK 1.6+ Filtering in use StAX API and implementation No Filtering not in use StAX API only No This feature has been tested with Woodstox 3.2.9 and Sun JDK 1.6 StAX implementation. JaxbDataFormat now allows you to customize the XMLStreamWriter used to marshal the stream to XML. Using this configuration, you can add your own stream writer to completely remove, escape, or replace non-XML characters. JaxbDataFormat customWriterFormat = new JaxbDataFormat("org.apache.camel.foo.bar"); customWriterFormat.setXmlStreamWriterWrapper(new TestXmlStreamWriter()); The following example shows using the Spring DSL and also enabling Camel's non-XML filtering: <bean id="testXmlStreamWriterWrapper" class="org.apache.camel.jaxb.TestXmlStreamWriter"/> <jaxb filterNonXmlChars="true" contextPath="org.apache.camel.foo.bar" xmlStreamWriterWrapper="#testXmlStreamWriterWrapper" /> 53.9. Working with the ObjectFactory If you use XJC to create the java class from the schema, you will get an ObjectFactory for your JAXB context. Since the ObjectFactory uses the JAXBElement to hold the reference of the schema and element instance value, JaxbDataformat will ignore the JAXBElement by default and you will get the element instance value instead of the JAXBElement object from the unmarshaled message body. If you want to get the JAXBElement object form the unmarshaled message body, you need to set the JaxbDataFormat ignoreJAXBElement property to be false . 53.10. Setting the encoding You can set the encoding option on the JaxbDataFormat to configure the Marshaller.JAXB_ENCODING encoding property on the JAXB Marshaller. You can setup which encoding to use when you declare the JaxbDataFormat . You can also provide the encoding in the Exchange property Exchange.CHARSET_NAME . This property will override the encoding set on the JaxbDataFormat . 53.11. Controlling namespace prefix mapping When marshalling using JAXB or SOAP then the JAXB implementation will automatic assign namespace prefixes, such as ns2, ns3, ns4 etc. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. For example, in Spring XML we can define a Map with the mapping. In the mapping file below, we map SOAP to use soap as as a prefix. While our custom namespace http://www.mycompany.com/foo/2 is not using any prefix. 
<util:map id="myMap"> <entry key="http://www.w3.org/2003/05/soap-envelope" value="soap"/> <!-- we don't want any prefix for our namespace --> <entry key="http://www.mycompany.com/foo/2" value=""/> </util:map> To use this in JAXB or SOAP data formats you refer to this map, using the namespacePrefixRef attribute as shown below. Then Camel will lookup in the Registry a java.util.Map with the id myMap , which was what we defined above. <marshal> <soap version="1.2" contextPath="com.mycompany.foo" namespacePrefixRef="myMap"/> </marshal> 53.12. Schema validation The JaxbDataFormat supports validation by marshalling and unmarshalling from / to XML. You can use the prefix classpath: , file: or http: to specify how the resource should be resolved. You can separate multiple schema files by using the , character. Note If the XSD schema files import/access other files, then you need to enable file protocol (or others to allow access). Using the Java DSL, you can configure it in the following way: JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchema("classpath:person.xsd,classpath:address.xsd"); jaxbDataFormat.setAccessExternalSchemaProtocols("file"); You can do the same using the XML DSL: <marshal> <jaxb id="jaxb" schema="classpath:person.xsd,classpath:address.xsd" accessExternalSchemaProtocols="file"/> </marshal> 53.13. Schema Location The JaxbDataFormat supports to specify the SchemaLocation when marshalling the XML. Using the Java DSL, you can configure it in the following way: JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchemaLocation("schema/person.xsd"); You can do the same using the XML DSL: <marshal> <jaxb id="jaxb" schemaLocation="schema/person.xsd"/> </marshal> 53.14. Marshal data that is already XML The JAXB marshaller requires that the message body is JAXB compatible, e.g it is a JAXBElement , a java instance that has JAXB annotations, or extends JAXBElement . There can be situations where the message body is already in XML, e.g from a String type. JaxbDataFormat has an option named mustBeJAXBElement which you can set to false to relax this check and have the JAXB marshaller only attempt marshalling on JAXBElement ( javax.xml.bind.JAXBIntrospector#isElement returns true ). In those situations the marshaller will fallback to marshal the message body as-is. 53.15. Spring Boot Auto-Configuration The component supports 21 options, which are listed below. Name Description Default Type camel.dataformat.jaxb.access-external-schema-protocols Only in use if schema validation has been enabled. Restrict access to the protocols specified for external reference set by the schemaLocation attribute, Import and Include element. Examples of protocols are file, http, jar:file. false or none to deny all access to external references; a specific protocol, such as file, to give permission to only the protocol; the keyword all to grant permission to all protocols. false String camel.dataformat.jaxb.content-type-header Whether the data format should set the Content-Type header with the type from the data format. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSON. true Boolean camel.dataformat.jaxb.context-path Package name where your JAXB classes are located. 
String camel.dataformat.jaxb.context-path-is-class-name This can be set to true to mark that the contextPath is referring to a classname and not a package name. false Boolean camel.dataformat.jaxb.enabled Whether to enable auto configuration of the jaxb data format. This is enabled by default. Boolean camel.dataformat.jaxb.encoding To overrule and use a specific encoding. String camel.dataformat.jaxb.filter-non-xml-chars To ignore non xml characheters and replace them with an empty space. false Boolean camel.dataformat.jaxb.fragment To turn on marshalling XML fragment trees. By default JAXB looks for XmlRootElement annotation on given class to operate on whole XML tree. This is useful but not always - sometimes generated code does not have XmlRootElement annotation, sometimes you need unmarshall only part of tree. In that case you can use partial unmarshalling. To enable this behaviours you need set property partClass. Camel will pass this class to JAXB's unmarshaler. false Boolean camel.dataformat.jaxb.ignore-j-a-x-b-element Whether to ignore JAXBElement elements - only needed to be set to false in very special use-cases. false Boolean camel.dataformat.jaxb.jaxb-provider-properties Refers to a custom java.util.Map to lookup in the registry containing custom JAXB provider properties to be used with the JAXB marshaller. String camel.dataformat.jaxb.must-be-j-a-x-b-element Whether marhsalling must be java objects with JAXB annotations. And if not then it fails. This option can be set to false to relax that, such as when the data is already in XML format. false Boolean camel.dataformat.jaxb.namespace-prefix-ref When marshalling using JAXB or SOAP then the JAXB implementation will automatic assign namespace prefixes, such as ns2, ns3, ns4 etc. To control this mapping, Camel allows you to refer to a map which contains the desired mapping. String camel.dataformat.jaxb.no-namespace-schema-location To define the location of the namespaceless schema. String camel.dataformat.jaxb.object-factory Whether to allow using ObjectFactory classes to create the POJO classes during marshalling. This only applies to POJO classes that has not been annotated with JAXB and providing jaxb.index descriptor files. false Boolean camel.dataformat.jaxb.part-class Name of class used for fragment parsing. See more details at the fragment option. String camel.dataformat.jaxb.part-namespace XML namespace to use for fragment parsing. See more details at the fragment option. String camel.dataformat.jaxb.pretty-print To enable pretty printing output nicely formatted. Is by default false. false Boolean camel.dataformat.jaxb.schema To validate against an existing schema. Your can use the prefix classpath:, file: or http: to specify how the resource should be resolved. You can separate multiple schema files by using the ',' character. String camel.dataformat.jaxb.schema-location To define the location of the schema. String camel.dataformat.jaxb.schema-severity-level Sets the schema severity level to use when validating against a schema. This level determines the minimum severity error that triggers JAXB to stop continue parsing. The default value of 0 (warning) means that any error (warning, error or fatal error) will trigger JAXB to stop. There are the following three levels: 0=warning, 1=error, 2=fatal error. 0 Integer camel.dataformat.jaxb.xml-stream-writer-wrapper To use a custom xml stream writer. String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jaxb-starter</artifactId> </dependency>",
"DataFormat jaxb = new JaxbDataFormat(\"com.acme.model\"); from(\"activemq:My.Queue\"). unmarshal(jaxb). to(\"mqseries:Another.Queue\");",
"from(\"activemq:My.Queue\"). unmarshal(\"myJaxbDataType\"). to(\"mqseries:Another.Queue\");",
"<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <bean id=\"myJaxb\" class=\"org.apache.camel.converter.jaxb.JaxbDataFormat\"> <property name=\"contextPath\" value=\"org.apache.camel.example\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:start\"/> <marshal><custom ref=\"myJaxb\"/></marshal> <to uri=\"direct:marshalled\"/> </route> <route> <from uri=\"direct:marshalled\"/> <unmarshal><custom ref=\"myJaxb\"/></unmarshal> <to uri=\"mock:result\"/> </route> </camelContext> </beans>",
".setHeader(JaxbConstants.JAXB_PART_NAMESPACE, constant(\"{http://www.camel.apache.org/jaxb/example/address/1}address\"));",
"JaxbDataFormat customWriterFormat = new JaxbDataFormat(\"org.apache.camel.foo.bar\"); customWriterFormat.setXmlStreamWriterWrapper(new TestXmlStreamWriter());",
"<bean id=\"testXmlStreamWriterWrapper\" class=\"org.apache.camel.jaxb.TestXmlStreamWriter\"/> <jaxb filterNonXmlChars=\"true\" contextPath=\"org.apache.camel.foo.bar\" xmlStreamWriterWrapper=\"#testXmlStreamWriterWrapper\" />",
"<util:map id=\"myMap\"> <entry key=\"http://www.w3.org/2003/05/soap-envelope\" value=\"soap\"/> <!-- we don't want any prefix for our namespace --> <entry key=\"http://www.mycompany.com/foo/2\" value=\"\"/> </util:map>",
"<marshal> <soap version=\"1.2\" contextPath=\"com.mycompany.foo\" namespacePrefixRef=\"myMap\"/> </marshal>",
"JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchema(\"classpath:person.xsd,classpath:address.xsd\"); jaxbDataFormat.setAccessExternalSchemaProtocols(\"file\");",
"<marshal> <jaxb id=\"jaxb\" schema=\"classpath:person.xsd,classpath:address.xsd\" accessExternalSchemaProtocols=\"file\"/> </marshal>",
"JaxbDataFormat jaxbDataFormat = new JaxbDataFormat(); jaxbDataFormat.setContextPath(Person.class.getPackage().getName()); jaxbDataFormat.setSchemaLocation(\"schema/person.xsd\");",
"<marshal> <jaxb id=\"jaxb\" schemaLocation=\"schema/person.xsd\"/> </marshal>"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jaxb-dataformat-component-starter |
Chapter 1. Introduction to IdM API | Chapter 1. Introduction to IdM API You can access the services of Red Hat Identity Management with command-line and web-based interfaces. With the Identity Management API, you can interact with Identity Management services through third-party applications and scripts that are written in Python. The Identity Management API has a JavaScript Object Notation Remote Procedure Call (JSON-RPC) interface. To automate various important tasks, access the Identity Management API through Python. For example, you can retrieve metadata from the server that lists all available commands. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_idm_api/con_introduction-to-idm-api_using-idm-api
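As a rough illustration of the Python access mentioned above, the sketch below uses the ipalib module that ships with the IdM client packages. Treat it as an assumption-laden example rather than a reference: it presumes an enrolled client with a valid Kerberos ticket, and user_find is just one of the many commands exposed over the JSON-RPC interface.

from ipalib import api

def main():
    # Initialize the API in client context and load the generated plugins and metadata.
    api.bootstrap(context="client")
    api.finalize()
    # Connect to the IdM server over JSON-RPC (uses the current Kerberos credentials).
    api.Backend.rpcclient.connect()
    try:
        # Call a command; the result mirrors the JSON-RPC response structure.
        result = api.Command.user_find()
        print("Users found:", result["count"])
        for entry in result["result"]:
            print(entry["uid"][0])
    finally:
        api.Backend.rpcclient.disconnect()

if __name__ == "__main__":
    main()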
function::stack_used | function::stack_used Name function::stack_used - Returns the amount of kernel stack used Synopsis Arguments None Description This function determines how many bytes are currently used in the kernel stack. | [
"stack_used:long()"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stack-used |
probe::socket.create | probe::socket.create Name probe::socket.create - Creation of a socket Synopsis socket.create Values type Socket type value name Name of this probe protocol Protocol value family Protocol family value requester Requested by user process or the kernel (1 = kernel, 0 = user) Context The requester (see requester variable) Description Fires at the beginning of creating a socket. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-socket-create |
Introduction to plugins | Introduction to plugins Red Hat Developer Hub 1.3 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/introduction_to_plugins/index |
Chapter 7. Configuring the systems and running tests using Cockpit | Chapter 7. Configuring the systems and running tests using Cockpit To run the certification tests using Cockpit, you must first set up the Cockpit, add systems, upload the test plan to Cockpit. 7.1. Setting up the Cockpit server Cockpit is a RHEL tool that lets you change the configuration of your systems as well as monitor their resources from a user-friendly web-based interface. The Cockpit uses RHCert CLI locally and through SSH to other hosts. Note You must set up Cockpit on the same system as the test host. Ensure that the Cockpit can access both the Controller and Compute nodes. For more information on installing and configuring Cockpit, see Getting Started using the RHEL web console on RHEL 8, Getting Started using the RHEL web console on RHEL 9 and Introducing Cockpit . Prerequisites You have installed the Cockpit plugin on the test host. You have enabled the Cockpit service. Procedure Log in to the test host. Install the Cockpit RPM provided by the Red Hat Certification team. You must run Cockpit on port 9090. Verification Log in to the Cockpit web application in your browser, http://<Cockpit_system_IP>:9090/ and verify the addition of Tools Red Hat Certification tab on the left panel. 7.2. Adding the test systems to Cockpit Adding the test host, Controller, and Compute nodes to Cockpit establishes a connection between the test host and each node. Note Repeat the following process for adding each node. Prerequisites You have the IP address of the test host, Controller, and Compute nodes. Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser to launch the Cockpit web application. Enter the username and password, and then click Login . Click the down-arrow on the logged-in cockpit user name-> Add new host . The dialog box displays. In the Host field, enter the IP address or hostname of the system. In the User name field, enter from one of the three applicable accounts: Note Enter "tripleo-admin" if you use RHOSP 17.1 or later. Enter "heat-admin" if you use RHOSP 17 or earlier. Enter "root" if you have configured root as the ssh user for Controller and Compute nodes. Click Accept key and connect . Optional: Select the predefined color or select a new color of your choice for the host added. Click Add . Verification On the left panel, click Tools -> Red Hat Certification . Verify that the system you just added displays under the Hosts section on the right. 7.3. Getting authorization on the Red Hat SSO network Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. On the Cockpit homepage, click Authorize , to establish connectivity with the Red Hat system. The Log in to your Red Hat account page displays. Enter your credentials and click . The Grant access to rhcert-cwe page displays. Click Grant access . A confirmation message displays a successful device login. You are now connected to the Cockpit web application. 7.4. Downloading test plans in Cockpit from Red Hat certification portal For Non-authorized or limited access users: To download the test plan, see Downloading the test plan from Red Hat Certification portal . For authorized users: Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . 
Select Tools Red Hat Certification in the left panel. Click the Test Plans tab. A list of Recent Certification Support Cases will appear. Click Download Test Plan . A message displays confirming the successful addition of the test plan. The downloaded test plan will be listed under the File Name of the Test Plan Files section. 7.5. Using the test plan to provision the Controller and Compute nodes for testing Provisioning the Controller and Compute nodes through the test host performs several operations, such as installing the required packages on the two nodes based on the certification type and creating a final test plan to run. The final test plan is generated based on the test roles defined for each node and has a list of common tests taken from both the test plan provided by Red Hat and tests generated on discovering the system requirements. For instance, required OpenStack packages will be installed if the test plan is designed for certifying an OpenStack plugin. Prerequisites You have downloaded the test plan provided by Red Hat . Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools -> Red Hat Certification in the left navigation panel. Click the Hosts tab to see the list of systems added. Click the Test Plans tab and click Upload . In the Upload Test Plan dialog box, click Upload , and then select the new test plan .xml file saved on the test host. Click Upload to Host . A successful upload message displays along with the file uploaded. Optionally, if you want to reuse the previously uploaded test plan, then select it again to reupload. Note During the certification process, if you receive a redesigned test plan for the ongoing product certification, then you can upload it following the step. However, you must run rhcert-clean all in the Terminal tab before proceeding. Click Provision beside the test plan you want to use. In the Role field, enter the IP address of the Controller node, and from the Host drop-down menu, select Controller . In the Role field, enter the IP address of the Compute node, and from the Host drop-down menu, select Compute . In the Provisioning Host field, enter the IP address of the test host. Select the Run with sudo check box. Click Provision . The terminal is displayed. 7.6. Running the certification tests using Cockpit Note The tests run in the foreground on the Controller node, they are interactive and will prompt you for inputs, whereas the tests run in the background on the Compute node and are non-interactive. Prerequisites You have prepared the Controller and Compute nodes Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and click Login . Select Tools Red Hat Certification in the left panel. Click the Hosts tab and click on the host on which you want to run the tests, then click the Terminal tab. Click Run . The rhcert-run command will appear and run on the Terminal window. When prompted, choose whether to run each test by typing yes or no . You can also run particular tests from the list by typing select . 7.7. Reviewing and downloading the test results file Procedure Enter http:// <Cockpit_system_IP> :9090/ in your browser address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. 
Click the Result Files tab to view the test results generated. Optional: Click Preview to view the results of each test. Click Download beside the result files. By default, the result file is saved as /var/rhcert/save/rhcert-multi-openstack-<certification ID>-<timestamp>.xml . 7.8. Submitting the test results from Cockpit to the Red Hat Certification Portal Procedure Enter http://<Cockpit_system_IP>:9090/ in your browser's address bar to launch the Cockpit web application. Enter the username and password, and then click Login . Select Tools Red Hat Certification in the left panel. Click the Result Files tab and select the case number from the displayed list. For authorized users, click Submit . A message displays confirming the successful upload of the test result file. For non-authorized users, see Uploading the results file of the executed test plan to Red Hat Certification portal . The test result file of the executed test plan will be uploaded to the Red Hat Certification portal. 7.9. Uploading the test results file to Red Hat Certification portal Prerequisites You have downloaded the test results file from the test host. Procedure Log in to Red Hat Certification portal . On the homepage, enter the product case number in the search bar. Select the case number from the list that is displayed. On the Summary tab, under the Files section, click Upload . Next steps Red Hat will review the results file you submitted and suggest the next steps. For more information, visit Red Hat Certification portal . | [
"yum install redhat-certification-cockpit"
] | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_workflow_guide/assembly_rhosp-wf-configuring-the-systems-and-running-tests-using-cockpit_rhosp-wf-setting-test-environment |
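The result file path format mentioned in section 7.7 above makes it easy to locate the newest result archive from a script. The following Python sketch is purely a hypothetical helper built on that documented path pattern (/var/rhcert/save/rhcert-multi-openstack-<certification ID>-<timestamp>.xml); adjust the glob if your product uses a different file name prefix.

import glob
import os

SAVE_DIR = "/var/rhcert/save"                       # documented default save location

def latest_result_file(pattern="rhcert-multi-openstack-*.xml"):
    """Return the most recently modified certification result file, if any."""
    candidates = glob.glob(os.path.join(SAVE_DIR, pattern))
    if not candidates:
        return None
    return max(candidates, key=os.path.getmtime)    # newest by modification time

if __name__ == "__main__":
    path = latest_result_file()
    print(path or "No result files found yet.")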
Chapter 9. TokenRequest [authentication.k8s.io/v1] | Chapter 9. TokenRequest [authentication.k8s.io/v1] Description TokenRequest requests a token for a given service account. Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object TokenRequestSpec contains client provided parameters of a token request. status object TokenRequestStatus is the result of a token request. 9.1.1. .spec Description TokenRequestSpec contains client provided parameters of a token request. Type object Required audiences Property Type Description audiences array (string) Audiences are the intendend audiences of the token. A recipient of a token must identify themself with an identifier in the list of audiences of the token, and otherwise should reject the token. A token issued for multiple audiences may be used to authenticate against any of the audiences listed but implies a high degree of trust between the target audiences. boundObjectRef object BoundObjectReference is a reference to an object that a token is bound to. expirationSeconds integer ExpirationSeconds is the requested duration of validity of the request. The token issuer may return a token with a different validity duration so a client needs to check the 'expiration' field in a response. 9.1.2. .spec.boundObjectRef Description BoundObjectReference is a reference to an object that a token is bound to. Type object Property Type Description apiVersion string API version of the referent. kind string Kind of the referent. Valid kinds are 'Pod' and 'Secret'. name string Name of the referent. uid string UID of the referent. 9.1.3. .status Description TokenRequestStatus is the result of a token request. Type object Required token expirationTimestamp Property Type Description expirationTimestamp Time ExpirationTimestamp is the time of expiration of the returned token. token string Token is the opaque bearer token. 9.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token POST : create token of a ServiceAccount 9.2.1. /api/v1/namespaces/{namespace}/serviceaccounts/{name}/token Table 9.1. Global path parameters Parameter Type Description name string name of the TokenRequest Table 9.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create token of a ServiceAccount Table 9.3. Body parameters Parameter Type Description body TokenRequest schema Table 9.4. HTTP responses HTTP code Response body 200 - OK TokenRequest schema 201 - Created TokenRequest schema 202 - Accepted TokenRequest schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authorization_apis/tokenrequest-authentication-k8s-io-v1
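The endpoint in section 9.2 above can be exercised directly with an HTTPS client. The Python sketch below is a hedged example: the API server URL, service account name, namespace, bearer token, and CA bundle path are placeholders you must supply, and the request body only uses fields documented in the schema above.

import requests

API_SERVER = "https://api.example.com:6443"      # placeholder API server URL
TOKEN = "sha256~REPLACE_ME"                      # credentials of a user allowed to create tokens
NAMESPACE = "default"                            # placeholder namespace
SERVICE_ACCOUNT = "builder"                      # placeholder service account name

def request_sa_token(audiences, expiration_seconds=3600):
    """POST a TokenRequest for a service account and return the issued token."""
    url = (f"{API_SERVER}/api/v1/namespaces/{NAMESPACE}"
           f"/serviceaccounts/{SERVICE_ACCOUNT}/token")
    body = {
        "apiVersion": "authentication.k8s.io/v1",
        "kind": "TokenRequest",
        "spec": {
            "audiences": audiences,
            "expirationSeconds": expiration_seconds,
        },
    }
    resp = requests.post(url, json=body,
                         headers={"Authorization": f"Bearer {TOKEN}"},
                         verify="/path/to/ca.crt")   # cluster CA bundle; adjust as needed
    resp.raise_for_status()
    status = resp.json()["status"]
    return status["token"], status["expirationTimestamp"]

if __name__ == "__main__":
    token, expires = request_sa_token(["https://kubernetes.default.svc"])
    print("token expires at", expires)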
3.8. Obtaining Information about Control Groups | 3.8. Obtaining Information about Control Groups The libcgroup-tools package contains several utilities for obtaining information about controllers, control groups, and their parameters. Listing Controllers To find the controllers that are available in your kernel and information on how they are mounted together to hierarchies, execute: Alternatively, to find the mount points of particular subsystems, execute the following command: Here controllers stands for a list of the subsystems in which you are interested. Note that the lssubsys -m command returns only the top-level mount point per each hierarchy. Finding Control Groups To list the cgroups on a system, execute as root : To restrict the output to a specific hierarchy, specify a controller and a path in the format controller : path . For example: The above command lists only subgroups of the adminusers cgroup in the hierarchy to which the cpuset controller is attached. Displaying Parameters of Control Groups To display the parameters of specific cgroups, run: where parameter is a pseudo-file that contains values for a controller, and list_of_cgroups is a list of cgroups separated with spaces. If you do not know the names of the actual parameters, use a command similar to: | [
"~]USD cat /proc/cgroups",
"~]USD lssubsys -m controllers",
"~]# lscgroup",
"~]USD lscgroup cpuset:adminusers",
"~]USD cgget -r parameter list_of_cgroups",
"~]USD cgget -g cpuset /"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/resource_management_guide/sec-obtaining_information_about_control_groups-libcgroup |
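The /proc/cgroups listing shown in the section above is easy to post-process. The Python sketch below is an illustrative parser, written against the commonly documented column layout of that file (subsystem name, hierarchy ID, number of cgroups, enabled flag); treat the column order as an assumption and cross-check it on your system.

def read_proc_cgroups(path="/proc/cgroups"):
    """Parse /proc/cgroups into a list of controller dictionaries."""
    controllers = []
    with open(path) as f:
        for line in f:
            if line.startswith("#") or not line.strip():
                continue                     # skip the header and blank lines
            name, hierarchy, num_cgroups, enabled = line.split()
            controllers.append({
                "subsystem": name,
                "hierarchy": int(hierarchy),
                "num_cgroups": int(num_cgroups),
                "enabled": enabled == "1",
            })
    return controllers

if __name__ == "__main__":
    for ctrl in read_proc_cgroups():
        state = "enabled" if ctrl["enabled"] else "disabled"
        print(f"{ctrl['subsystem']:12} hierarchy={ctrl['hierarchy']:<3} "
              f"cgroups={ctrl['num_cgroups']:<5} {state}")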
Red Hat Quay API Guide | Red Hat Quay API Guide Red Hat Quay 3.10 Red Hat Quay API Guide Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/red_hat_quay_api_guide/index |
Chapter 30. Configuring the cluster-wide proxy | Chapter 30. Configuring the cluster-wide proxy Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure OpenShift Container Platform to use a proxy by modifying the Proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. After you enable a cluster-wide egress proxy for your cluster on a supported platform, Red Hat Enterprise Linux CoreOS (RHCOS) populates the status.noProxy parameter with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your install-config.yaml file that exists on the supported platform. Note As a postinstallation task, you can change the networking.clusterNetwork[].cidr value, but not the networking.machineNetwork[].cidr and the networking.serviceNetwork[] values. For more information, see "Configuring the cluster network range". For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the status.noProxy parameter is also populated with the instance metadata endpoint, 169.254.169.254 . Example of values added to the status: segment of a Proxy object by RHCOS apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster # ... networking: clusterNetwork: 1 - cidr: <ip_address_from_cidr> hostPrefix: 23 network type: OVNKubernetes machineNetwork: 2 - cidr: <ip_address_from_cidr> serviceNetwork: 3 - 172.30.0.0/16 # ... status: noProxy: - localhost - .cluster.local - .svc - 127.0.0.1 - <api_server_internal_url> 4 # ... 1 Specify IP address blocks from which pod IP addresses are allocated. The default value is 10.128.0.0/14 with a host prefix of /23 . 2 Specify the IP address blocks for machines. The default value is 10.0.0.0/16 . 3 Specify IP address block for services. The default value is 172.30.0.0/16 . 4 You can find the URL of the internal API server by running the oc get infrastructures.config.openshift.io cluster -o jsonpath='{.status.etcdDiscoveryDomain}' command. Important If your installation type does not include setting the networking.machineNetwork[].cidr field, you must include the machine IP addresses manually in the .status.noProxy field to make sure that the traffic between nodes can bypass the proxy. 30.1. Prerequisites Review the sites that your cluster requires access to and determine whether any of them must bypass the proxy. By default, all cluster system egress traffic is proxied, including calls to the cloud provider API for the cloud that hosts your cluster. The system-wide proxy affects system components only, not user workloads. If necessary, add sites to the spec.noProxy parameter of the Proxy object to bypass the proxy. 30.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Warning Enabling the cluster-wide proxy causes the Machine Config Operator (MCO) to trigger node reboot. 
Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses (or other network CIDRs), and port numbers to exclude proxying. Note Port numbers are only supported when configuring IPv6 addresses. Port numbers are not supported when configuring IPv4 addresses. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 30.3. Removing the cluster-wide proxy The cluster Proxy object cannot be deleted. To remove the proxy from a cluster, remove all spec fields from the Proxy object. 
Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Use the oc edit command to modify the proxy: USD oc edit proxy/cluster Remove all spec fields from the Proxy object. For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {} Save the file to apply the changes. 30.4. Verifying the cluster-wide proxy configuration After the cluster-wide proxy configuration is deployed, you can verify that it is working as expected. Follow these steps to check the logs and validate the implementation. Prerequisites You have cluster administrator permissions. You have the OpenShift Container Platform oc CLI tool installed. Procedure Check the proxy configuration status using the oc command: USD oc get proxy/cluster -o yaml Verify the proxy fields in the output to ensure they match your configuration. Specifically, check the spec.httpProxy , spec.httpsProxy , spec.noProxy , and spec.trustedCA fields. Inspect the status of the Proxy object: USD oc get proxy/cluster -o jsonpath='{.status}' Example output { status: httpProxy: http://user:xxx@xxxx:3128 httpsProxy: http://user:xxx@xxxx:3128 noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost,test.no-proxy.com } Check the logs of the Machine Config Operator (MCO) to ensure that the configuration changes were applied successfully: USD oc logs -n openshift-machine-config-operator USD(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name) Look for messages that indicate the proxy settings were applied and the nodes were rebooted if necessary. Verify that system components are using the proxy by checking the logs of a component that makes external requests, such as the Cluster Version Operator (CVO): USD oc logs -n openshift-cluster-version USD(oc get pods -n openshift-cluster-version -l k8s-app=cluster-version-operator -o name) Look for log entries that show that external requests have been routed through the proxy. Additional resources Configuring the cluster network range Understanding the CA Bundle certificate Proxy certificates How is the cluster-wide proxy setting applied to OpenShift Container Platform nodes? | [
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster networking: clusterNetwork: 1 - cidr: <ip_address_from_cidr> hostPrefix: 23 network type: OVNKubernetes machineNetwork: 2 - cidr: <ip_address_from_cidr> serviceNetwork: 3 - 172.30.0.0/16 status: noProxy: - localhost - .cluster.local - .svc - 127.0.0.1 - <api_server_internal_url> 4",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:",
"apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4",
"oc create -f user-ca-bundle.yaml",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5",
"oc edit proxy/cluster",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {}",
"oc get proxy/cluster -o yaml",
"oc get proxy/cluster -o jsonpath='{.status}'",
"{ status: httpProxy: http://user:xxx@xxxx:3128 httpsProxy: http://user:xxx@xxxx:3128 noProxy: .cluster.local,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost,test.no-proxy.com }",
"oc logs -n openshift-machine-config-operator USD(oc get pods -n openshift-machine-config-operator -l k8s-app=machine-config-operator -o name)",
"oc logs -n openshift-cluster-version USD(oc get pods -n openshift-cluster-version -l k8s-app=machine-config-operator -o name)"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/networking/enable-cluster-wide-proxy |
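Before pointing the whole cluster at a proxy, it can help to confirm from a workstation that the proxy actually reaches the readiness endpoints listed in the Proxy object above. The Python sketch below is a hypothetical pre-check, not part of the documented procedure; the proxy URL and endpoint list are placeholders that should mirror your spec.httpProxy, spec.httpsProxy, and spec.readinessEndpoints values.

import requests

PROXY_URL = "http://user:password@proxy.example.com:3128"    # placeholder proxy URL
READINESS_ENDPOINTS = [                                      # mirror spec.readinessEndpoints
    "http://www.google.com",
    "https://www.google.com",
]

def check_endpoint_via_proxy(url, timeout=10):
    """Return True if the URL is reachable through the configured proxy."""
    proxies = {"http": PROXY_URL, "https": PROXY_URL}
    try:
        resp = requests.get(url, proxies=proxies, timeout=timeout)
        return resp.status_code < 500
    except requests.RequestException as exc:
        print(f"{url}: failed through proxy ({exc})")
        return False

if __name__ == "__main__":
    for endpoint in READINESS_ENDPOINTS:
        ok = check_endpoint_via_proxy(endpoint)
        print(f"{endpoint}: {'reachable' if ok else 'NOT reachable'} via proxy")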
Chapter 3. Customizing workspace components | Chapter 3. Customizing workspace components To customize workspace components: Choose a Git repository for your workspace . Use a devfile . Configure an IDE . Add OpenShift Dev Spaces specific attributes in addition to the generic devfile specification. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/user_guide/customizing-workspace-components |
Chapter 6. Tuning resource limits | Chapter 6. Tuning resource limits Directory Server provides several settings to tune the amount of resources an instance uses. You can change them using the command line or the web console. 6.1. Updating resource limit settings using the command line This section provides a general procedure how to change resource limit settings. Adjust the settings according to your environment. Procedure Update the performance settings: # dsconf -D " cn=Directory Manager " ldap://server.example.com config replace parameter_name = value You can set the following parameters: nsslapd-threadnumber : Sets the number of worker threads. nsslapd-maxdescriptors : Sets the maximum number of file descriptors. nsslapd-timelimit : Sets the search time limit. nsslapd-sizelimit : Sets the search size limit. nsslapd-pagedsizelimit : Sets the paged search size limit. nsslapd-idletimeout : Sets the idle connection timeout. nsslapd-ioblocktimeout : Sets the input/output (I/O) block timeout. nsslapd-ndn-cache-enabled : Enables or disables the normalized DN cache. nsslapd-ndn-cache-max-size : Sets the normalized DN cache size, if nsslapd-ndn-cache-enabled is enabled. nsslapd-outbound-ldap-io-timeout : Sets the outbound I/O timeout. nsslapd-maxbersize : Sets the maximum Basic Encoding Rules (BER) size. nsslapd-maxsasliosize : Sets the maximum Simple Authentication and Security Layer (SASL) I/O size. nsslapd-listen-backlog-size : Sets the maximum number of sockets available to receive incoming connections. nsslapd-max-filter-nest-level : Sets the maximum nested filter level. nsslapd-ignore-virtual-attrs : Enables or disables virtual attribute lookups. nsslapd-connection-nocanon : Enables or disables reverse DNS lookups. nsslapd-enable-turbo-mode : Enables or disables the turbo mode feature. For further details, see the descriptions of the parameters in the Configuration and schema reference Restart the instance: # dsctl instance_name restart 6.2. Updating resource limit settings using the web console This section provides a general procedure how to change resource limit settings. Adjust the settings according to your environment. Prerequisites You are logged in to the instance in the web console. Procedure Navigate to Server Tuning & Limits . Update the settings. Optionally, click Show Advanced Settings to display all settings. Click Save Settings . Click Actions Restart Instance . 6.3. Disabling the Transparent Huge Pages feature Transparent Huge Pages (THP) is the memory management feature in Linux that speeds up Translation Lookaside Buffer (TLB) checks on machines with large amounts of memory by using larger memory pages. The THP feature is enabled by default on RHEL systems and supports 2 MB memory pages. The THP feature, however, works best when enabled on large, contiguous allocation patterns and can degrade performance on small, sparse allocation patterns that are typical to the Red Hat Directory Server. The resident memory size of the process might eventually exceed the limit and impact performance or get terminated by the out of memory (OOM) killer. Important To avoid performance and memory consumption problems, disable THP on RHEL systems with Red Hat Directory Server installed. 
Procedure Check the current status of THP: If the transparent huge pages feature is active, disable it either at boot time or run time: Disable the transparent huge pages at boot time by appending the following to the kernel command line in the grub.conf file: Disable transparent huge pages at run time by running the following commands: Additional resources The negative effects of Transparent Huge Pages (THP) on RHDS Configuring Transparent Huge Pages in RHEL 7 . | [
"dsconf -D \" cn=Directory Manager \" ldap://server.example.com config replace parameter_name = value",
"dsctl instance_name restart",
"cat /sys/kernel/mm/transparent_hugepage/enabled",
"transparent_hugepage=never",
"echo never > /sys/kernel/mm/transparent_hugepage/enabled echo never > /sys/kernel/mm/transparent_hugepage/defrag"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/tuning_the_performance_of_red_hat_directory_server/assembly_tuning-resource-limits_assembly_improving-the-performance-of-views |
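Because the THP state matters for Directory Server performance, it is convenient to check it programmatically on every host. The Python sketch below is an illustrative check built on the same sysfs file used in the procedure above; the bracketed-value format of /sys/kernel/mm/transparent_hugepage/enabled is the usual kernel convention, but verify it on your kernel version.

THP_ENABLED = "/sys/kernel/mm/transparent_hugepage/enabled"

def current_thp_mode(path=THP_ENABLED):
    """Return the active THP mode, e.g. 'always', 'madvise' or 'never'.

    The kernel marks the active value with square brackets, for example:
    'always madvise [never]'.
    """
    with open(path) as f:
        content = f.read().strip()
    for token in content.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    return content      # fall back to the raw content if the format is unexpected

if __name__ == "__main__":
    mode = current_thp_mode()
    if mode == "never":
        print("OK: transparent huge pages are disabled.")
    else:
        print(f"WARNING: THP mode is '{mode}'; disable it on Directory Server hosts.")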
Providing feedback on Red Hat build of OpenJDK documentation | Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.10/proc-providing-feedback-on-redhat-documentation |
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/troubleshooting_openshift_data_foundation/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 25. Extending a Stratis volume with additional block devices | Chapter 25. Extending a Stratis volume with additional block devices You can attach additional block devices to a Stratis pool to provide more storage capacity for Stratis file systems. You can do it manually or by using the web console. Important Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . 25.1. Adding block devices to a Stratis pool You can add one or more block devices to a Stratis pool. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. The block devices that you are adding to the Stratis pool are not in use and not mounted. The block devices that you are adding to the Stratis pool are at least 1 GiB in size each. Procedure To add one or more block devices to the pool, use: Additional resources stratis(8) man page on your system 25.2. Adding a block device to a Stratis pool by using the web console You can use the web console to add a block device to an existing Stratis pool. You can also add caches as a block device. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The stratisd service is running. A Stratis pool is created. The block devices on which you are creating a Stratis pool are not in use and are not mounted. Each block device on which you are creating a Stratis pool is at least 1 GB. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the Stratis pool to which you want to add a block device. On the Stratis pool page, click Add block devices and select the Tier where you want to add a block device as data or cache. If you are adding the block device to a Stratis pool that is encrypted with a passphrase, enter the passphrase. Under Block devices , select the devices you want to add to the pool. Click Add . | [
"stratis pool add-data my-pool device-1 device-2 device-n"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/extending-a-stratis-volume-with-additional-block-devices |
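To make the Stratis procedure above concrete, the following minimal sketch assumes a pool named my-pool and two unused disks, /dev/sdb and /dev/sdc (hypothetical device names; substitute your own and verify they are not mounted or in use):

stratis pool add-data my-pool /dev/sdb /dev/sdc
stratis blockdev list my-pool

The second command lists the block devices now backing the pool. If a cache tier is wanted instead of additional data capacity, the stratis pool add-cache subcommand plays the role of the Tier choice described for the web console above.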
Chapter 6. Deploying AMQ Streams using installation artifacts | Chapter 6. Deploying AMQ Streams using installation artifacts Having prepared your environment for a deployment of AMQ Streams , you can deploy AMQ Streams to an OpenShift cluster. You can use the deployment files provided with the release artifacts. Use the deployment files to create the Kafka cluster . Optionally, you can deploy the following Kafka components according to your requirements: Kafka Connect Kafka MirrorMaker Kafka Bridge AMQ Streams is based on Strimzi 0.28.x. You can deploy AMQ Streams 2.1 on OpenShift 4.6 to 4.10. Note To run the commands in this guide, your cluster user must have the rights to manage role-based access control (RBAC) and CRDs. 6.1. Create the Kafka cluster To be able to manage a Kafka cluster with the Cluster Operator, you must deploy it as a Kafka resource. AMQ Streams provides example deployment files to do this. You can use these files to deploy the Topic Operator and User Operator at the same time. If you haven't deployed a Kafka cluster as a Kafka resource, you can't use the Cluster Operator to manage it. This applies, for example, to a Kafka cluster running outside of OpenShift. But you can deploy and use the Topic Operator and User Operator as standalone components. Note The Cluster Operator can watch one, multiple, or all namespaces in an OpenShift cluster. The Topic Operator and User Operator watch for KafkaTopics and KafkaUsers in the single namespace of the Kafka cluster deployment. Deploying a Kafka cluster with the Topic Operator and User Operator Perform these deployment steps if you want to use the Topic Operator and User Operator with a Kafka cluster managed by AMQ Streams. Deploy the Cluster Operator Use the Cluster Operator to deploy the: Kafka cluster Topic Operator User Operator Deploying a standalone Topic Operator and User Operator Perform these deployment steps if you want to use the Topic Operator and User Operator with a Kafka cluster that is not managed by AMQ Streams. Deploy the standalone Topic Operator Deploy the standalone User Operator 6.1.1. Deploying the Cluster Operator The Cluster Operator is responsible for deploying and managing Apache Kafka clusters within an OpenShift cluster. The procedures in this section describe how to deploy the Cluster Operator to watch one of the following: A single namespace Multiple namespaces All namespaces 6.1.1.1. Watch options for a Cluster Operator deployment When the Cluster Operator is running, it starts to watch for updates of Kafka resources. You can choose to deploy the Cluster Operator to watch Kafka resources from: A single namespace (the same namespace containing the Cluster Operator) Multiple namespaces All namespaces Note AMQ Streams provides example YAML files to make the deployment process easier. The Cluster Operator watches for changes to the following resources: Kafka for the Kafka cluster. KafkaConnect for the Kafka Connect cluster. KafkaConnector for creating and managing connectors in a Kafka Connect cluster. KafkaMirrorMaker for the Kafka MirrorMaker instance. KafkaMirrorMaker2 for the Kafka MirrorMaker 2.0 instance. KafkaBridge for the Kafka Bridge instance. KafkaRebalance for the Cruise Control optimization requests. When one of these resources is created in the OpenShift cluster, the operator gets the cluster description from the resource and starts creating a new cluster for the resource by creating the necessary OpenShift resources, such as StatefulSets, Services and ConfigMaps. 
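A quick, optional way to confirm that these custom resource types are available on the cluster (a convenience check that assumes the oc CLI and the kafka.strimzi.io API group used by the examples in this chapter) is:

oc api-resources --api-group=kafka.strimzi.io
oc get kafka,kafkaconnect,kafkaconnector,kafkamirrormaker,kafkamirrormaker2,kafkabridge,kafkarebalance -n <namespace>

The first command lists the installed resource kinds; the second lists any existing instances in the given namespace.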
Each time a Kafka resource is updated, the operator performs corresponding updates on the OpenShift resources that make up the cluster for the resource. Resources are either patched or deleted, and then recreated in order to make the cluster for the resource reflect the desired state of the cluster. This operation might cause a rolling update that might lead to service disruption. When a resource is deleted, the operator undeploys the cluster and deletes all related OpenShift resources. 6.1.1.2. Deploying the Cluster Operator to watch a single namespace This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources in a single namespace in your OpenShift cluster. Prerequisites This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions , ClusterRoles and ClusterRoleBindings . Use of Role Base Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin . Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace <my_cluster_operator_namespace> . On Linux, use: On MacOS, use: Deploy the Cluster Operator: oc create -f install/cluster-operator -n <my_cluster_operator_namespace> Check the status of the deployment: oc get deployments -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE amq-streams-cluster-operator-<version> 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.1.1.3. Deploying the Cluster Operator to watch multiple namespaces This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across multiple namespaces in your OpenShift cluster. Prerequisites This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions , ClusterRoles and ClusterRoleBindings . Use of Role Base Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin . Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace <my_cluster_operator_namespace> . On Linux, use: On MacOS, use: Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to add a list of all the namespaces the Cluster Operator will watch to the STRIMZI_NAMESPACE environment variable. For example, in this procedure the Cluster Operator will watch the namespaces watched-namespace-1 , watched-namespace-2 , watched-namespace-3 . apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel8-operator:2.1.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3 For each namespace listed, install the RoleBindings . 
In this example, we replace watched-namespace in these commands with the namespaces listed in the step, repeating them for watched-namespace-1 , watched-namespace-2 , watched-namespace-3 : oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace> Deploy the Cluster Operator: oc create -f install/cluster-operator -n <my_cluster_operator_namespace> Check the status of the deployment: oc get deployments -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE amq-streams-cluster-operator-<version> 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.1.1.4. Deploying the Cluster Operator to watch all namespaces This procedure shows how to deploy the Cluster Operator to watch AMQ Streams resources across all namespaces in your OpenShift cluster. When running in this mode, the Cluster Operator automatically manages clusters in any new namespaces that are created. Prerequisites This procedure requires use of an OpenShift user account which is able to create CustomResourceDefinitions , ClusterRoles and ClusterRoleBindings . Use of Role Base Access Control (RBAC) in the OpenShift cluster usually means that permission to create, edit, and delete these resources is limited to OpenShift cluster administrators, such as system:admin . Procedure Edit the AMQ Streams installation files to use the namespace the Cluster Operator is going to be installed into. For example, in this procedure the Cluster Operator is installed into the namespace <my_cluster_operator_namespace> . On Linux, use: On MacOS, use: Edit the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file to set the value of the STRIMZI_NAMESPACE environment variable to * . apiVersion: apps/v1 kind: Deployment spec: # ... template: spec: # ... serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel8-operator:2.1.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: "*" # ... Create ClusterRoleBindings that grant cluster-wide access for all namespaces to the Cluster Operator. oc create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount <my_cluster_operator_namespace> :strimzi-cluster-operator oc create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount <my_cluster_operator_namespace> :strimzi-cluster-operator Replace <my_cluster_operator_namespace> with the namespace you want to install the Cluster Operator into. Deploy the Cluster Operator to your OpenShift cluster. oc create -f install/cluster-operator -n <my_cluster_operator_namespace> Check the status of the deployment: oc get deployments -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE amq-streams-cluster-operator-<version> 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.1.2. Deploying Kafka Apache Kafka is an open-source distributed publish-subscribe messaging system for fault-tolerant real-time data feeds. 
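Looking back at the multiple-namespace installation above: the two RoleBinding files must be applied once per watched namespace. A small shell loop (a convenience sketch, not part of the shipped installation files; the namespace names are the same examples used above) avoids repeating the commands by hand:

for ns in watched-namespace-1 watched-namespace-2 watched-namespace-3; do
  oc create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n "$ns"
  oc create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n "$ns"
done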
The procedures in this section describe the following: How to use the Cluster Operator to deploy: An ephemeral or persistent Kafka cluster The Topic Operator and User Operator by configuring the Kafka custom resource: Topic Operator User Operator Alternative standalone deployment procedures for the Topic Operator and User Operator: Deploy the standalone Topic Operator Deploy the standalone User Operator When installing Kafka, AMQ Streams also installs a ZooKeeper cluster and adds the necessary configuration to connect Kafka with ZooKeeper. 6.1.2.1. Deploying the Kafka cluster This procedure shows how to deploy a Kafka cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a Kafka resource. AMQ Streams provides example configuration files . For a Kafka deployment, the following examples are provided: kafka-persistent.yaml Deploys a persistent cluster with three ZooKeeper and three Kafka nodes. kafka-jbod.yaml Deploys a persistent cluster with three ZooKeeper and three Kafka nodes (each using multiple persistent volumes). kafka-persistent-single.yaml Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node. kafka-ephemeral.yaml Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes. kafka-ephemeral-single.yaml Deploys an ephemeral cluster with three ZooKeeper nodes and a single Kafka node. In this procedure, we use the examples for an ephemeral and persistent Kafka cluster deployment. Ephemeral cluster In general, an ephemeral (or temporary) Kafka cluster is suitable for development and testing purposes, not for production. This deployment uses emptyDir volumes for storing broker information (for ZooKeeper) and topics or partitions (for Kafka). Using an emptyDir volume means that its content is strictly related to the pod life cycle and is deleted when the pod goes down. Persistent cluster A persistent Kafka cluster uses PersistentVolumes to store ZooKeeper and Kafka data. The PersistentVolume is acquired using a PersistentVolumeClaim to make it independent of the actual type of the PersistentVolume . For example, it can use Amazon EBS volumes in Amazon AWS deployments without any changes in the YAML files. The PersistentVolumeClaim can use a StorageClass to trigger automatic volume provisioning. The example YAML files specify the latest supported Kafka version, and configuration for its supported log message format version and inter-broker protocol version. The inter.broker.protocol.version property for the Kafka config must be the version supported by the specified Kafka version ( spec.kafka.version ). The property represents the version of Kafka protocol used in a Kafka cluster. From Kafka 3.0.0, when the inter.broker.protocol.version is set to 3.0 or higher, the log.message.format.version option is ignored and doesn't need to be set. An update to the inter.broker.protocol.version is required when upgrading Kafka . The example clusters are named my-cluster by default. The cluster name is defined by the name of the resource and cannot be changed after the cluster has been deployed. To change the cluster name before you deploy the cluster, edit the Kafka.metadata.name property of the Kafka resource in the relevant YAML file. Default cluster name and specified Kafka versions apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.1.0 #... config: #... log.message.format.version: "3.1" inter.broker.protocol.version: "3.1" # ... 
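Because the cluster name cannot be changed after deployment, it can be worth renaming the example resource before applying it. The following is a minimal, hedged sketch using an arbitrary name, my-prod-cluster, in a copy of one of the example files; only the Kafka.metadata.name property changes and the rest of the file stays as shipped:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-prod-cluster
spec:
  kafka:
    version: 3.1.0
    # ... remaining configuration unchanged from the example file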
Prerequisites The Cluster Operator must be deployed. Procedure Create and deploy an ephemeral or persistent cluster. For development or testing, you might prefer to use an ephemeral cluster. You can use a persistent cluster in any situation. To create and deploy an ephemeral cluster: oc apply -f examples/kafka/kafka-ephemeral.yaml To create and deploy a persistent cluster: oc apply -f examples/kafka/kafka-persistent.yaml Check the status of the deployment: oc get pods -n <my_cluster_operator_namespace> Output shows the pod names and readiness NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-zookeeper-0 1/1 Running 0 my-cluster-zookeeper-1 1/1 Running 0 my-cluster-zookeeper-2 1/1 Running 0 my-cluster is the name of the Kafka cluster. With the default deployment, you install an Entity Operator cluster, 3 Kafka pods, and 3 ZooKeeper pods. READY shows the number of replicas that are ready/expected. The deployment is successful when the STATUS shows as Running . Additional resources Kafka cluster configuration 6.1.2.2. Deploying the Topic Operator using the Cluster Operator This procedure describes how to deploy the Topic Operator using the Cluster Operator. You configure the entityOperator property of the Kafka resource to include the topicOperator . By default, the Topic Operator watches for KafkaTopics in the namespace of the Kafka cluster deployment. If you want to use the Topic Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the Topic Operator as a standalone component . For more information about configuring the entityOperator and topicOperator properties, see Configuring the Entity Operator . Prerequisites The Cluster Operator must be deployed. Procedure Edit the entityOperator properties of the Kafka resource to include topicOperator : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} userOperator: {} Configure the Topic Operator spec using the properties described in EntityTopicOperatorSpec schema reference . Use an empty object ( {} ) if you want all properties to use their default values. Create or update the resource: Use oc apply : oc apply -f <your-file> 6.1.2.3. Deploying the User Operator using the Cluster Operator This procedure describes how to deploy the User Operator using the Cluster Operator. You configure the entityOperator property of the Kafka resource to include the userOperator . By default, the User Operator watches for KafkaUsers in the namespace of the Kafka cluster deployment. If you want to use the User Operator with a Kafka cluster that is not managed by AMQ Streams, you must deploy the User Operator as a standalone component . For more information about configuring the entityOperator and userOperator properties, see Configuring the Entity Operator . Prerequisites The Cluster Operator must be deployed. Procedure Edit the entityOperator properties of the Kafka resource to include userOperator : apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: #... entityOperator: topicOperator: {} userOperator: {} Configure the User Operator spec using the properties described in EntityUserOperatorSpec schema reference . Use an empty object ( {} ) if you want all properties to use their default values. Create or update the resource: oc apply -f <your-file> 6.1.3. 
Alternative standalone deployment options for AMQ Streams Operators You can perform a standalone deployment of the Topic Operator and User Operator. Consider a standalone deployment of these operators if you are using a Kafka cluster that is not managed by the Cluster Operator. You deploy the operators to OpenShift. Kafka can be running outside of OpenShift. For example, you might be using a Kafka as a managed service. You adjust the deployment configuration for the standalone operator to match the address of your Kafka cluster. 6.1.3.1. Deploying the standalone Topic Operator This procedure shows how to deploy the Topic Operator as a standalone component for topic management. You can use a standalone Topic Operator with a Kafka cluster that is not managed by the Cluster Operator. A standalone deployment can operate with any Kafka cluster. Standalone deployment files are provided with AMQ Streams. Use the 05-Deployment-strimzi-topic-operator.yaml deployment file to deploy the Topic Operator. Add or set the environment variables needed to make a connection to a Kafka cluster. Prerequisites You are running a Kafka cluster for the Topic Operator to connect to. As long as the standalone Topic Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service. Procedure Edit the env properties in the install/topic-operator/05-Deployment-strimzi-topic-operator.yaml standalone deployment file. Example standalone Topic Operator deployment configuration apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # ... template: # ... spec: # ... containers: - name: strimzi-topic-operator # ... env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS 3 value: "strimzi.io/cluster=my-cluster" - name: STRIMZI_ZOOKEEPER_CONNECT 4 value: my-cluster-zookeeper-client:2181 - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 5 value: "18000" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: "120000" - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 7 value: "6" - name: STRIMZI_LOG_LEVEL 8 value: INFO - name: STRIMZI_TLS_ENABLED 9 value: "false" - name: STRIMZI_JAVA_OPTS 10 value: "-Xmx=512M -Xms=256M" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 11 value: "-Djavax.net.debug=verbose -DpropertyName=value" - name: STRIMZI_PUBLIC_CA 12 value: "false" - name: STRIMZI_TLS_AUTH_ENABLED 13 value: "false" - name: STRIMZI_SASL_ENABLED 14 value: "false" - name: STRIMZI_SASL_USERNAME 15 value: "admin" - name: STRIMZI_SASL_PASSWORD 16 value: "password" - name: STRIMZI_SASL_MECHANISM 17 value: "scram-sha-512" - name: STRIMZI_SECURITY_PROTOCOL 18 value: "SSL" 1 The OpenShift namespace for the Topic Operator to watch for KafkaTopic resources. Specify the namespace of the Kafka cluster. 2 The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down. 3 The label selector to identify the KafkaTopic resources managed by the Topic Operator. 4 The host and port pair of the address to connect to the ZooKeeper cluster. This must be the same ZooKeeper cluster that your Kafka cluster is using. 5 The ZooKeeper session timeout, in milliseconds. The default is 18000 (18 seconds). 
6 The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes). 7 The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential backoff. Consider increasing this value when topic creation takes more time due to the number of partitions or replicas. The default is 6 attempts. 8 The level for printing logging messages. You can set the level to ERROR , WARNING , INFO , DEBUG , or TRACE . 9 Enables TLS support for encrypted communication with the Kafka brokers. 10 (Optional) The Java options used by the JVM running the Topic Operator. 11 (Optional) The debugging ( -D ) options set for the Topic Operator. 12 (Optional) Skips the generation of trust store certificates if TLS is enabled through STRIMZI_TLS_ENABLED . If this environment variable is enabled, the brokers must use a public trusted certificate authority for their TLS certificates. The default is false . 13 (Optional) Generates key store certificates for mutual TLS authentication. Setting this to false disables client authentication with TLS to the Kafka brokers. The default is true . 14 (Optional) Enables SASL support for client authentication when connecting to Kafka brokers. The default is false . 15 (Optional) The SASL username for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED . 16 (Optional) The SASL password for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED . 17 (Optional) The SASL mechanism for client authentication. Mandatory only if SASL is enabled through STRIMZI_SASL_ENABLED . You can set the value to plain , scram-sha-256 , or scram-sha-512 . 18 (Optional) The security protocol used for communication with Kafka brokers. The default value is "PLAINTEXT". You can set the value to PLAINTEXT , SSL , SASL_PLAINTEXT , or SASL_SSL . If you want to connect to Kafka brokers that are using certificates from a public certificate authority, set STRIMZI_PUBLIC_CA to true . Set this property to true , for example, if you are using Amazon AWS MSK service. If you enabled TLS with the STRIMZI_TLS_ENABLED environment variable, specify the keystore and truststore used to authenticate connection to the Kafka cluster. Example TLS configuration # .... env: - name: STRIMZI_TRUSTSTORE_LOCATION 1 value: "/path/to/truststore.p12" - name: STRIMZI_TRUSTSTORE_PASSWORD 2 value: " TRUSTSTORE-PASSWORD " - name: STRIMZI_KEYSTORE_LOCATION 3 value: "/path/to/keystore.p12" - name: STRIMZI_KEYSTORE_PASSWORD 4 value: " KEYSTORE-PASSWORD " # ... 1 The truststore contains the public keys of the Certificate Authorities used to sign the Kafka and ZooKeeper server certificates. 2 The password for accessing the truststore. 3 The keystore contains the private key for TLS client authentication. 4 The password for accessing the keystore. Deploy the Topic Operator. oc create -f install/topic-operator Check the status of the deployment: oc get deployments Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-topic-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.1.3.2. Deploying the standalone User Operator This procedure shows how to deploy the User Operator as a standalone component for user management. You can use a standalone User Operator with a Kafka cluster that is not managed by the Cluster Operator. A standalone deployment can operate with any Kafka cluster. 
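Before moving on to the User Operator, a quick smoke test for the standalone Topic Operator deployed above is to apply a minimal KafkaTopic resource. This is a hedged sketch: the topic name and settings are arbitrary, and the strimzi.io/cluster label must match the STRIMZI_RESOURCE_LABELS value configured earlier:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-smoke-test-topic
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 1
  replicas: 1

Applying the file with oc apply -f and then running oc get kafkatopic my-smoke-test-topic should show the topic once the operator has reconciled it against the Kafka cluster.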
Standalone deployment files are provided with AMQ Streams. Use the 05-Deployment-strimzi-user-operator.yaml deployment file to deploy the User Operator. Add or set the environment variables needed to make a connection to a Kafka cluster. Prerequisites You are running a Kafka cluster for the User Operator to connect to. As long as the standalone User Operator is correctly configured for connection, the Kafka cluster can be running on a bare-metal environment, a virtual machine, or as a managed cloud application service. Procedure Edit the following env properties in the install/user-operator/05-Deployment-strimzi-user-operator.yaml standalone deployment file. Example standalone User Operator deployment configuration apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-user-operator labels: app: strimzi spec: # ... template: # ... spec: # ... containers: - name: strimzi-user-operator # ... env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_CA_CERT_NAME 3 value: my-cluster-clients-ca-cert - name: STRIMZI_CA_KEY_NAME 4 value: my-cluster-clients-ca - name: STRIMZI_LABELS 5 value: "strimzi.io/cluster=my-cluster" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: "120000" - name: STRIMZI_LOG_LEVEL 7 value: INFO - name: STRIMZI_GC_LOG_ENABLED 8 value: "true" - name: STRIMZI_CA_VALIDITY 9 value: "365" - name: STRIMZI_CA_RENEWAL 10 value: "30" - name: STRIMZI_JAVA_OPTS 11 value: "-Xmx=512M -Xms=256M" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 12 value: "-Djavax.net.debug=verbose -DpropertyName=value" - name: STRIMZI_SECRET_PREFIX 13 value: "kafka-" - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 14 value: "true" 1 The OpenShift namespace for the User Operator to watch for KafkaUser resources. Only one namespace can be specified. 2 The host and port pair of the bootstrap broker address to discover and connect to all brokers in the Kafka cluster. Use a comma-separated list to specify two or three broker addresses in case a server is down. 3 The OpenShift Secret that contains the public key ( ca.crt ) value of the Certificate Authority that signs new user certificates for TLS client authentication. 4 The OpenShift Secret that contains the private key ( ca.key ) value of the Certificate Authority that signs new user certificates for TLS client authentication. 5 The label selector used to identify the KafkaUser resources managed by the User Operator. 6 The interval between periodic reconciliations, in milliseconds. The default is 120000 (2 minutes). 7 The level for printing logging messages. You can set the level to ERROR , WARNING , INFO , DEBUG , or TRACE . 8 Enables garbage collection (GC) logging. The default is true . 9 The validity period for the Certificate Authority. The default is 365 days. 10 The renewal period for the Certificate Authority. The renewal period is measured backwards from the expiry date of the current certificate. The default is 30 days to initiate certificate renewal before the old certificates expire. 11 (Optional) The Java options used by the JVM running the User Operator 12 (Optional) The debugging ( -D ) options set for the User Operator 13 (Optional) Prefix for the names of OpenShift secrets created by the User Operator. 14 (Optional) Indicates whether the Kafka cluster supports management of authorization ACL rules using the Kafka Admin API. When set to false , the User Operator will reject all resources with simple authorization ACL rules. 
This helps to avoid unnecessary exceptions in the Kafka cluster logs. The default is true . If you are using TLS to connect to the Kafka cluster, specify the secrets used to authenticate connection. Otherwise, go to the step. Example TLS configuration # .... env: - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1 value: my-cluster-cluster-ca-cert - name: STRIMZI_EO_KEY_SECRET_NAME 2 value: my-cluster-entity-operator-certs # ..." 1 The OpenShift Secret that contains the public key ( ca.crt ) value of the Certificate Authority that signs Kafka broker certificates for TLS client authentication. 2 The OpenShift Secret that contains the keystore ( entity-operator.p12 ) with the private key and certificate for TLS authentication against the Kafka cluster. The Secret must also contain the password ( entity-operator.password ) for accessing the keystore. Deploy the User Operator. oc create -f install/user-operator Check the status of the deployment: oc get deployments Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE strimzi-user-operator 1/1 1 1 READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . 6.2. Deploy Kafka Connect Kafka Connect is a tool for streaming data between Apache Kafka and external systems. In AMQ Streams, Kafka Connect is deployed in distributed mode. Kafka Connect can also work in standalone mode, but this is not supported by AMQ Streams. Using the concept of connectors , Kafka Connect provides a framework for moving large amounts of data into and out of your Kafka cluster while maintaining scalability and reliability. Kafka Connect is typically used to integrate Kafka with external databases and storage and messaging systems. The procedures in this section show how to: Deploy a Kafka Connect cluster using a KafkaConnect resource Run multiple Kafka Connect instances Create a Kafka Connect image containing the connectors you need to make your connection Create and manage connectors using a KafkaConnector resource or the Kafka Connect REST API Deploy a KafkaConnector resource to Kafka Connect Note The term connector is used interchangeably to mean a connector instance running within a Kafka Connect cluster, or a connector class. In this guide, the term connector is used when the meaning is clear from the context. 6.2.1. Deploying Kafka Connect to your OpenShift cluster This procedure shows how to deploy a Kafka Connect cluster to your OpenShift cluster using the Cluster Operator. A Kafka Connect cluster is implemented as a Deployment with a configurable number of nodes (also called workers ) that distribute the workload of connectors as tasks so that the message flow is highly scalable and reliable. The deployment uses a YAML file to provide the specification to create a KafkaConnect resource. AMQ Streams provides example configuration files . In this procedure, we use the following example file: examples/connect/kafka-connect.yaml Prerequisites The Cluster Operator must be deployed. Running Kafka cluster. Procedure Deploy Kafka Connect to your OpenShift cluster. Use the examples/connect/kafka-connect.yaml file to deploy Kafka Connect. oc apply -f examples/connect/kafka-connect.yaml Check the status of the deployment: oc get deployments -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE my-connect-cluster-connect 1/1 1 1 my-connect-cluster is the name of the Kafka Connect cluster. 
READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . Additional resources Kafka Connect cluster configuration 6.2.2. Kafka Connect configuration for multiple instances If you are running multiple instances of Kafka Connect, you have to change the default configuration of the following config properties: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: connect-cluster 1 offset.storage.topic: connect-cluster-offsets 2 config.storage.topic: connect-cluster-configs 3 status.storage.topic: connect-cluster-status 4 # ... # ... 1 The Kafka Connect cluster ID within Kafka. 2 Kafka topic that stores connector offsets. 3 Kafka topic that stores connector and task status configurations. 4 Kafka topic that stores connector and task status updates. Note Values for the three topics must be the same for all Kafka Connect instances with the same group.id . Unless you change the default settings, each Kafka Connect instance connecting to the same Kafka cluster is deployed with the same values. What happens, in effect, is all instances are coupled to run in a cluster and use the same topics. If multiple Kafka Connect clusters try to use the same topics, Kafka Connect will not work as expected and generate errors. If you wish to run multiple Kafka Connect instances, change the values of these properties for each instance. 6.2.3. Extending Kafka Connect with connector plugins The AMQ Streams container images for Kafka Connect include two built-in file connectors for moving file-based data into and out of your Kafka cluster. Table 6.1. File connectors File Connector Description FileStreamSourceConnector Transfers data to your Kafka cluster from a file (the source). FileStreamSinkConnector Transfers data from your Kafka cluster to a file (the sink). The procedures in this section show how to add your own connector classes to connector images by: Creating a new container image automatically using AMQ Streams Creating a container image from the Kafka Connect base image (manually or using continuous integration) Important You create the configuration for connectors directly using the Kafka Connect REST API or KafkaConnector custom resources . 6.2.3.1. Creating a new container image automatically using AMQ Streams This procedure shows how to configure Kafka Connect so that AMQ Streams automatically builds a new container image with additional connectors. You define the connector plugins using the .spec.build.plugins property of the KafkaConnect custom resource. AMQ Streams will automatically download and add the connector plugins into a new container image. The container is pushed into the container repository specified in .spec.build.output and automatically used in the Kafka Connect deployment. Prerequisites The Cluster Operator must be deployed. A container registry. You need to provide your own container registry where images can be pushed to, stored, and pulled from. AMQ Streams supports private container registries as well as public registries such as Quay or Docker Hub . Procedure Configure the KafkaConnect custom resource by specifying the container registry in .spec.build.output , and additional connectors in .spec.build.plugins : apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 #... 
build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.1.Final/debezium-connector-postgres-1.3.1.Final-plugin.tar.gz sha512sum: 962a12151bdf9a5a30627eebac739955a4fd95a08d373b86bdcea2b4d0c27dd6e1edd5cb548045e115e33a9e69b1b2a352bee24df035a0447cb820077af00c03 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.7.0/camel-telegram-kafka-connector-0.7.0-package.tar.gz sha512sum: a9b1ac63e3284bea7836d7d24d84208c49cdf5600070e6bd1535de654f6920b74ad950d51733e8020bf4187870699819f54ef5859c7846ee4081507f48873479 #... 1 The specification for the Kafka Connect cluster . 2 (Required) Configuration of the container registry where new images are pushed. 3 (Required) List of connector plugins and their artifacts to add to the new container image. Each plugin must be configured with at least one artifact . Create or update the resource: Wait for the new container image to build, and for the Kafka Connect cluster to be deployed. Use the Kafka Connect REST API or the KafkaConnector custom resources to use the connector plugins you added. Additional resources See the Using Strimzi guide for more information on: Kafka Connect Build schema reference 6.2.3.2. Creating a Docker image from the Kafka Connect base image This procedure shows how to create a custom image and add it to the /opt/kafka/plugins directory. You can use the Kafka container image on Red Hat Ecosystem Catalog as a base image for creating your own custom image with additional connector plugins. At startup, the AMQ Streams version of Kafka Connect loads any third-party connector plugins contained in the /opt/kafka/plugins directory. Prerequisites The Cluster Operator must be deployed. Procedure Create a new Dockerfile using registry.redhat.io/amq7/amq-streams-kafka-31-rhel8:2.1.0 as the base image: Example plug-in file Note This example uses the Debezium connectors for MongoDB, MySQL, and PostgreSQL. Debezium running in Kafka Connect looks the same as any other Kafka Connect task. Build the container image. Push your custom image to your container registry. Point to the new container image. You can either: Edit the KafkaConnect.spec.image property of the KafkaConnect custom resource. If set, this property overrides the STRIMZI_KAFKA_CONNECT_IMAGES variable in the Cluster Operator. apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 #... image: my-new-container-image 2 config: 3 #... 1 The specification for the Kafka Connect cluster . 2 The docker image for the pods. 3 Configuration of the Kafka Connect workers (not connectors). or In the install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml file, edit the STRIMZI_KAFKA_CONNECT_IMAGES variable to point to the new container image, and then reinstall the Cluster Operator. Additional resources Container image configuration and the KafkaConnect.spec.image property Cluster Operator configuration and the STRIMZI_KAFKA_CONNECT_IMAGES variable 6.2.4. Creating and managing connectors When you have created a container image for your connector plug-in, you need to create a connector instance in your Kafka Connect cluster. You can then configure, monitor, and manage a running connector instance. 
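For the container image approach described just above, the build and push steps can look like the following hedged sketch, assuming podman (docker behaves the same way) and a repository you control; the registry, organization, and tag are placeholders:

podman build -t quay.io/my-org/my-connect-cluster:latest .
podman push quay.io/my-org/my-connect-cluster:latest

The resulting image reference is the value you would then set in KafkaConnect.spec.image or in the STRIMZI_KAFKA_CONNECT_IMAGES variable, as described above.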
A connector is an instance of a particular connector class that knows how to communicate with the relevant external system in terms of messages. Connectors are available for many external systems, or you can create your own. You can create source and sink types of connector. Source connector A source connector is a runtime entity that fetches data from an external system and feeds it to Kafka as messages. Sink connector A sink connector is a runtime entity that fetches messages from Kafka topics and feeds them to an external system. 6.2.4.1. APIs for creating and managing connectors AMQ Streams provides two APIs for creating and managing connectors: KafkaConnector custom resources (referred to as KafkaConnectors) The Kafka Connect REST API Using the APIs, you can: Check the status of a connector instance Reconfigure a running connector Increase or decrease the number of connector tasks for a connector instance Restart connectors Restart connector tasks, including failed tasks Pause a connector instance Resume a previously paused connector instance Delete a connector instance KafkaConnector custom resources KafkaConnectors allow you to create and manage connector instances for Kafka Connect in an OpenShift-native way, so an HTTP client such as cURL is not required. Like other Kafka resources, you declare a connector's desired state in a KafkaConnector YAML file that is deployed to your OpenShift cluster to create the connector instance. KafkaConnector resources must be deployed to the same namespace as the Kafka Connect cluster they link to. You manage a running connector instance by updating its corresponding KafkaConnector resource, and then applying the updates. You remove a connector by deleting its corresponding KafkaConnector . To ensure compatibility with earlier versions of AMQ Streams, KafkaConnectors are disabled by default. To enable KafkaConnectors for a Kafka Connect cluster, you set the strimzi.io/use-connector-resources annotation to true in the KafkaConnect resource. For instructions, see Configuring Kafka Connect . When KafkaConnectors are enabled, the Cluster Operator begins to watch for them. It updates the configurations of running connector instances to match the configurations defined in their KafkaConnectors. AMQ Streams provides an example KafkaConnector configuration file, which you can use to create and manage a FileStreamSourceConnector and a FileStreamSinkConnector . Note You can restart a connector or restart a connector task by annotating a KafkaConnector resource. Kafka Connect API The operations supported by the Kafka Connect REST API are described in the Apache Kafka Connect API documentation . Switching from using the Kafka Connect API to using KafkaConnectors You can switch from using the Kafka Connect API to using KafkaConnectors to manage your connectors. To make the switch, do the following in the order shown: Deploy KafkaConnector resources with the configuration to create your connector instances. Enable KafkaConnectors in your Kafka Connect configuration by setting the strimzi.io/use-connector-resources annotation to true . Warning If you enable KafkaConnectors before creating the resources, you will delete all your connectors. To switch from using KafkaConnectors to using the Kafka Connect API, first remove the annotation that enables the KafkaConnectors from your Kafka Connect configuration. Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. 6.2.4.2. 
Deploying example KafkaConnector resources Use KafkaConnectors with Kafka Connect to stream data to and from a Kafka cluster. AMQ Streams provides example configuration files . In this procedure, we use the following example file: examples/connect/source-connector.yaml . The file is used to create the following connector instances: A FileStreamSourceConnector instance that reads each line from the Kafka license file (the source) and writes the data as messages to a single Kafka topic. A FileStreamSinkConnector instance that reads messages from the Kafka topic and writes the messages to a temporary file (the sink). Note In a production environment, you prepare container images containing your desired Kafka Connect connectors, as described in Section 6.2.3, "Extending Kafka Connect with connector plugins" . The FileStreamSourceConnector and FileStreamSinkConnector are provided as examples. Running these connectors in containers as described here is unlikely to be suitable for production use cases. Prerequisites A Kafka Connect deployment KafkaConnectors are enabled in the Kafka Connect deployment The Cluster Operator is running Procedure Edit the examples/connect/source-connector.yaml file: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 config: 5 file: "/opt/kafka/LICENSE" 6 topic: my-topic 7 # ... 1 Name of the KafkaConnector resource, which is used as the name of the connector. Use any name that is valid for an OpenShift resource. 2 Name of the Kafka Connect cluster to create the connector instance in. Connectors must be deployed to the same namespace as the Kafka Connect cluster they link to. 3 Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster. 4 Maximum number of Kafka Connect Tasks that the connector can create. 5 Connector configuration as key-value pairs. 6 This example source connector configuration reads data from the /opt/kafka/LICENSE file. 7 Kafka topic to publish the source data to. Create the source KafkaConnector in your OpenShift cluster: oc apply -f examples/connect/source-connector.yaml Create an examples/connect/sink-connector.yaml file: touch examples/connect/sink-connector.yaml Paste the following YAML into the sink-connector.yaml file: apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: "/tmp/my-file" 3 topics: my-topic 4 1 Full name or alias of the connector class. This should be present in the image being used by the Kafka Connect cluster. 2 Connector configuration as key-value pairs. 3 Temporary file to publish the source data to. 4 Kafka topic to read the source data from. Create the sink KafkaConnector in your OpenShift cluster: oc apply -f examples/connect/sink-connector.yaml Check that the connector resources were created: oc get kctr --selector strimzi.io/cluster= MY-CONNECT-CLUSTER -o name my-source-connector my-sink-connector Replace MY-CONNECT-CLUSTER with your Kafka Connect cluster. In the container, execute kafka-console-consumer.sh to read the messages that were written to the topic by the source connector: oc exec MY-CLUSTER -kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server MY-CLUSTER -kafka-bootstrap. 
NAMESPACE .svc:9092 --topic my-topic --from-beginning Source and sink connector configuration options The connector configuration is defined in the spec.config property of the KafkaConnector resource. The FileStreamSourceConnector and FileStreamSinkConnector classes support the same configuration options as the Kafka Connect REST API. Other connectors support different configuration options. Table 6.2. Configuration options for the FileStreamSource connector class Name Type Default value Description file String Null Source file to write messages to. If not specified, the standard input is used. topic List Null The Kafka topic to publish data to. Table 6.3. Configuration options for FileStreamSinkConnector class Name Type Default value Description file String Null Destination file to write messages to. If not specified, the standard output is used. topics List Null One or more Kafka topics to read data from. topics.regex String Null A regular expression matching one or more Kafka topics to read data from. 6.2.4.3. Performing a restart of a Kafka connector This procedure describes how to manually trigger a restart of a Kafka connector by using an OpenShift annotation. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the Kafka connector you want to restart: oc get KafkaConnector To restart the connector, annotate the KafkaConnector resource in OpenShift. For example, using oc annotate : oc annotate KafkaConnector KAFKACONNECTOR-NAME strimzi.io/restart=true Wait for the reconciliation to occur (every two minutes by default). The Kafka connector is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource. 6.2.4.4. Performing a restart of a Kafka connector task This procedure describes how to manually trigger a restart of a Kafka connector task by using an OpenShift annotation. Prerequisites The Cluster Operator is running. Procedure Find the name of the KafkaConnector custom resource that controls the Kafka connector task you want to restart: oc get KafkaConnector Find the ID of the task to be restarted from the KafkaConnector custom resource. Task IDs are non-negative integers, starting from 0. oc describe KafkaConnector KAFKACONNECTOR-NAME To restart the connector task, annotate the KafkaConnector resource in OpenShift. For example, using oc annotate to restart task 0: oc annotate KafkaConnector KAFKACONNECTOR-NAME strimzi.io/restart-task=0 Wait for the reconciliation to occur (every two minutes by default). The Kafka connector task is restarted, as long as the annotation was detected by the reconciliation process. When Kafka Connect accepts the restart request, the annotation is removed from the KafkaConnector custom resource. 6.2.4.5. Exposing the Kafka Connect API Use the Kafka Connect REST API as an alternative to using KafkaConnector resources to manage connectors. The Kafka Connect REST API is available as a service running on <connect_cluster_name> -connect-api:8083 , where <connect_cluster_name> is the name of your Kafka Connect cluster. The service is created when you create a Kafka Connect instance. Note The strimzi.io/use-connector-resources annotation enables KafkaConnectors. If you applied the annotation to your KafkaConnect resource configuration, you need to remove it to use the Kafka Connect API. 
Otherwise, manual changes made directly using the Kafka Connect REST API are reverted by the Cluster Operator. You can add the connector configuration as a JSON object. Example curl request to add connector configuration curl -X POST \ http://my-connect-cluster-connect-api:8083/connectors \ -H 'Content-Type: application/json' \ -d '{ "name": "my-source-connector", "config": { "connector.class":"org.apache.kafka.connect.file.FileStreamSourceConnector", "file": "/opt/kafka/LICENSE", "topic":"my-topic", "tasksMax": "4", "type": "source" } }' The API is only accessible within the OpenShift cluster. If you want to make the Kafka Connect API accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features: LoadBalancer or NodePort type services Ingress resources OpenShift routes Note The connection is insecure, so allow external access advisedly. If you decide to create services, use the labels from the selector of the <connect_cluster_name> -connect-api service to configure the pods to which the service will route the traffic: Selector configuration for the service # ... selector: strimzi.io/cluster: my-connect-cluster 1 strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect 2 #... 1 Name of the Kafka Connect custom resource in your OpenShift cluster. 2 Name of the Kafka Connect deployment created by the Cluster Operator. You must also create a NetworkPolicy that allows HTTP requests from external clients. Example NetworkPolicy to allow requests to the Kafka Connect API apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-custom-connect-network-policy spec: ingress: - from: - podSelector: 1 matchLabels: app: my-connector-manager ports: - port: 8083 protocol: TCP podSelector: matchLabels: strimzi.io/cluster: my-connect-cluster strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect policyTypes: - Ingress 1 The label of the pod that is allowed to connect to the API. To add the connector configuration outside the cluster, use the URL of the resource that exposes the API in the curl command. 6.3. Deploy Kafka MirrorMaker The Cluster Operator deploys one or more Kafka MirrorMaker replicas to replicate data between Kafka clusters. This process is called mirroring to avoid confusion with the Kafka partitions replication concept. MirrorMaker consumes messages from the source cluster and republishes those messages to the target cluster. 6.3.1. Deploying Kafka MirrorMaker to your OpenShift cluster This procedure shows how to deploy a Kafka MirrorMaker cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a KafkaMirrorMaker or KafkaMirrorMaker2 resource depending on the version of MirrorMaker deployed. Important Kafka MirrorMaker 1 (referred to as just MirrorMaker in the documentation) has been deprecated in Apache Kafka 3.0.0 and will be removed in Apache Kafka 4.0.0. As a result, the KafkaMirrorMaker custom resource which is used to deploy Kafka MirrorMaker 1 has been deprecated in AMQ Streams as well. The KafkaMirrorMaker resource will be removed from AMQ Streams when we adopt Apache Kafka 4.0.0. As a replacement, use the KafkaMirrorMaker2 custom resource with the IdentityReplicationPolicy . AMQ Streams provides example configuration files . 
In this procedure, we use the following example files: examples/mirror-maker/kafka-mirror-maker.yaml examples/mirror-maker/kafka-mirror-maker-2.yaml Prerequisites The Cluster Operator must be deployed. Procedure Deploy Kafka MirrorMaker to your OpenShift cluster: For MirrorMaker: oc apply -f examples/mirror-maker/kafka-mirror-maker.yaml For MirrorMaker 2.0: oc apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml Check the status of the deployment: oc get deployments -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE my-mirror-maker-mirror-maker 1/1 1 1 my-mm2-cluster-mirrormaker2 1/1 1 1 my-mirror-maker is the name of the Kafka MirrorMaker cluster. my-mm2-cluster is the name of the Kafka MirrorMaker 2.0 cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . Additional resources Kafka MirrorMaker cluster configuration 6.4. Deploy Kafka Bridge The Cluster Operator deploys one or more Kafka bridge replicas to send data between Kafka clusters and clients via HTTP API. 6.4.1. Deploying Kafka Bridge to your OpenShift cluster This procedure shows how to deploy a Kafka Bridge cluster to your OpenShift cluster using the Cluster Operator. The deployment uses a YAML file to provide the specification to create a KafkaBridge resource. AMQ Streams provides example configuration files . In this procedure, we use the following example file: examples/bridge/kafka-bridge.yaml Prerequisites The Cluster Operator must be deployed. Procedure Deploy Kafka Bridge to your OpenShift cluster: oc apply -f examples/bridge/kafka-bridge.yaml Check the status of the deployment: oc get deployments -n <my_cluster_operator_namespace> Output shows the deployment name and readiness NAME READY UP-TO-DATE AVAILABLE my-bridge-bridge 1/1 1 1 my-bridge is the name of the Kafka Bridge cluster. READY shows the number of replicas that are ready/expected. The deployment is successful when the AVAILABLE output shows 1 . Additional resources Kafka Bridge cluster configuration Using the AMQ Streams Kafka Bridge 6.4.2. Exposing the Kafka Bridge service to your local machine Use port forwarding to expose the AMQ Streams Kafka Bridge service to your local machine on http://localhost:8080 . Note Port forwarding is only suitable for development and testing purposes. Procedure List the names of the pods in your OpenShift cluster: oc get pods -o name pod/kafka-consumer # ... pod/quickstart-bridge-589d78784d-9jcnr pod/strimzi-cluster-operator-76bcf9bc76-8dnfm Connect to the Kafka Bridge pod on port 8080 : oc port-forward pod/quickstart-bridge-589d78784d-9jcnr 8080:8080 & Note If port 8080 on your local machine is already in use, use an alternative HTTP port, such as 8008 . API requests are now forwarded from port 8080 on your local machine to port 8080 in the Kafka Bridge pod. 6.4.3. Accessing the Kafka Bridge outside of OpenShift After deployment, the AMQ Streams Kafka Bridge can only be accessed by applications running in the same OpenShift cluster. These applications use the <kafka_bridge_name> -bridge-service service to access the API. 
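As a quick check that the port-forwarded Kafka Bridge from the previous subsection is answering, a couple of simple requests against the local port can be used; the /healthy and /ready endpoints are assumed here from the Kafka Bridge API, and the port should be adjusted if you chose an alternative one:

curl http://localhost:8080/healthy
curl http://localhost:8080/ready

Both should return a successful (2xx) HTTP status while the Bridge pod is running.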
If you want to make the Kafka Bridge accessible to applications running outside of the OpenShift cluster, you can expose it manually by creating one of the following features: LoadBalancer or NodePort type services Ingress resources OpenShift routes If you decide to create Services, use the labels from the selector of the <kafka_bridge_name> -bridge-service service to configure the pods to which the service will route the traffic: # ... selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #... 1 Name of the Kafka Bridge custom resource in your OpenShift cluster. | [
"sed -i 's/namespace: .*/namespace: <my_cluster_operator_namespace> /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: <my_cluster_operator_namespace> /' install/cluster-operator/*RoleBinding*.yaml",
"create -f install/cluster-operator -n <my_cluster_operator_namespace>",
"get deployments -n <my_cluster_operator_namespace>",
"NAME READY UP-TO-DATE AVAILABLE amq-streams-cluster-operator-<version> 1/1 1 1",
"sed -i 's/namespace: .*/namespace: <my_cluster_operator_namespace> /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: <my_cluster_operator_namespace> /' install/cluster-operator/*RoleBinding*.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel8-operator:2.1.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: watched-namespace-1,watched-namespace-2,watched-namespace-3",
"create -f install/cluster-operator/020-RoleBinding-strimzi-cluster-operator.yaml -n <watched_namespace> create -f install/cluster-operator/031-RoleBinding-strimzi-cluster-operator-entity-operator-delegation.yaml -n <watched_namespace>",
"create -f install/cluster-operator -n <my_cluster_operator_namespace>",
"get deployments -n <my_cluster_operator_namespace>",
"NAME READY UP-TO-DATE AVAILABLE amq-streams-cluster-operator-<version> 1/1 1 1",
"sed -i 's/namespace: .*/namespace: <my_cluster_operator_namespace> /' install/cluster-operator/*RoleBinding*.yaml",
"sed -i '' 's/namespace: .*/namespace: <my_cluster_operator_namespace> /' install/cluster-operator/*RoleBinding*.yaml",
"apiVersion: apps/v1 kind: Deployment spec: # template: spec: # serviceAccountName: strimzi-cluster-operator containers: - name: strimzi-cluster-operator image: registry.redhat.io/amq7/amq-streams-rhel8-operator:2.1.0 imagePullPolicy: IfNotPresent env: - name: STRIMZI_NAMESPACE value: \"*\" #",
"create clusterrolebinding strimzi-cluster-operator-namespaced --clusterrole=strimzi-cluster-operator-namespaced --serviceaccount <my_cluster_operator_namespace> :strimzi-cluster-operator create clusterrolebinding strimzi-cluster-operator-entity-operator-delegation --clusterrole=strimzi-entity-operator --serviceaccount <my_cluster_operator_namespace> :strimzi-cluster-operator",
"create -f install/cluster-operator -n <my_cluster_operator_namespace>",
"get deployments -n <my_cluster_operator_namespace>",
"NAME READY UP-TO-DATE AVAILABLE amq-streams-cluster-operator-<version> 1/1 1 1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: version: 3.1.0 # config: # log.message.format.version: \"3.1\" inter.broker.protocol.version: \"3.1\" #",
"apply -f examples/kafka/kafka-ephemeral.yaml",
"apply -f examples/kafka/kafka-persistent.yaml",
"get pods -n <my_cluster_operator_namespace>",
"NAME READY STATUS RESTARTS my-cluster-entity-operator 3/3 Running 0 my-cluster-kafka-0 1/1 Running 0 my-cluster-kafka-1 1/1 Running 0 my-cluster-kafka-2 1/1 Running 0 my-cluster-zookeeper-0 1/1 Running 0 my-cluster-zookeeper-1 1/1 Running 0 my-cluster-zookeeper-2 1/1 Running 0",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}",
"apply -f <your-file>",
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: topicOperator: {} userOperator: {}",
"apply -f <your-file>",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-topic-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-topic-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_RESOURCE_LABELS 3 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_ZOOKEEPER_CONNECT 4 value: my-cluster-zookeeper-client:2181 - name: STRIMZI_ZOOKEEPER_SESSION_TIMEOUT_MS 5 value: \"18000\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: \"120000\" - name: STRIMZI_TOPIC_METADATA_MAX_ATTEMPTS 7 value: \"6\" - name: STRIMZI_LOG_LEVEL 8 value: INFO - name: STRIMZI_TLS_ENABLED 9 value: \"false\" - name: STRIMZI_JAVA_OPTS 10 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 11 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_PUBLIC_CA 12 value: \"false\" - name: STRIMZI_TLS_AUTH_ENABLED 13 value: \"false\" - name: STRIMZI_SASL_ENABLED 14 value: \"false\" - name: STRIMZI_SASL_USERNAME 15 value: \"admin\" - name: STRIMZI_SASL_PASSWORD 16 value: \"password\" - name: STRIMZI_SASL_MECHANISM 17 value: \"scram-sha-512\" - name: STRIMZI_SECURITY_PROTOCOL 18 value: \"SSL\"",
". env: - name: STRIMZI_TRUSTSTORE_LOCATION 1 value: \"/path/to/truststore.p12\" - name: STRIMZI_TRUSTSTORE_PASSWORD 2 value: \" TRUSTSTORE-PASSWORD \" - name: STRIMZI_KEYSTORE_LOCATION 3 value: \"/path/to/keystore.p12\" - name: STRIMZI_KEYSTORE_PASSWORD 4 value: \" KEYSTORE-PASSWORD \"",
"create -f install/topic-operator",
"get deployments",
"NAME READY UP-TO-DATE AVAILABLE strimzi-topic-operator 1/1 1 1",
"apiVersion: apps/v1 kind: Deployment metadata: name: strimzi-user-operator labels: app: strimzi spec: # template: # spec: # containers: - name: strimzi-user-operator # env: - name: STRIMZI_NAMESPACE 1 valueFrom: fieldRef: fieldPath: metadata.namespace - name: STRIMZI_KAFKA_BOOTSTRAP_SERVERS 2 value: my-kafka-bootstrap-address:9092 - name: STRIMZI_CA_CERT_NAME 3 value: my-cluster-clients-ca-cert - name: STRIMZI_CA_KEY_NAME 4 value: my-cluster-clients-ca - name: STRIMZI_LABELS 5 value: \"strimzi.io/cluster=my-cluster\" - name: STRIMZI_FULL_RECONCILIATION_INTERVAL_MS 6 value: \"120000\" - name: STRIMZI_LOG_LEVEL 7 value: INFO - name: STRIMZI_GC_LOG_ENABLED 8 value: \"true\" - name: STRIMZI_CA_VALIDITY 9 value: \"365\" - name: STRIMZI_CA_RENEWAL 10 value: \"30\" - name: STRIMZI_JAVA_OPTS 11 value: \"-Xmx=512M -Xms=256M\" - name: STRIMZI_JAVA_SYSTEM_PROPERTIES 12 value: \"-Djavax.net.debug=verbose -DpropertyName=value\" - name: STRIMZI_SECRET_PREFIX 13 value: \"kafka-\" - name: STRIMZI_ACLS_ADMIN_API_SUPPORTED 14 value: \"true\"",
". env: - name: STRIMZI_CLUSTER_CA_CERT_SECRET_NAME 1 value: my-cluster-cluster-ca-cert - name: STRIMZI_EO_KEY_SECRET_NAME 2 value: my-cluster-entity-operator-certs ...\"",
"create -f install/user-operator",
"get deployments",
"NAME READY UP-TO-DATE AVAILABLE strimzi-user-operator 1/1 1 1",
"apply -f examples/connect/kafka-connect.yaml",
"get deployments -n <my_cluster_operator_namespace>",
"NAME READY UP-TO-DATE AVAILABLE my-connect-cluster-connect 1/1 1 1",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: connect-cluster 1 offset.storage.topic: connect-cluster-offsets 2 config.storage.topic: connect-cluster-configs 3 status.storage.topic: connect-cluster-status 4 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # build: output: 2 type: docker image: my-registry.io/my-org/my-connect-cluster:latest pushSecret: my-registry-credentials plugins: 3 - name: debezium-postgres-connector artifacts: - type: tgz url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/1.3.1.Final/debezium-connector-postgres-1.3.1.Final-plugin.tar.gz sha512sum: 962a12151bdf9a5a30627eebac739955a4fd95a08d373b86bdcea2b4d0c27dd6e1edd5cb548045e115e33a9e69b1b2a352bee24df035a0447cb820077af00c03 - name: camel-telegram artifacts: - type: tgz url: https://repo.maven.apache.org/maven2/org/apache/camel/kafkaconnector/camel-telegram-kafka-connector/0.7.0/camel-telegram-kafka-connector-0.7.0-package.tar.gz sha512sum: a9b1ac63e3284bea7836d7d24d84208c49cdf5600070e6bd1535de654f6920b74ad950d51733e8020bf4187870699819f54ef5859c7846ee4081507f48873479 #",
"oc apply -f KAFKA-CONNECT-CONFIG-FILE",
"FROM registry.redhat.io/amq7/amq-streams-kafka-31-rhel8:2.1.0 USER root:root COPY ./ my-plugins / /opt/kafka/plugins/ USER 1001",
"tree ./ my-plugins / ./ my-plugins / ├── debezium-connector-mongodb │ ├── bson-3.4.2.jar │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mongodb-0.7.1.jar │ ├── debezium-core-0.7.1.jar │ ├── LICENSE.txt │ ├── mongodb-driver-3.4.2.jar │ ├── mongodb-driver-core-3.4.2.jar │ └── README.md ├── debezium-connector-mysql │ ├── CHANGELOG.md │ ├── CONTRIBUTE.md │ ├── COPYRIGHT.txt │ ├── debezium-connector-mysql-0.7.1.jar │ ├── debezium-core-0.7.1.jar │ ├── LICENSE.txt │ ├── mysql-binlog-connector-java-0.13.0.jar │ ├── mysql-connector-java-5.1.40.jar │ ├── README.md │ └── wkb-1.0.2.jar └── debezium-connector-postgres ├── CHANGELOG.md ├── CONTRIBUTE.md ├── COPYRIGHT.txt ├── debezium-connector-postgres-0.7.1.jar ├── debezium-core-0.7.1.jar ├── LICENSE.txt ├── postgresql-42.0.0.jar ├── protobuf-java-2.6.1.jar └── README.md",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: 1 # image: my-new-container-image 2 config: 3 #",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector 1 labels: strimzi.io/cluster: my-connect-cluster 2 spec: class: org.apache.kafka.connect.file.FileStreamSourceConnector 3 tasksMax: 2 4 config: 5 file: \"/opt/kafka/LICENSE\" 6 topic: my-topic 7 #",
"apply -f examples/connect/source-connector.yaml",
"touch examples/connect/sink-connector.yaml",
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-sink-connector labels: strimzi.io/cluster: my-connect spec: class: org.apache.kafka.connect.file.FileStreamSinkConnector 1 tasksMax: 2 config: 2 file: \"/tmp/my-file\" 3 topics: my-topic 4",
"apply -f examples/connect/sink-connector.yaml",
"get kctr --selector strimzi.io/cluster= MY-CONNECT-CLUSTER -o name my-source-connector my-sink-connector",
"exec MY-CLUSTER -kafka-0 -i -t -- bin/kafka-console-consumer.sh --bootstrap-server MY-CLUSTER -kafka-bootstrap. NAMESPACE .svc:9092 --topic my-topic --from-beginning",
"get KafkaConnector",
"annotate KafkaConnector KAFKACONNECTOR-NAME strimzi.io/restart=true",
"get KafkaConnector",
"describe KafkaConnector KAFKACONNECTOR-NAME",
"annotate KafkaConnector KAFKACONNECTOR-NAME strimzi.io/restart-task=0",
"curl -X POST http://my-connect-cluster-connect-api:8083/connectors -H 'Content-Type: application/json' -d '{ \"name\": \"my-source-connector\", \"config\": { \"connector.class\":\"org.apache.kafka.connect.file.FileStreamSourceConnector\", \"file\": \"/opt/kafka/LICENSE\", \"topic\":\"my-topic\", \"tasksMax\": \"4\", \"type\": \"source\" } }'",
"selector: strimzi.io/cluster: my-connect-cluster 1 strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect 2 #",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: my-custom-connect-network-policy spec: ingress: - from: - podSelector: 1 matchLabels: app: my-connector-manager ports: - port: 8083 protocol: TCP podSelector: matchLabels: strimzi.io/cluster: my-connect-cluster strimzi.io/kind: KafkaConnect strimzi.io/name: my-connect-cluster-connect policyTypes: - Ingress",
"apply -f examples/mirror-maker/kafka-mirror-maker.yaml",
"apply -f examples/mirror-maker/kafka-mirror-maker-2.yaml",
"get deployments -n <my_cluster_operator_namespace>",
"NAME READY UP-TO-DATE AVAILABLE my-mirror-maker-mirror-maker 1/1 1 1 my-mm2-cluster-mirrormaker2 1/1 1 1",
"apply -f examples/bridge/kafka-bridge.yaml",
"get deployments -n <my_cluster_operator_namespace>",
"NAME READY UP-TO-DATE AVAILABLE my-bridge-bridge 1/1 1 1",
"get pods -o name pod/kafka-consumer pod/quickstart-bridge-589d78784d-9jcnr pod/strimzi-cluster-operator-76bcf9bc76-8dnfm",
"port-forward pod/quickstart-bridge-589d78784d-9jcnr 8080:8080 &",
"selector: strimzi.io/cluster: kafka-bridge-name 1 strimzi.io/kind: KafkaBridge #"
] | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/deploying_and_upgrading_amq_streams_on_openshift/deploy-tasks_str |
Chapter 3. Installing the Network Observability Operator | Chapter 3. Installing the Network Observability Operator Installing Loki is a recommended prerequisite for using the Network Observability Operator. You can choose to use Network Observability without Loki , but there are some considerations for doing this, described in the previously linked section. The Loki Operator integrates a gateway that implements multi-tenancy and authentication with Loki for data flow storage. The LokiStack resource manages Loki, which is a scalable, highly-available, multi-tenant log aggregation system, and a web proxy with OpenShift Container Platform authentication. The LokiStack proxy uses OpenShift Container Platform authentication to enforce multi-tenancy and facilitate the saving and indexing of data in Loki log stores. Note The Loki Operator can also be used for configuring the LokiStack log store]. The Network Observability Operator requires a dedicated LokiStack separate from the logging. 3.1. Network Observability without Loki You can use Network Observability without Loki by not performing the Loki installation steps and skipping directly to "Installing the Network Observability Operator". If you only want to export flows to a Kafka consumer or IPFIX collector, or you only need dashboard metrics, then you do not need to install Loki or provide storage for Loki. The following table compares available features with and without Loki. Table 3.1. Comparison of feature availability with and without Loki With Loki Without Loki Exporters Multi-tenancy Complete filtering and aggregations capabilities [1] Partial filtering and aggregations capabilities [2] Flow-based metrics and dashboards Traffic flows view overview [3] Traffic flows view table Topology view OpenShift Container Platform console Network Traffic tab integration Such as per pod. Such as per workload or namespace. Statistics on packet drops are only available with Loki. Additional resources Export enriched network flow data . 3.2. Installing the Loki Operator The Loki Operator versions 5.7+ are the supported Loki Operator versions for Network Observability; these versions provide the ability to create a LokiStack instance using the openshift-network tenant configuration mode and provide fully-automatic, in-cluster authentication and authorization support for Network Observability. There are several ways you can install Loki. One way is by using the OpenShift Container Platform web console Operator Hub. Prerequisites Supported Log Store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation) OpenShift Container Platform 4.10+ Linux Kernel 4.18+ Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Loki Operator from the list of available Operators, and click Install . Under Installation Mode , select All namespaces on the cluster . Verification Verify that you installed the Loki Operator. Visit the Operators Installed Operators page and look for Loki Operator . Verify that Loki Operator is listed with Status as Succeeded in all the projects. Important To uninstall Loki, refer to the uninstallation process that corresponds with the method you used to install Loki. You might have remaining ClusterRoles and ClusterRoleBindings , data stored in object store, and persistent volume that must be removed. 3.2.1. Creating a secret for Loki storage The Loki Operator supports a few log storage options, such as AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation. 
The following example shows how to create a secret for AWS S3 storage. The secret created in this example, loki-s3 , is referenced in "Creating a LokiStack resource". You can create this secret in the web console or CLI. Using the web console, navigate to the Project All Projects dropdown and select Create Project . Name the project netobserv and click Create . Navigate to the Import icon, + , in the top right corner. Paste your YAML file into the editor. The following shows an example secret YAML file for S3 storage: apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv 1 stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 1 The installation examples in this documentation use the same namespace, netobserv , across all components. You can optionally use a different namespace for the different components Verification Once you create the secret, you should see it listed under Workloads Secrets in the web console. Additional resources Flow Collector API Reference Flow Collector sample resource 3.2.2. Creating a LokiStack custom resource You can deploy a LokiStack custom resource (CR) by using the web console or OpenShift CLI ( oc ) to create a namespace, or new project. Procedure Navigate to Operators Installed Operators , viewing All projects from the Project dropdown. Look for Loki Operator . In the details, under Provided APIs , select LokiStack . Click Create LokiStack . Ensure the following fields are specified in either Form View or YAML view : apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv 1 spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 3 tenants: mode: openshift-network 1 The installation examples in this documentation use the same namespace, netobserv , across all components. You can optionally use a different namespace. 2 Specify the deployment size. In the Loki Operator 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . Important It is not possible to change the number 1x for the deployment size. 3 Use a storage class name that is available on the cluster for ReadWriteOnce access mode. You can use oc get storageclasses to see what is available on your cluster. Important You must not reuse the same LokiStack CR that is used for logging. Click Create . 3.2.3. Creating a new group for the cluster-admin user role Important Querying application logs for multiple namespaces as a cluster-admin user, where the sum total of characters of all of the namespaces in the cluster is greater than 5120, results in the error Parse error: input size too long (XXXX > 5120) . For better control over access to logs in LokiStack, make the cluster-admin user a member of the cluster-admin group. If the cluster-admin group does not exist, create it and add the desired users to it. Use the following procedure to create a new group for users with cluster-admin permissions. 
Procedure Enter the following command to create a new group: USD oc adm groups new cluster-admin Enter the following command to add the desired user to the cluster-admin group: USD oc adm groups add-users cluster-admin <username> Enter the following command to add cluster-admin user role to the group: USD oc adm policy add-cluster-role-to-group cluster-admin cluster-admin 3.2.4. Custom admin group access If you need to see cluster-wide logs without necessarily being an administrator, or if you already have any group defined that you want to use here, you can specify a custom group using the adminGroup field. Users who are members of any group specified in the adminGroups field of the LokiStack custom resource (CR) have the same read access to logs as administrators. Administrator users have access to all application logs in all namespaces, if they also get assigned the cluster-logging-application-view role. Administrator users have access to all network logs across the cluster. Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: tenants: mode: openshift-network 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3 1 Custom admin groups are only available in this mode. 2 Entering an empty list [] value for this field disables admin groups. 3 Overrides the default groups ( system:cluster-admins , cluster-admin , dedicated-admin ) 3.2.5. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is number of instances and <size> specifies performance capabilities. Important It is not possible to change the number 1x for the deployment size. Table 3.2. Loki sizing 1x.demo 1x.extra-small 1x.small 1x.medium Data transfer Demo use only 100GB/day 500GB/day 2TB/day Queries per second (QPS) Demo use only 1-25 QPS at 200ms 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 2 2 Total CPU requests None 14 vCPUs 34 vCPUs 54 vCPUs Total memory requests None 31Gi 67Gi 139Gi Total disk requests 40Gi 430Gi 430Gi 590Gi 3.2.6. LokiStack ingestion limits and health alerts The LokiStack instance comes with default settings according to the configured size. It is possible to override some of these settings, such as the ingestion and query limits. You might want to update them if you get Loki errors showing up in the Console plugin, or in flowlogs-pipeline logs. An automatic alert in the web console notifies you when these limits are reached. Here is an example of configured limits: spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000 For more information about these settings, see the LokiStack API reference . 3.3. Installing the Network Observability Operator You can install the Network Observability Operator using the OpenShift Container Platform web console Operator Hub. When you install the Operator, it provides the FlowCollector custom resource definition (CRD). You can set specifications in the web console when you create the FlowCollector . Important The actual memory consumption of the Operator depends on your cluster size and the number of resources deployed. Memory consumption might need to be adjusted accordingly. For more information refer to "Network Observability controller manager pod runs out of memory" in the "Important Flow Collector configuration considerations" section. 
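To give a sense of what the procedure below produces, the following is a rough sketch of a FlowCollector specification that uses Loki in LokiStack mode. The API version and field names reflect recent Operator releases and the example values used throughout this chapter; treat them as assumptions and rely on the values offered by the web console form view for your installed version.

apiVersion: flows.netobserv.io/v1beta2   # assumption; use the version served by your installed Operator
kind: FlowCollector
metadata:
  name: cluster                          # the FlowCollector resource is named cluster
spec:
  namespace: netobserv                   # namespace used across the examples in this chapter
  agent:
    ebpf:
      sampling: 50                       # lower sampling values increase resource utilization
  loki:
    enable: true
    mode: LokiStack                      # automatically sets URLs, TLS, cluster roles, and role bindings
    lokiStack:
      name: loki                         # name of the LokiStack resource created earlier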
Prerequisites If you choose to use Loki, install the Loki Operator version 5.7+ . You must have cluster-admin privileges. One of the following supported architectures is required: amd64 , ppc64le , arm64 , or s390x . Any CPU supported by Red Hat Enterprise Linux (RHEL) 9. Must be configured with OVN-Kubernetes as the main network plugin, and optionally using secondary interfaces with Multus and SR-IOV. Note Additionally, this installation example uses the netobserv namespace, which is used across all components. You can optionally use a different namespace. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Network Observability Operator from the list of available Operators in the OperatorHub , and click Install . Select the checkbox Enable Operator recommended cluster monitoring on this Namespace . Navigate to Operators Installed Operators . Under Provided APIs for Network Observability, select the Flow Collector link. Navigate to the Flow Collector tab, and click Create FlowCollector . Make the following selections in the form view: spec.agent.ebpf.Sampling : Specify a sampling size for flows. Lower sampling sizes will have higher impact on resource utilization. For more information, see the "FlowCollector API reference", spec.agent.ebpf . If you are not using Loki, click Loki client settings and change Enable to False . The setting is True by default. If you are using Loki, set the following specifications: spec.loki.mode : Set this to the LokiStack mode, which automatically sets URLs, TLS, cluster roles and a cluster role binding, as well as the authToken value. Alternatively, the Manual mode allows more control over configuration of these settings. spec.loki.lokistack.name : Set this to the name of your LokiStack resource. In this documentation, loki is used. Optional: If you are in a large-scale environment, consider configuring the FlowCollector with Kafka for forwarding data in a more resilient, scalable way. See "Configuring the Flow Collector resource with Kafka storage" in the "Important Flow Collector configuration considerations" section. Optional: Configure other optional settings before the step of creating the FlowCollector . For example, if you choose not to use Loki, then you can configure exporting flows to Kafka or IPFIX. See "Export enriched network flow data to Kafka and IPFIX" and more in the "Important Flow Collector configuration considerations" section. Click Create . Verification To confirm this was successful, when you navigate to Observe you should see Network Traffic listed in the options. In the absence of Application Traffic within the OpenShift Container Platform cluster, default filters might show that there are "No results", which results in no visual flow. Beside the filter selections, select Clear all filters to see the flow. 3.4. Enabling multi-tenancy in Network Observability Multi-tenancy in the Network Observability Operator allows and restricts individual user access, or group access, to the flows stored in Loki and or Prometheus. Access is enabled for project administrators. Project administrators who have limited access to some namespaces can access flows for only those namespaces. For Developers, multi-tenancy is available for both Loki and Prometheus but requires different access rights. Prerequisite If you are using Loki, you have installed at least Loki Operator version 5.7 . You must be logged in as a project administrator. 
Procedure For per-tenant access, you must have the netobserv-reader cluster role and the netobserv-metrics-reader namespace role to use the developer perspective. Run the following commands for this level of access: USD oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name> USD oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace> For cluster-wide access, non-cluster-administrators must have the netobserv-reader , cluster-monitoring-view , and netobserv-metrics-reader cluster roles. In this scenario, you can use either the admin perspective or the developer perspective. Run the following commands for this level of access: USD oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name> USD oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name> USD oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name> 3.5. Important Flow Collector configuration considerations Once you create the FlowCollector instance, you can reconfigure it, but the pods are terminated and recreated again, which can be disruptive. Therefore, you can consider configuring the following options when creating the FlowCollector for the first time: Configuring the Flow Collector resource with Kafka Export enriched network flow data to Kafka or IPFIX Configuring monitoring for SR-IOV interface traffic Working with conversation tracking Working with DNS tracking Working with packet drops Additional resources For more general information about Flow Collector specifications and the Network Observability Operator architecture and resource use, see the following resources: Flow Collector API Reference Flow Collector sample resource Resource considerations Troubleshooting Network Observability controller manager pod runs out of memory Network Observability architecture 3.5.1. Migrating removed stored versions of the FlowCollector CRD Network Observability Operator version 1.6 removes the old and deprecated v1alpha1 version of the FlowCollector API. If you previously installed this version on your cluster, it might still be referenced in the storedVersion of the FlowCollector CRD, even if it is removed from the etcd store, which blocks the upgrade process. These references need to be manually removed. There are two options to remove stored versions: Use the Storage Version Migrator Operator. Uninstall and reinstall the Network Observability Operator, ensuring that the installation is in a clean state. Prerequisites You have an older version of the Operator installed, and you want to prepare your cluster to install the latest version of the Operator. Or you have attempted to install the Network Observability Operator 1.6 and run into the error: Failed risk of data loss updating "flowcollectors.flows.netobserv.io": new CRD removes version v1alpha1 that is listed as a stored version on the existing CRD . Procedure Verify that the old FlowCollector CRD version is still referenced in the storedVersion : USD oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}' If v1alpha1 appears in the list of results, proceed with Step a to use the Kubernetes Storage Version Migrator or Step b to uninstall and reinstall the CRD and the Operator. 
Option 1: Kubernetes Storage Version Migrator : Create a YAML to define the StorageVersionMigration object, for example migrate-flowcollector-v1alpha1.yaml : apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: migrate-flowcollector-v1alpha1 spec: resource: group: flows.netobserv.io resource: flowcollectors version: v1alpha1 Save the file. Apply the StorageVersionMigration by running the following command: USD oc apply -f migrate-flowcollector-v1alpha1.yaml Update the FlowCollector CRD to manually remove v1alpha1 from the storedVersion : USD oc edit crd flowcollectors.flows.netobserv.io Option 2: Reinstall : Save the Network Observability Operator 1.5 version of the FlowCollector CR to a file, for example flowcollector-1.5.yaml . USD oc get flowcollector cluster -o yaml > flowcollector-1.5.yaml Follow the steps in "Uninstalling the Network Observability Operator", which uninstalls the Operator and removes the existing FlowCollector CRD. Install the Network Observability Operator latest version, 1.6.0. Create the FlowCollector using backup that was saved in Step b. Verification Run the following command: USD oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}' The list of results should no longer show v1alpha1 and only show the latest version, v1beta1 . Additional resources Kubernetes Storage Version Migrator Operator 3.6. Installing Kafka (optional) The Kafka Operator is supported for large scale environments. Kafka provides high-throughput and low-latency data feeds for forwarding network flow data in a more resilient, scalable way. You can install the Kafka Operator as Red Hat AMQ Streams from the Operator Hub, just as the Loki Operator and Network Observability Operator were installed. Refer to "Configuring the FlowCollector resource with Kafka" to configure Kafka as a storage option. Note To uninstall Kafka, refer to the uninstallation process that corresponds with the method you used to install. Additional resources Configuring the FlowCollector resource with Kafka . 3.7. Uninstalling the Network Observability Operator You can uninstall the Network Observability Operator using the OpenShift Container Platform web console Operator Hub, working in the Operators Installed Operators area. Procedure Remove the FlowCollector custom resource. Click Flow Collector , which is to the Network Observability Operator in the Provided APIs column. Click the Options menu for the cluster and select Delete FlowCollector . Uninstall the Network Observability Operator. Navigate back to the Operators Installed Operators area. Click the Options menu to the Network Observability Operator and select Uninstall Operator . Home Projects and select openshift-netobserv-operator Navigate to Actions and select Delete Project Remove the FlowCollector custom resource definition (CRD). Navigate to Administration CustomResourceDefinitions . Look for FlowCollector and click the Options menu . Select Delete CustomResourceDefinition . Important The Loki Operator and Kafka remain if they were installed and must be removed separately. Additionally, you might have remaining data stored in an object store, and a persistent volume that must be removed. | [
"apiVersion: v1 kind: Secret metadata: name: loki-s3 namespace: netobserv 1 stringData: access_key_id: QUtJQUlPU0ZPRE5ON0VYQU1QTEUK access_key_secret: d0phbHJYVXRuRkVNSS9LN01ERU5HL2JQeFJmaUNZRVhBTVBMRUtFWQo= bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv 1 spec: size: 1x.small 2 storage: schemas: - version: v12 effectiveDate: '2022-06-01' secret: name: loki-s3 type: s3 storageClassName: gp3 3 tenants: mode: openshift-network",
"oc adm groups new cluster-admin",
"oc adm groups add-users cluster-admin <username>",
"oc adm policy add-cluster-role-to-group cluster-admin cluster-admin",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: tenants: mode: openshift-network 1 openshift: adminGroups: 2 - cluster-admin - custom-admin-group 3",
"spec: limits: global: ingestion: ingestionBurstSize: 40 ingestionRate: 20 maxGlobalStreamsPerTenant: 25000 queries: maxChunksPerQuery: 2000000 maxEntriesLimitPerQuery: 10000 maxQuerySeries: 3000",
"oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>",
"oc adm policy add-role-to-user netobserv-metrics-reader <user_group_or_name> -n <namespace>",
"oc adm policy add-cluster-role-to-user netobserv-reader <user_group_or_name>",
"oc adm policy add-cluster-role-to-user cluster-monitoring-view <user_group_or_name>",
"oc adm policy add-cluster-role-to-user netobserv-metrics-reader <user_group_or_name>",
"oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'",
"apiVersion: migration.k8s.io/v1alpha1 kind: StorageVersionMigration metadata: name: migrate-flowcollector-v1alpha1 spec: resource: group: flows.netobserv.io resource: flowcollectors version: v1alpha1",
"oc apply -f migrate-flowcollector-v1alpha1.yaml",
"oc edit crd flowcollectors.flows.netobserv.io",
"oc get flowcollector cluster -o yaml > flowcollector-1.5.yaml",
"oc get crd flowcollectors.flows.netobserv.io -ojsonpath='{.status.storedVersions}'"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/network_observability/installing-network-observability-operators |
Service binding | Service binding Red Hat build of Quarkus 3.15 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/service_binding/index |
Chapter 46. Jira | Chapter 46. Jira Both producer and consumer are supported The JIRA component interacts with the JIRA API by encapsulating Atlassian's REST Java Client for JIRA . It currently provides polling for new issues and new comments. It is also able to create new issues, add comments, change issues, add/remove watchers, add attachment and transition the state of an issue. Rather than webhooks, this endpoint relies on simple polling. Reasons include: Concern for reliability/stability The types of payloads we're polling aren't typically large (plus, paging is available in the API) The need to support apps running somewhere not publicly accessible where a webhook would fail Note that the JIRA API is fairly expansive. Therefore, this component could be easily expanded to provide additional interactions. 46.1. Dependencies When using jira with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jira-starter</artifactId> </dependency> 46.2. URI format The Jira type accepts the following operations: For consumers: newIssues: retrieve only new issues after the route is started newComments: retrieve only new comments after the route is started watchUpdates: retrieve only updated fields/issues based on provided jql For producers: addIssue: add an issue addComment: add a comment on a given issue attach: add an attachment on a given issue deleteIssue: delete a given issue updateIssue: update fields of a given issue transitionIssue: transition a status of a given issue watchers: add/remove watchers of a given issue As Jira is fully customizable, you must assure the fields IDs exists for the project and workflow, as they can change between different Jira servers. 46.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 46.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 46.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 46.4. Component Options The Jira component supports 12 options, which are listed below. Name Description Default Type delay (common) Time in milliseconds to elapse for the poll. 6000 Integer jiraUrl (common) Required The Jira server url, example: . 
String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean configuration (advanced) To use a shared base jira configuration. JiraConfiguration accessToken (security) (OAuth only) The access token generated by the Jira server. String consumerKey (security) (OAuth only) The consumer key from Jira settings. String password (security) (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. String privateKey (security) (OAuth only) The private key generated by the client to encrypt the conversation to the server. String username (security) (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set the username and OAuth token parameter, if they are both set, the username basic authentication takes precedence. String verificationCode (security) (OAuth only) The verification code from Jira generated in the first step of the authorization proccess. String 46.5. Endpoint Options The Jira endpoint is configured using URI syntax: with the following path and query parameters: 46.5.1. Path Parameters (1 parameters) Name Description Default Type type (common) Required Operation to perform. Consumers: NewIssues, NewComments. Producers: AddIssue, AttachFile, DeleteIssue, TransitionIssue, UpdateIssue, Watchers. See this class javadoc description for more information. Enum values: ADDCOMMENT ADDISSUE ATTACH DELETEISSUE NEWISSUES NEWCOMMENTS WATCHUPDATES UPDATEISSUE TRANSITIONISSUE WATCHERS ADDISSUELINK ADDWORKLOG FETCHISSUE FETCHCOMMENTS JiraType 46.5.2. Query Parameters (16 parameters) Name Description Default Type delay (common) Time in milliseconds to elapse for the poll. 6000 Integer jiraUrl (common) Required The Jira server url, example: . String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. 
By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean jql (consumer) JQL is the query language from JIRA which allows you to retrieve the data you want. For example jql=project=MyProject Where MyProject is the product key in Jira. It is important to use the RAW() and set the JQL inside it to prevent camel parsing it, example: RAW(project in (MYP, COM) AND resolution = Unresolved). String maxResults (consumer) Max number of issues to search for. 50 Integer sendOnlyUpdatedField (consumer) Indicator for sending only changed fields in exchange body or issue object. By default consumer sends only changed fields. true boolean watchedFields (consumer) Comma separated list of fields to watch for changes. Status,Priority are the defaults. Status,Priority String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean accessToken (security) (OAuth only) The access token generated by the Jira server. String consumerKey (security) (OAuth only) The consumer key from Jira settings. String password (security) (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. String privateKey (security) (OAuth only) The private key generated by the client to encrypt the conversation to the server. String username (security) (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set the username and OAuth token parameter, if they are both set, the username basic authentication takes precedence. String verificationCode (security) (OAuth only) The verification code from Jira generated in the first step of the authorization proccess. String 46.6. Client Factory You can bind the JiraRestClientFactory with name JiraRestClientFactory in the registry to have it automatically set in the Jira endpoint. 46.7. Authentication Camel-jira supports Basic Authentication and OAuth 3 legged authentication . We recommend to use OAuth whenever possible, as it provides the best security for your users and system. 46.7.1. Basic authentication requirements: An username and password 46.7.2. OAuth authentication requirements: Follow the tutorial in Jira OAuth documentation to generate the client private key, consumer key, verification code and access token. a private key, generated locally on your system. A verification code, generated by Jira server. 
The consumer key, set in the Jira server settings. An access token, generated by Jira server. 46.8. JQL The JQL URI option is used by both consumer endpoints. Theoretically, items like "project key", etc. could be URI options themselves. However, by requiring the use of JQL, the consumers become much more flexible and powerful. At the bare minimum, the consumers will require the following: One important thing to note is that the newIssues consumer will automatically set the JQL as: append ORDER BY key desc to your JQL prepend id > latestIssueId to retrieve issues added after the camel route was started. This is in order to optimize startup processing, rather than having to index every single issue in the project. Another note is that, similarly, the newComments consumer will have to index every single issue and comment in the project. Therefore, for large projects, it's vital to optimize the JQL expression as much as possible. For example, the JIRA Toolkit Plugin includes a "Number of comments" custom field - use '"Number of comments" > 0' in your query. Also try to minimize based on state (status=Open), increase the polling delay, etc. Example: 46.9. Operations See a list of required headers to set when using the Jira operations. The author field for the producers is automatically set to the authenticated user on the Jira side. If any required field is not set, then an IllegalArgumentException is thrown. There are operations that require an id for fields such as: issue type, priority, transition. Check the valid ids on your Jira project as they may differ between Jira installations and project workflows. 46.10. AddIssue Required: ProjectKey : The project key, example: CAMEL, HHH, MYP. IssueTypeId or IssueTypeName : The id of the issue type or the name of the issue type, you can see the valid list in http://jira_server/rest/api/2/issue/createmeta?projectKeys=SAMPLE_KEY . IssueSummary : The summary of the issue. Optional: IssueAssignee : the assignee user IssuePriorityId or IssuePriorityName : The priority of the issue, you can see the valid list in http://jira_server/rest/api/2/priority . IssueComponents : A list of strings with the valid component names. IssueWatchersAdd : A list of strings with the usernames to add to the watcher list. IssueDescription : The description of the issue. 46.11. AddComment Required: IssueKey : The issue key identifier. body of the exchange is the description. 46.12. Attach Only one file should be attached per invocation. Required: IssueKey : The issue key identifier. body of the exchange should be of type File 46.13. DeleteIssue Required: IssueKey : The issue key identifier. 46.14. TransitionIssue Required: IssueKey : The issue key identifier. IssueTransitionId : The issue transition id . body of the exchange is the description. 46.15. UpdateIssue IssueKey : The issue key identifier. IssueTypeId or IssueTypeName : The id of the issue type or the name of the issue type, you can see the valid list in http://jira_server/rest/api/2/issue/createmeta?projectKeys=SAMPLE_KEY . IssueSummary : The summary of the issue. IssueAssignee : the assignee user IssuePriorityId or IssuePriorityName : The priority of the issue, you can see the valid list in http://jira_server/rest/api/2/priority . IssueComponents : A list of strings with the valid component names. IssueDescription : The description of the issue. 46.16. Watcher IssueKey : The issue key identifier. IssueWatchersAdd : A list of strings with the usernames to add to the watcher list.
IssueWatchersRemove : A list of strings with the usernames to remove from the watcher list. 46.17. WatchUpdates (consumer) watchedFields Comma separated list of fields to watch for changes i.e Status,Priority,Assignee,Components etc. sendOnlyUpdatedField By default only changed field is send as the body. All messages also contain following headers that add additional info about the change: issueKey : Key of the updated issue changed : name of the updated field (i.e Status) watchedIssues : list of all issue keys that are watched in the time of update 46.18. Spring Boot Auto-Configuration The component supports 13 options, which are listed below. Name Description Default Type camel.component.jira.access-token (OAuth only) The access token generated by the Jira server. String camel.component.jira.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.jira.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.jira.configuration To use a shared base jira configuration. The option is a org.apache.camel.component.jira.JiraConfiguration type. JiraConfiguration camel.component.jira.consumer-key (OAuth only) The consumer key from Jira settings. String camel.component.jira.delay Time in milliseconds to elapse for the poll. 6000 Integer camel.component.jira.enabled Whether to enable auto configuration of the jira component. This is enabled by default. Boolean camel.component.jira.jira-url The Jira server url, example: http://my_jira.com:8081/ . String camel.component.jira.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.jira.password (Basic authentication only) The password to authenticate to the Jira server. Use only if username basic authentication is used. String camel.component.jira.private-key (OAuth only) The private key generated by the client to encrypt the conversation to the server. String camel.component.jira.username (Basic authentication only) The username to authenticate to the Jira server. Use only if OAuth is not enabled on the Jira server. Do not set the username and OAuth token parameter, if they are both set, the username basic authentication takes precedence. 
String camel.component.jira.verification-code (OAuth only) The verification code from Jira generated in the first step of the authorization process. String | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-jira-starter</artifactId> </dependency>",
"jira://type[?options]",
"jira:type",
"jira://[type]?[required options]&jql=project=[project key]",
"jira://[type]?[required options]&jql=RAW(project=[project key] AND status in (Open, \\\"Coding In Progress\\\") AND \\\"Number of comments\\\">0)\""
] | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-jira-component-starter |
Chapter 11. Monitoring application health by using health checks | Chapter 11. Monitoring application health by using health checks In software systems, components can become unhealthy due to transient issues such as temporary connectivity loss, configuration errors, or problems with external dependencies. OpenShift Container Platform applications have a number of options to detect and handle unhealthy containers. 11.1. Understanding health checks A health check periodically performs diagnostics on a running container using any combination of the readiness, liveness, and startup health checks. You can include one or more probes in the specification for the pod that contains the container which you want to perform the health checks. Note If you want to add or edit health checks in an existing pod, you must edit the pod DeploymentConfig object or use the Developer perspective in the web console. You cannot use the CLI to add or edit health checks for an existing pod. Readiness probe A readiness probe determines if a container is ready to accept service requests. If the readiness probe fails for a container, the kubelet removes the pod from the list of available service endpoints. After a failure, the probe continues to examine the pod. If the pod becomes available, the kubelet adds the pod to the list of available service endpoints. Liveness health check A liveness probe determines if a container is still running. If the liveness probe fails due to a condition such as a deadlock, the kubelet kills the container. The pod then responds based on its restart policy. For example, a liveness probe on a pod with a restartPolicy of Always or OnFailure kills and restarts the container. Startup probe A startup probe indicates whether the application within a container is started. All other probes are disabled until the startup succeeds. If the startup probe does not succeed within a specified time period, the kubelet kills the container, and the container is subject to the pod restartPolicy . Some applications can require additional startup time on their first initialization. You can use a startup probe with a liveness or readiness probe to delay that probe long enough to handle lengthy start-up time using the failureThreshold and periodSeconds parameters. For example, you can add a startup probe, with a failureThreshold of 30 failures and a periodSeconds of 10 seconds (30 * 10s = 300s) for a maximum of 5 minutes, to a liveness probe. After the startup probe succeeds the first time, the liveness probe takes over. You can configure liveness, readiness, and startup probes with any of the following types of tests: HTTP GET : When using an HTTP GET test, the test determines the healthiness of the container by using a web hook. The test is successful if the HTTP response code is between 200 and 399 . You can use an HTTP GET test with applications that return HTTP status codes when completely initialized. Container Command: When using a container command test, the probe executes a command inside the container. The probe is successful if the test exits with a 0 status. TCP socket: When using a TCP socket test, the probe attempts to open a socket to the container. The container is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete. 
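The probes and the tuning fields described next are part of the standard pod specification, so you can inspect the full schema that your cluster supports directly from the command line, for example:

oc explain pod.spec.containers.livenessProbe
oc explain pod.spec.containers.readinessProbe.httpGet

The exact fields listed depend on the cluster version.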
You can configure several fields to control the behavior of a probe: initialDelaySeconds : The time, in seconds, after the container starts before the probe can be scheduled. The default is 0. periodSeconds : The delay, in seconds, between performing probes. The default is 10 . This value must be greater than timeoutSeconds . timeoutSeconds : The number of seconds of inactivity after which the probe times out and the container is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . successThreshold : The number of times that the probe must report success after a failure to reset the container status to successful. The value must be 1 for a liveness probe. The default is 1 . failureThreshold : The number of times that the probe is allowed to fail. The default is 3. After the specified attempts: for a liveness probe, the container is restarted for a readiness probe, the pod is marked Unready for a startup probe, the container is killed and is subject to the pod's restartPolicy Example probes The following are samples of different probes as they would appear in an object specification. Sample readiness probe with a container command readiness probe in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application # ... spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy # ... 1 The container name. 2 The container image to deploy. 3 A readiness probe. 4 A container command test. 5 The commands to execute on the container. Sample container command startup probe and liveness probe with container command tests in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application # ... spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11 # ... 1 The container name. 2 Specify the container image to deploy. 3 A liveness probe. 4 An HTTP GET test. 5 The internet scheme: HTTP or HTTPS . The default value is HTTP . 6 The port on which the container is listening. 7 A startup probe. 8 An HTTP GET test. 9 The port on which the container is listening. 10 The number of times to try the probe after a failure. 11 The number of seconds to perform the probe. Sample liveness probe with a container command test that uses a timeout in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application # ... spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8 # ... 1 The container name. 2 Specify the container image to deploy. 3 The liveness probe. 4 The type of probe, here a container command probe. 5 The command line to execute inside the container. 6 How often in seconds to perform the probe. 7 The number of consecutive successes needed to show success after a failure. 8 The number of times to try the probe after a failure. Sample readiness probe and liveness probe with a TCP socket test in a deployment kind: Deployment apiVersion: apps/v1 metadata: labels: test: health-check name: my-application spec: # ... 
template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 # ... 1 The readiness probe. 2 The liveness probe. 11.2. Configuring health checks using the CLI To configure readiness, liveness, and startup probes, add one or more probes to the specification for the pod that contains the container which you want to perform the health checks Note If you want to add or edit health checks in an existing pod, you must edit the pod DeploymentConfig object or use the Developer perspective in the web console. You cannot use the CLI to add or edit health checks for an existing pod. Procedure To add probes for a container: Create a Pod object to add one or more probes: apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19 1 Specify the container name. 2 Specify the container image to deploy. 3 Optional: Create a Liveness probe. 4 Specify a test to perform, here a TCP Socket test. 5 Specify the port on which the container is listening. 6 Specify the time, in seconds, after the container starts before the probe can be scheduled. 7 Specify the number of seconds to perform the probe. The default is 10 . This value must be greater than timeoutSeconds . 8 Specify the number of seconds of inactivity after which the probe is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . 9 Optional: Create a Readiness probe. 10 Specify the type of test to perform, here an HTTP test. 11 Specify a host IP address. When host is not defined, the PodIP is used. 12 Specify HTTP or HTTPS . When scheme is not defined, the HTTP scheme is used. 13 Specify the port on which the container is listening. 14 Optional: Create a Startup probe. 15 Specify the type of test to perform, here an Container Execution probe. 16 Specify the commands to execute on the container. 17 Specify the number of times to try the probe after a failure. 18 Specify the number of seconds to perform the probe. The default is 10 . This value must be greater than timeoutSeconds . 19 Specify the number of seconds of inactivity after which the probe is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . Note If the initialDelaySeconds value is lower than the periodSeconds value, the first Readiness probe occurs at some point between the two periods due to an issue with timers. The timeoutSeconds value must be lower than the periodSeconds value. 
Create the Pod object: USD oc create -f <file-name>.yaml Verify the state of the health check pod: USD oc describe pod my-application Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image "registry.k8s.io/liveness" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image "registry.k8s.io/liveness" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container The following is the output of a failed probe that restarted a container: Sample Liveness check output with unhealthy container USD oc describe pod pod1 Example output .... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "registry.k8s.io/liveness" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "registry.k8s.io/liveness" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image "registry.k8s.io/liveness" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "registry.k8s.io/liveness" in 244.116568ms 11.3. Monitoring application health using the Developer perspective You can use the Developer perspective to add three types of health probes to your container to ensure that your application is healthy: Use the Readiness probe to check if the container is ready to handle requests. Use the Liveness probe to check if the container is running. Use the Startup probe to check if the application within the container has started. You can add health checks either while creating and deploying an application, or after you have deployed an application. 11.4. Editing health checks using the Developer perspective You can use the Topology view to edit health checks added to your application, modify them, or add more health checks. Prerequisites You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. You have added health checks to your application. Procedure In the Topology view, right-click your application and select Edit Health Checks . Alternatively, in the side panel, click the Actions drop-down list and select Edit Health Checks . In the Edit Health Checks page: To remove a previously added health probe, click the Remove icon adjoining it. To edit the parameters of an existing probe: Click the Edit Probe link to a previously added probe to see the parameters for the probe. 
Modify the parameters as required, and click the check mark to save your changes. To add a new health probe, in addition to existing health checks, click the add probe links. For example, to add a Liveness probe that checks if your container is running: Click Add Liveness Probe , to see a form containing the parameters for the probe. Edit the probe parameters as required. Note The Timeout value must be lower than the Period value. The Timeout default value is 1 . The Period default value is 10 . Click the check mark at the bottom of the form. The Liveness Probe Added message is displayed. Click Save to save your modifications and add the additional probes to your container. You are redirected to the Topology view. In the side panel, verify that the probes have been added by clicking on the deployed pod under the Pods section. In the Pod Details page, click the listed container in the Containers section. In the Container Details page, verify that the Liveness probe - HTTP Get 10.129.4.65:8080/ has been added to the container, in addition to the earlier existing probes. 11.5. Monitoring health check failures using the Developer perspective In case an application health check fails, you can use the Topology view to monitor these health check violations. Prerequisites You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. You have added health checks to your application. Procedure In the Topology view, click on the application node to see the side panel. Click the Observe tab to see the health check failures in the Events (Warning) section. Click the down arrow adjoining Events (Warning) to see the details of the health check failure. Additional resources For details on switching to the Developer perspective in the web console, see About the Developer perspective . For details on adding health checks while creating and deploying an application, see Advanced Options in the Creating applications using the Developer perspective section. | [
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8",
"kind: Deployment apiVersion: apps/v1 metadata: labels: test: health-check name: my-application spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3",
"apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19",
"oc create -f <file-name>.yaml",
"oc describe pod my-application",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"registry.k8s.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"registry.k8s.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container",
"oc describe pod pod1",
". Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"registry.k8s.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 244.116568ms"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/building_applications/application-health |
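The startup-probe pattern described in this chapter — giving a slow-starting container up to failureThreshold x periodSeconds to initialize before the liveness probe takes over — is typically combined with liveness and readiness probes in a single Deployment. The following manifest is a hedged sketch rather than part of the product documentation: the Deployment name, container image, and the /healthz endpoint on port 8080 are placeholder assumptions to adapt to your own application.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: slow-start-app                # hypothetical application name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: slow-start-app
      template:
        metadata:
          labels:
            app: slow-start-app
        spec:
          containers:
          - name: slow-start-app
            image: registry.example.com/slow-start-app:latest   # placeholder image
            ports:
            - containerPort: 8080
            startupProbe:                 # allows up to 30 x 10s = 300s for first start
              httpGet:
                path: /healthz
                port: 8080
              failureThreshold: 30
              periodSeconds: 10
            livenessProbe:                # takes over after the startup probe succeeds
              httpGet:
                path: /healthz
                port: 8080
              periodSeconds: 10
              timeoutSeconds: 1
              failureThreshold: 3
            readinessProbe:               # gates traffic from the Service endpoints
              httpGet:
                path: /healthz
                port: 8080
              periodSeconds: 10

As with the pod examples above, you would create the object with oc create -f <file-name>.yaml and inspect probe activity with oc describe pod <pod-name>.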
Policy APIs | Policy APIs OpenShift Container Platform 4.14 Reference guide for policy APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/policy_apis/index |
1.6. Data Centers | 1.6. Data Centers A data center is the highest level of abstraction in Red Hat Virtualization. A data center contains three types of information: Storage This includes storage types, storage domains, and connectivity information for storage domains. Storage is defined for a data center, and available to all clusters in the data center. All host clusters within a data center have access to the same storage domains. Logical networks This includes details such as network addresses, VLAN tags and STP support. Logical networks are defined for a data center, and are optionally implemented at the cluster level. Clusters Clusters are groups of hosts with compatible processor cores, either AMD or Intel processors. Clusters are migration domains; virtual machines can be live-migrated to any host within a cluster, and not to other clusters. One data center can hold multiple clusters, and each cluster can contain multiple hosts. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/data_centers2 |
Chapter 4. Managing connection types | Chapter 4. Managing connection types In Red Hat OpenShift AI, a connection comprises environment variables along with their respective values. Data scientists can add connections to project resources, such as workbenches and model servers. When a data scientist creates a connection, they start by selecting a connection type. Connection types are templates that include customizable fields and optional default values. Starting with a connection type decreases the time required by a user to add connections to data sources and sinks. OpenShift AI includes pre-installed connection types for S3-compatible object storage databases and URI-based repositories. As an OpenShift AI administrator, you can manage connection types for users in your organization as follows: View connection types and preview user connection forms Create a connection type Duplicate an existing connection type Edit a connection type Delete a custom connection type Enable or disable a connection type in a project, to control whether it is available as an option to users when they create a connection 4.1. Viewing connection types As an OpenShift AI administrator, you can view the connection types that are available in a project. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Connection types . The Connection types page appears, displaying the available connection types for the current project. Optionally, you can select the Options menu and then click Preview to see how the connection form associated with the connection type appears to your users. 4.2. Creating a connection type As an OpenShift AI administrator, you can create a connection type for users in your organization. You can create a new connection type as described in this procedure or you can create a copy of an existing connection type and edit it, as described in Duplicating a connection type . Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You know the environment variables that are required or optional for the connection type that you want to create. Procedure From the OpenShift AI dashboard, click Settings Connection types . The Connection types page appears, displaying the available connection types. Click Create connection type . In the Create connection type form, enter the following information: Enter a name for the connection type. A resource name is generated based on the name of the connection type. A resource name is the label for the underlying resource in OpenShift. Optionally, edit the default resource name. Note that you cannot change the resource name after you create the connection type. Optionally, provide a description of the connection type. Specify at least one category label. By default, the category labels are database, model registry, object storage, and URI. Optionally, you can create a new category by typing the new category label in the field. You can specify more than one category. The category label is for descriptive purposes only. It allows you and the users in your organization to sort the available connection types when viewing them in the OpenShift AI dashboard interface.
Check the Enable users in your organization to use this connection type when adding connections" option if you want the connection type to appear in the list of connections available to users, for example, when they configure a workbench, a model server, or a pipeline. Note that you can also enable/disable the connection type after you create it. For the Fields section, add the fields and section headings that you want your users to see in the form when they add a connection to a project resource (such as a workbench or a model server). Note that the connection name and description fields are included by default, so you do not need to add them. Optionally, select a model serving compatible type to automatically add the fields required to use its corresponding model serving method. Click Add field to add a field to prompt users to input information, and optionally assign default values to those fields. Click Add section heading to organize the fields under headings. Click Preview to open a preview of the connection form as it will appear to your users. Click Save . Verification On the Settings Connection types page, the new connection type appears in the list. 4.3. Duplicating a connection type As an OpenShift AI administrator, you can create a new connection type by duplicating an existing one, as described in this procedure, or you can create a new connection type as described in Creating a connection type . You might also want to duplicate a connection type if you want to create versions of a specific connection type. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Connection types . From the list of available connection types, find the connection type that you want to duplicate. Optionally, you can select the Options menu and then click Preview to see how the related connection form appears to your users. Click the Options menu , and then click Duplicate . The Create connection type form appears populated with the information from the connection type that you duplicated. Edit the form according to your use case. Click Preview to open a preview of the connection form as it will appear to your users and verify that the form appears as you expect. Click Save . Verification In the Settings Connection types page, the duplicated connection type appears in the list. 4.4. Editing a connection type As an OpenShift AI administrator, you can edit a connection type for users in your organization. Note that you cannot edit the connection types that are pre-installed with OpenShift AI. Instead, you have the option of duplicating a pre-installed connection type, as described in Duplicating a connection type . When you edit a connection type, your edits do not apply to any existing connections that users previously created. If you want to keep track of versions of this connection type, consider duplicating it instead of editing it. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. The connection type must exist and must not be a pre-installed connection type (which you are unable to edit). Procedure From the OpenShift AI dashboard, click Settings Connection types . From the list of available connection types, find the connection type that you want to edit. Click the Options menu , and then click Edit . The Edit connection type form appears. Edit the form fields and sections. 
Click Preview to open a preview of the connection form as it will appear to your users and verify that the form appears as you expect. Click Save . Verification In the Settings Connection types page, the duplicated connection type appears in the list. 4.5. Enabling a connection type As an OpenShift AI administrator, you can enable or disable a connection type to control whether it is available as an option to your users when they create a connection. Note that if you disable a connection type, any existing connections that your users created based on that connection type are not effected. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. The connection type that you want to enable exists in your project, either pre-installed or created by a user with administrator privileges. Procedure From the OpenShift AI dashboard, click Settings Connection types . From the list of available connection types, find the connection type that you want to enable or disable. On the row containing the connection type, click the toggle in the Enable column. Verification If you enabled a connection type, it is available for selection when a user adds a connection to a project resource (for example, a workbench or model server). If you disabled a connection type, it does not show in the list of available connection types when a user adds a connection to a project resource. 4.6. Deleting a connection type As an OpenShift AI administrator, you can delete a connection type that you or another administrator created. Note that you cannot delete the connection types that are pre-installed with OpenShift AI. Instead, you have the option of disabling them so that they are not visible to your users, as described in Enabling a connection type . Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. The connection type must exist and must not be a pre-installed connection type (which you are unable to delete). Procedure From the OpenShift AI dashboard, click Settings Connection types . From the list of available connection types, find the connection type that you want to delete. Optionally, you can select the Options menu and then click Preview to see how the related connection form appears to your users. Click the Options menu , and then click Delete . In the Delete connection type? form, type the name of the connection type that you want to delete and then click Delete . Verification In the Settings Connection types page, the connection type no longer appears in the list. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/managing_resources/managing-connection-types |
Chapter 8. Exposing custom application metrics for autoscaling | Chapter 8. Exposing custom application metrics for autoscaling You can export custom application metrics for the horizontal pod autoscaler. Important Prometheus Adapter is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 8.1. Exposing custom application metrics for horizontal pod autoscaling You can use the prometheus-adapter resource to expose custom application metrics for the horizontal pod autoscaler. Prerequisites You have a custom Prometheus instance installed as a Prometheus pod managed by a deployment or StatefulSet object but not by a Prometheus custom resource (CR). You have installed the custom Prometheus instance in a user-defined custom-prometheus project. Important Custom Prometheus instances and the Prometheus Operator installed through Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring if it is enabled. Therefore, custom Prometheus instances that are installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform. You have deployed an application and a service in a user-defined project. In this example, it is presumed that the application and its service monitor were installed in a user-defined custom-prometheus project. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file for your configuration. In this example, the file is called deploy.yaml . 
Add configuration details for creating the service account, roles, and role bindings for prometheus-adapter : kind: ServiceAccount apiVersion: v1 metadata: name: custom-metrics-apiserver namespace: custom-prometheus --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: custom-metrics-server-resources rules: - apiGroups: - custom.metrics.k8s.io resources: ["*"] verbs: ["*"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: custom-metrics-resource-reader rules: - apiGroups: - "" resources: - namespaces - pods - services verbs: - get - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: custom-metrics:system:auth-delegator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegator subjects: - kind: ServiceAccount name: custom-metrics-apiserver namespace: custom-prometheus --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: custom-metrics-auth-reader namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount name: custom-metrics-apiserver namespace: custom-prometheus --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: custom-metrics-resource-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: custom-metrics-resource-reader subjects: - kind: ServiceAccount name: custom-metrics-apiserver namespace: custom-prometheus --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: hpa-controller-custom-metrics roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: custom-metrics-server-resources subjects: - kind: ServiceAccount name: horizontal-pod-autoscaler namespace: kube-system --- Add configuration details for the custom metrics for prometheus-adapter : apiVersion: v1 kind: ConfigMap metadata: name: adapter-config namespace: custom-prometheus data: config.yaml: | rules: - seriesQuery: 'http_requests_total{namespace!="",pod!=""}' 1 resources: overrides: namespace: {resource: "namespace"} pod: {resource: "pod"} service: {resource: "service"} name: matches: "^(.*)_total" as: "USD{1}_per_second" 2 metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)' --- 1 Specifies the chosen metric to be the number of HTTP requests. 2 Specifies the frequency for the metric. 
Add configuration details for registering prometheus-adapter as an API service: apiVersion: v1 kind: Service metadata: annotations: service.beta.openshift.io/serving-cert-secret-name: prometheus-adapter-tls labels: name: prometheus-adapter name: prometheus-adapter namespace: custom-prometheus spec: ports: - name: https port: 443 targetPort: 6443 selector: app: prometheus-adapter type: ClusterIP --- apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.custom.metrics.k8s.io spec: service: name: prometheus-adapter namespace: custom-prometheus group: custom.metrics.k8s.io version: v1beta1 insecureSkipTLSVerify: true groupPriorityMinimum: 100 versionPriority: 100 --- List the Prometheus Adapter image: USD oc get -n openshift-monitoring deploy/prometheus-adapter -o jsonpath="{..image}" Add configuration details for deploying prometheus-adapter : apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-adapter name: prometheus-adapter namespace: custom-prometheus spec: replicas: 1 selector: matchLabels: app: prometheus-adapter template: metadata: labels: app: prometheus-adapter name: prometheus-adapter spec: serviceAccountName: custom-metrics-apiserver containers: - name: prometheus-adapter image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a46915a206cd7d97f240687c618dd59e8848fcc3a0f51e281f3384153a12c3e0 1 args: - --secure-port=6443 - --tls-cert-file=/var/run/serving-cert/tls.crt - --tls-private-key-file=/var/run/serving-cert/tls.key - --logtostderr=true - --prometheus-url=http://prometheus-operated.default.svc:9090/ - --metrics-relist-interval=1m - --v=4 - --config=/etc/adapter/config.yaml ports: - containerPort: 6443 volumeMounts: - mountPath: /var/run/serving-cert name: volume-serving-cert readOnly: true - mountPath: /etc/adapter/ name: config readOnly: true - mountPath: /tmp name: tmp-vol volumes: - name: volume-serving-cert secret: secretName: prometheus-adapter-tls - name: config configMap: name: adapter-config - name: tmp-vol emptyDir: {} 1 Specifies the Prometheus Adapter image found in the step. Apply the configuration to the cluster: USD oc apply -f deploy.yaml Example output serviceaccount/custom-metrics-apiserver created clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created configmap/adapter-config created service/prometheus-adapter created apiservice.apiregistration.k8s.io/v1.custom.metrics.k8s.io created deployment.apps/prometheus-adapter created Verify that the prometheus-adapter pod in your user-defined project is in a Running state. In this example the project is custom-prometheus : USD oc -n custom-prometheus get pods prometheus-adapter-<string> The metrics for the application are now exposed and they can be used to configure horizontal pod autoscaling. Additional resources See the horizontal pod autoscaling documentation See the Kubernetes documentation on horizontal pod autoscaler 8.2. steps Troubleshooting monitoring issues | [
"kind: ServiceAccount apiVersion: v1 metadata: name: custom-metrics-apiserver namespace: custom-prometheus --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: custom-metrics-server-resources rules: - apiGroups: - custom.metrics.k8s.io resources: [\"*\"] verbs: [\"*\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: custom-metrics-resource-reader rules: - apiGroups: - \"\" resources: - namespaces - pods - services verbs: - get - list --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: custom-metrics:system:auth-delegator roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:auth-delegator subjects: - kind: ServiceAccount name: custom-metrics-apiserver namespace: custom-prometheus --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: custom-metrics-auth-reader namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount name: custom-metrics-apiserver namespace: custom-prometheus --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: custom-metrics-resource-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: custom-metrics-resource-reader subjects: - kind: ServiceAccount name: custom-metrics-apiserver namespace: custom-prometheus --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: hpa-controller-custom-metrics roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: custom-metrics-server-resources subjects: - kind: ServiceAccount name: horizontal-pod-autoscaler namespace: kube-system ---",
"apiVersion: v1 kind: ConfigMap metadata: name: adapter-config namespace: custom-prometheus data: config.yaml: | rules: - seriesQuery: 'http_requests_total{namespace!=\"\",pod!=\"\"}' 1 resources: overrides: namespace: {resource: \"namespace\"} pod: {resource: \"pod\"} service: {resource: \"service\"} name: matches: \"^(.*)_total\" as: \"USD{1}_per_second\" 2 metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)' ---",
"apiVersion: v1 kind: Service metadata: annotations: service.beta.openshift.io/serving-cert-secret-name: prometheus-adapter-tls labels: name: prometheus-adapter name: prometheus-adapter namespace: custom-prometheus spec: ports: - name: https port: 443 targetPort: 6443 selector: app: prometheus-adapter type: ClusterIP --- apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.custom.metrics.k8s.io spec: service: name: prometheus-adapter namespace: custom-prometheus group: custom.metrics.k8s.io version: v1beta1 insecureSkipTLSVerify: true groupPriorityMinimum: 100 versionPriority: 100 ---",
"oc get -n openshift-monitoring deploy/prometheus-adapter -o jsonpath=\"{..image}\"",
"apiVersion: apps/v1 kind: Deployment metadata: labels: app: prometheus-adapter name: prometheus-adapter namespace: custom-prometheus spec: replicas: 1 selector: matchLabels: app: prometheus-adapter template: metadata: labels: app: prometheus-adapter name: prometheus-adapter spec: serviceAccountName: custom-metrics-apiserver containers: - name: prometheus-adapter image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a46915a206cd7d97f240687c618dd59e8848fcc3a0f51e281f3384153a12c3e0 1 args: - --secure-port=6443 - --tls-cert-file=/var/run/serving-cert/tls.crt - --tls-private-key-file=/var/run/serving-cert/tls.key - --logtostderr=true - --prometheus-url=http://prometheus-operated.default.svc:9090/ - --metrics-relist-interval=1m - --v=4 - --config=/etc/adapter/config.yaml ports: - containerPort: 6443 volumeMounts: - mountPath: /var/run/serving-cert name: volume-serving-cert readOnly: true - mountPath: /etc/adapter/ name: config readOnly: true - mountPath: /tmp name: tmp-vol volumes: - name: volume-serving-cert secret: secretName: prometheus-adapter-tls - name: config configMap: name: adapter-config - name: tmp-vol emptyDir: {}",
"oc apply -f deploy.yaml",
"serviceaccount/custom-metrics-apiserver created clusterrole.rbac.authorization.k8s.io/custom-metrics-server-resources created clusterrole.rbac.authorization.k8s.io/custom-metrics-resource-reader created clusterrolebinding.rbac.authorization.k8s.io/custom-metrics:system:auth-delegator created rolebinding.rbac.authorization.k8s.io/custom-metrics-auth-reader created clusterrolebinding.rbac.authorization.k8s.io/custom-metrics-resource-reader created clusterrolebinding.rbac.authorization.k8s.io/hpa-controller-custom-metrics created configmap/adapter-config created service/prometheus-adapter created apiservice.apiregistration.k8s.io/v1.custom.metrics.k8s.io created deployment.apps/prometheus-adapter created",
"oc -n custom-prometheus get pods prometheus-adapter-<string>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/monitoring/exposing-custom-application-metrics-for-autoscaling |
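With prometheus-adapter registered as the custom.metrics.k8s.io API service, the exported metric can drive a horizontal pod autoscaler. Because the adapter rule above renames http_requests_total to http_requests_per_second, that is the metric name the autoscaler queries. The manifest below is a minimal sketch under stated assumptions, not a tested configuration: the Deployment name my-app and the target of 0.5 requests per second per pod are placeholders, and it uses the autoscaling/v2beta2 API available in this OpenShift Container Platform release (newer releases also accept autoscaling/v2).

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app-hpa                    # hypothetical name
      namespace: custom-prometheus
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app                      # assumed deployment serving http_requests_total
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Pods
        pods:
          metric:
            name: http_requests_per_second    # name produced by the adapter rule
          target:
            type: AverageValue
            averageValue: "500m"              # scale up above 0.5 requests/second per pod

Before creating the autoscaler, you can confirm that the adapter is serving the metric: oc get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/custom-prometheus/pods/*/http_requests_per_second"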
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Please let us know how we could make it better. To do so: Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.2_release_notes/proc_providing-feedback-on-red-hat-documentation |